
Commit cb1c0b2

fixed typos
1 parent 91a92ca commit cb1c0b2

File tree

1 file changed (+18, -18 lines)


notebooks/dynamic-shapes.ipynb

Lines changed: 18 additions & 18 deletions
@@ -7,7 +7,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Copyright 2020 NVIDIA Corporation. All Rights Reserved.\n",
+"# Copyright 2022 NVIDIA Corporation. All Rights Reserved.\n",
 "#\n",
 "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
 "# you may not use this file except in compliance with the License.\n",
@@ -36,14 +36,14 @@
 "id": "73703695",
 "metadata": {},
 "source": [
-"Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into an module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extention and compiles modules that integrate into the JIT runtime seamlessly. After compilation using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.\n",
+"Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.\n",
 "\n",
-"We highly encorage users to use our NVIDIA's [PyTorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) to run this notebook. It comes packaged with a host of NVIDIA libraries and optimizations to widely used third party libraries. This container is tested and updated on a monthly cadence!\n",
+"We highly encourage users to run this notebook using NVIDIA's [PyTorch container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch). It comes packaged with a host of NVIDIA libraries and optimizations to widely used third-party libraries. In addition, this container is tested and updated on a monthly cadence!\n",
 "\n",
 "This notebook has the following sections:\n",
-"1. [TL;DR Explanation](#1)\n",
-"1. [Setting up the model](#2)\n",
-"1. [Working with Dynamic shapes in Torch TRT](#3)"
+"1. TL;DR Explanation\n",
+"1. Setting up the model\n",
+"1. Working with Dynamic shapes in Torch TRT"
 ]
 },
 {
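
The hunk above describes the explicit ahead-of-time compile step and the compile-time precision settings. For reference, here is a minimal sketch of that workflow, assuming the `torch_tensorrt` package, a CUDA GPU, and a small stand-in convolutional model (none of which are defined in this diff):

```python
# A minimal sketch of the AOT workflow described in the hunk above.
# torch_tensorrt, a CUDA GPU, and this toy model are assumptions.
import torch
import torch_tensorrt

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).cuda().eval()

# Explicit compile step: the module is converted into one backed by a
# TensorRT engine; enabled_precisions lets TensorRT use FP16 kernels.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float, torch.half},
)

# Using the compiled module feels the same as running the original one.
with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
    print(out.shape)
```
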
@@ -633,7 +633,7 @@
 "id": "21402d53",
 "metadata": {},
 "source": [
-"Let's test our util functions on the model we have set up, starting with simple predictions"
+"Let's test our util functions on the model we have set up, starting with simple predictions."
 ]
 },
 {
@@ -820,19 +820,19 @@
 "source": [
 "---\n",
 "## Working with Dynamic shapes in Torch TRT\n",
-"\n",
-"Enabling \"Dynamic Shaped\" tensors to be used is essentially enabling the ability to defer defining the shape of tensors until runetime. Torch TensorRT simply leverages TensorRT's Dynamic shape support. You can read more about TensorRT's implementation in the [TensorRT Documentation](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work_dynamic_shapes).\n",
-"\n",
+" \n",
+"Enabling \"Dynamic Shaped\" tensors to be used is essentially enabling the ability to defer defining the shape of tensors until run-time. Torch TensorRT simply leverages TensorRT's Dynamic shape support. You can read more about TensorRT's implementation in the [TensorRT Documentation](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work_dynamic_shapes).\n",
+" \n",
 "#### How can you use this feature?\n",
-"\n",
+" \n",
 "To make use of dynamic shapes, you need to provide three shapes:\n",
 "* `min_shape`: The minimum size of the tensor considered for optimizations.\n",
-"* `opt_shape`: The optimizations will be done with an effort to maximize performance for this shape.\n",
-"* `min_shape`: The maximum size of the tensor considered for optimizations.\n",
-"\n",
-"Generally, users can expect best performance within the specified ranges. Performance for other shapes may be be lower for other shapes (depending on the model ops and GPU used)\n",
-"\n",
-"In the following example, we will showcase varing batch size, which is the zeroth dimension of our input tensors. As Convolution operations require that the channel dimension be a build-time constant, we won't be changing sizes of other channels in this example, but for models which contain ops conducive to changes in other channels, this functionality can be freely used."
+"* `opt_shape`: The optimizations will be done in an effort to maximize performance for this shape.\n",
+"* `max_shape`: The maximum size of the tensor considered for optimizations.\n",
+" \n",
+"Generally, users can expect the best performance within the specified ranges. Performance may be lower for other shapes (depending on the model ops and GPU used).\n",
+" \n",
+"In the following example, we will showcase varying batch sizes, which is the zeroth dimension of our input tensors. As Convolution operations require that the channel dimension be a build-time constant, we won't be changing the sizes of other channels in this example, but for models which contain ops conducive to changes in other channels, this functionality can be freely used."
 ]
 },
 {
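
The hunk above documents the three shape bounds needed for dynamic-shape compilation. Here is a minimal sketch of how those bounds are passed via `torch_tensorrt.Input`, again assuming the `torch_tensorrt` package, a CUDA GPU, and a stand-in model in place of the notebook's own (which is not shown in this diff):

```python
# A minimal sketch of the dynamic-shape inputs described in the cell above.
# torch_tensorrt, a CUDA GPU, and this toy model are assumptions.
import torch
import torch_tensorrt

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
).cuda().eval()

# min/opt/max shapes bound only the batch (zeroth) dimension; channels,
# height, and width stay fixed, matching the build-time-constant
# requirement for the convolution's channel dimension.
dynamic_input = torch_tensorrt.Input(
    min_shape=(1, 3, 224, 224),   # smallest batch considered for optimization
    opt_shape=(8, 3, 224, 224),   # shape the engine is tuned to run fastest at
    max_shape=(32, 3, 224, 224),  # largest batch the engine will accept
    dtype=torch.float32,
)

trt_model = torch_tensorrt.compile(
    model,
    inputs=[dynamic_input],
    enabled_precisions={torch.float},
)

# Any batch size within [1, 32] can now be fed to the same compiled module.
with torch.no_grad():
    for batch_size in (1, 8, 32):
        out = trt_model(torch.randn(batch_size, 3, 224, 224, device="cuda"))
        print(batch_size, tuple(out.shape))
```

Since the engine is tuned for `opt_shape`, batches near that size should see the best throughput, consistent with the note above about performance within the specified range.
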
@@ -1015,7 +1015,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.13"
+"version": "3.9.6"
 }
 },
 "nbformat": 4,
