> Ahead of Time (AOT) compiling for PyTorch JIT and FX

Torch-TensorRT is a compiler for PyTorch/TorchScript/FX, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript or FX program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different from running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.
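
To make that workflow concrete, here is a minimal sketch of the explicit compile step through the TorchScript frontend; the model path, input shape, and choice of FP16 are placeholder assumptions for illustration, not fixed by this README:

```cpp
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
  // Load a standard TorchScript module (path is a placeholder)
  auto ts_mod = torch::jit::load("model.ts");
  ts_mod.to(torch::kCUDA);
  ts_mod.eval();

  // Describe the expected input and pick an operating precision
  auto input = torch_tensorrt::Input({1, 3, 224, 224}, torch::kHalf);
  auto settings = torch_tensorrt::ts::CompileSpec({input});
  settings.enabled_precisions = {torch::kHalf};

  // Explicit AOT compile step; the result is still a torch::jit::Module
  auto trt_mod = torch_tensorrt::ts::compile(ts_mod, settings);

  // The compiled module can be saved and deployed like any TorchScript module
  trt_mod.save("trt_model.ts");
  return 0;
}
```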

Resources:

- [Documentation](https://nvidia.github.io/Torch-TensorRT/)
- [FX path Documentation](https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst)
- [Torch-TensorRT Explained in 2 minutes!](https://www.youtube.com/watch?v=TU5BMU6iYZ0&ab_channel=NVIDIADeveloper)
- [Comprehensive Discussion (GTC Event)](https://www.nvidia.com/en-us/on-demand/session/gtcfall21-a31107/)
- [Pre-built Docker Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch). To use this container, make an NGC account and sign in to NVIDIA's registry with an API key. Refer to [this guide](https://docs.nvidia.com/ngc/ngc-catalog-user-guide/index.html#registering-activating-ngc-account) for instructions.

## NVIDIA NGC Container

```cpp
#include "torch_tensorrt/torch_tensorrt.h"

...
// Set input datatypes. Allowed options torch::{kFloat, kHalf, kChar, kInt32, kBool}
// Size of input_dtypes should match number of inputs to the network.
// If input_dtypes is not set, default precision follows traditional PyT / TRT rules
auto input = torch_tensorrt::Input(dims, torch::kHalf);
```
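
The diff cuts the snippet off right after the input declaration. As a hedged continuation, assuming the TorchScript frontend API and that `ts_mod` and `in_tensor` are defined in the elided surrounding code, the rest of the example would look roughly like:

```cpp
// Build a compile spec from the input declared above
auto compile_settings = torch_tensorrt::ts::CompileSpec({input});
compile_settings.enabled_precisions = {torch::kHalf}; // FP16 execution
// Compile the module (ts_mod assumed from the elided code)
auto trt_mod = torch_tensorrt::ts::compile(ts_mod, compile_settings);
// Run like a normal TorchScript module (in_tensor assumed from the elided code)
auto results = trt_mod.forward({in_tensor});
```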

### In Torch-TensorRT?

Thanks for wanting to contribute! There are two main ways to handle supporting a new op: either write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, write a graph rewrite pass that replaces your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred because it means we do not need to maintain a large library of op converters. Also, look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
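
To illustrate the graph-rewrite route, here is a minimal sketch built on `torch::jit::SubgraphRewriter`; the op choice and patterns are hypothetical examples, not code from this repository. It lowers `aten::pow(x, 2)` to a multiply that existing converters already cover:

```cpp
#include <torch/csrc/jit/passes/subgraph_rewrite.h>

// Sketch of a lowering pass: replace x^2 with x * x so the resulting
// graph contains only ops that already have registered converters.
void ReplacePow2WithMul(std::shared_ptr<torch::jit::Graph>& graph) {
  std::string pow2_pattern = R"IR(
    graph(%x):
      %two : int = prim::Constant[value=2]()
      %out : Tensor = aten::pow(%x, %two)
      return (%out))IR";
  std::string mul_pattern = R"IR(
    graph(%x):
      %out : Tensor = aten::mul(%x, %x)
      return (%out))IR";

  torch::jit::SubgraphRewriter rewriter;
  rewriter.RegisterRewritePattern(pow2_pattern, mul_pattern);
  rewriter.runOnGraph(graph);
}
```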

### In my application?