## Pre-built wheels

Stable versions of Torch-TensorRT are published on PyPI:

```bash
pip install torch-tensorrt
```

Nightly versions of Torch-TensorRT are published on the PyTorch package index:

```bash
pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121
```

Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch), which includes all dependencies at the proper versions along with example notebooks.
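
For example, to launch the container (the tag below is illustrative; pick a current one from the NGC catalog):

```bash
# Start an NGC PyTorch container with GPU access; Torch-TensorRT comes preinstalled
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.04-py3
```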

## Building a docker container for Torch-TensorRT

We provide a `Dockerfile` in the `docker/` directory. It expects a PyTorch NGC container as a base, but can easily be modified to build on top of any container that provides PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the [release notes](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html).

Use the following command to build a Docker container:

```bash
docker build --build-arg BASE=<CONTAINER VERSION e.g. 21.11> -f docker/Dockerfile -t torch_tensorrt:latest .
```
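
You can then start an interactive session in the freshly built image (a sketch; mount your workspace as needed):

```bash
# Launch the built container with GPU access
docker run --gpus all -it torch_tensorrt:latest
```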

In the case of building on top of a custom base container, you first must determine the version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, it is likely the pre-cxx11 ABI, in which case you must modify `//docker/dist-build.sh` to not build the C++11 ABI version of Torch-TensorRT.
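
You can check which ABI your PyTorch build uses with a one-liner (`compiled_with_cxx11_abi()` returns `True` for the cxx11 ABI):

```bash
python3 -c "import torch; print(torch.compiled_with_cxx11_abi())"
```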

You can then build the container using the build command in the [docker README](docker/README.md#instructions).

## Compiling Torch-TensorRT

### Installing Dependencies

#### 0. Install Bazel

If you don't have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
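
For example, one way to install bazelisk (a sketch assuming Go is available; npm works too):

```sh
# Install bazelisk, which transparently fetches the right bazel version per project
go install github.com/bazelbuild/bazelisk@latest
export PATH=$PATH:$(go env GOPATH)/bin
bazelisk version
```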

Otherwise you can use the following instructions to install binaries: https://docs.bazel.build/versions/master/install.html

Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for that architecture), you can use the following instructions:

```sh
# Download the Bazel distribution archive and compile it from source
export BAZEL_VERSION=<VERSION>
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
```
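
The compiled binary should land in `output/bazel`; a quick follow-up to put it on your `PATH` (assuming the default output location):

```sh
# Make the freshly built bazel available and sanity-check it
export PATH=$(pwd)/output:$PATH
bazel --version
```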

You need to start by having CUDA installed on the system; LibTorch will automatically be pulled for you by bazel. Then you have two options.

#### 1. Building using cuDNN & TensorRT tarball distributions

> This is recommended, as it builds Torch-TensorRT hermetically and ensures any bugs are not caused by version issues

> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH`
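
For example (a sketch; substitute the paths where you extracted the tarballs):

```sh
# Put the tarball copies of TensorRT and cuDNN ahead of any system copies
export LD_LIBRARY_PATH=/path/to/TensorRT-<VERSION>/lib:/path/to/cudnn/lib:$LD_LIBRARY_PATH
```
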
1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
   - https://developer.nvidia.com/cudnn
   - https://developer.nvidia.com/tensorrt
2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose).
3. Compile using:

``` shell
bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
```
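
A hedged end-to-end sketch of steps 2 and 3 on x86_64 (the archive filenames are illustrative):

``` shell
# Step 2: place the downloaded archives where --distdir will look
cp ~/Downloads/TensorRT-*.tar.gz ~/Downloads/cudnn-*.tar.xz third_party/dist_dir/x86_64-linux-gnu/

# Step 3: hermetic build against those archives
bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/x86_64-linux-gnu
```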

#### 2. Building using locally installed cuDNN & TensorRT

> If you find bugs and you compiled using this method, please disclose that you used this method in the issue
> (an `ldd` dump would be nice too)

1. Install TensorRT, CUDA and cuDNN on the system before starting to compile.
2. In `WORKSPACE` comment out:

```py
# Downloaded distributions to use with --distdir
http_archive(
    name = "cudnn",
    urls = ["<URL>",],
    build_file = "@//third_party/cudnn/archive:BUILD",
    sha256 = "<TAR SHA256>",
    strip_prefix = "cuda"
)

http_archive(
    name = "tensorrt",
    urls = ["<URL>",],
    build_file = "@//third_party/tensorrt/archive:BUILD",
    sha256 = "<TAR SHA256>",
    strip_prefix = "TensorRT-<VERSION>"
)
```

and uncomment:

```py
# Locally installed dependencies
new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD"
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD"
)
```

3. Compile using:

``` shell
bazel build //:libtorchtrt --compilation_mode opt
```
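
Before building you can sanity-check that the dynamic linker can see the locally installed libraries (a hedged check; package layouts vary):

``` shell
# Confirm TensorRT and cuDNN are registered with the dynamic linker
ldconfig -p | grep -E "libnvinfer|libcudnn"
```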

### FX path (Python only) installation
If you plan to try the FX path (Python only) and would like to avoid the bazel build, follow the steps below.
``` shell
cd py && python3 setup.py install --fx-only
```
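
A quick smoke test of the result (assuming the FX frontend is exposed as `torch_tensorrt.fx`):

``` shell
# Hypothetical check: the import succeeding means the FX-only install worked
python3 -c "import torch_tensorrt.fx; print('FX path OK')"
```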

### Debug build

``` shell
bazel build //:libtorchtrt --compilation_mode=dbg
```

### Native compilation on NVIDIA Jetson AGX
We performed end-to-end testing on the Jetson platform using JetPack SDK 4.6.

``` shell
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
```

> Note: Please refer to the [installation](docs/tutorials/installation.html) instructions for prerequisites

A tarball with the include files and library can then be found in `bazel-bin`.
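
For example, to unpack it (the tarball name is an assumption; check `bazel-bin` for the exact file):

``` shell
# Extract headers and libraries from the bazel output tree
mkdir -p /tmp/torchtrt
tar -xzf bazel-bin/libtorchtrt.tar.gz -C /tmp/torchtrt
```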

### Running Torch-TensorRT on a JIT Graph

> Make sure to add LibTorch to your `LD_LIBRARY_PATH`:
> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib`

``` shell
bazel run //cpp/bin/torchtrtc -- $(realpath <PATH TO GRAPH>) out.ts <input-size>
```
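
For example, a hypothetical invocation compiling a TorchScript module saved as `model.ts` for a single 224x224 image input:

``` shell
# Compile model.ts to a TensorRT-accelerated TorchScript module out.ts
bazel run //cpp/bin/torchtrtc -- $(realpath model.ts) out.ts "(1,3,224,224)"
```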

## Compiling the Python Package

To compile the Python package for your local machine, just run `python3 setup.py install` in the `//py` directory.
To build wheel files for different Python versions, first build the image from the Dockerfile in `//py` (a sketch follows), then run the wheel build script inside it.
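
A hedged sketch of building that image, assuming the tag `build_torch_tensorrt_wheel` that the run command below expects:

```
docker build -t build_torch_tensorrt_wheel py/
```

With the image built, launch the wheel builds: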

```
docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh
```

Python compilation expects the tarball-based compilation strategy from above.
| 165 | + |