
Commit 4e00ca4

Naren Dasan (narendasan) authored and committed
docs: Update installation documentation for 2.0
Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]>
1 parent 3e612c1 commit 4e00ca4

File tree: 2 files changed (+129, -165 lines)

docsrc/getting_started/installation.rst

Lines changed: 109 additions & 155 deletions
@@ -6,33 +6,52 @@ Installation
 Precompiled Binaries
 *********************
 
+Torch-TensorRT 2.x is centered primarily around Python. As such, precompiled releases can be found on pypi.org
+
 Dependencies
 ---------------
 
-You need to have either PyTorch or LibTorch installed based on if you are using Python or C++
-and you must have CUDA, cuDNN and TensorRT installed.
+You need to have CUDA, PyTorch, and TensorRT (python package is sufficient) installed to use Torch-TensorRT
 
-* https://www.pytorch.org
 * https://developer.nvidia.com/cuda
-* https://developer.nvidia.com/cudnn
-* https://developer.nvidia.com/tensorrt
+* https://pytorch.org
 
 
-Python Package
----------------
+Installing Torch-TensorRT
+---------------------------
 
 You can install the python package using
 
 .. code-block:: sh
 
-   pip3 install nvidia-pyindex
-   pip3 install nvidia-tensorrt
-   pip3 install torch-tensorrt==<VERSION> -f https://github.com/pytorch/TensorRT/releases/expanded_assets/<VERSION>
+   python -m pip install torch torch-tensorrt tensorrt
+
+Installing Torch-TensorRT for a specific CUDA version
+--------------------------------------------------------
+
+Similar to PyTorch, Torch-TensorRT has builds compiled for different versions of CUDA. These are distributed on PyTorch's package index.
+
+For example, for CUDA 11.8:
+
+.. code-block:: sh
+
+   python -m pip install torch torch-tensorrt tensorrt --extra-index-url https://download.pytorch.org/whl/cu118
+
+Installing Nightly Builds
+---------------------------
+
+Torch-TensorRT distributes nightly builds targeting the PyTorch nightlies. These can be installed from the PyTorch nightly package index (separated by CUDA version)
+
+.. code-block:: sh
+
+   python -m pip install --pre torch torch-tensorrt tensorrt --extra-index-url https://download.pytorch.org/whl/nightly/cu121
+
+
 
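A quick way to sanity-check any of the installs above is to compile a trivial module. A minimal sketch, assuming a CUDA-capable GPU and the 2.x ``torch_tensorrt.compile`` API (the ``Linear`` model here is purely illustrative):

.. code-block:: python

   import torch
   import torch_tensorrt

   # Tiny module, just to confirm the wheel can see CUDA and build an engine
   model = torch.nn.Linear(8, 4).eval().cuda()
   x = torch.randn(1, 8).cuda()

   trt_model = torch_tensorrt.compile(model, inputs=[x])
   print(trt_model(x).shape)  # expected: torch.Size([1, 4])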
 .. _bin-dist:
 
-C++ Binary Distribution
-------------------------
+C++ Precompiled Binaries (TorchScript Only)
+--------------------------------------------------
 
 Precompiled tarballs for releases are provided here: https://github.com/pytorch/TensorRT/releases
 
@@ -46,7 +65,7 @@ Compiling From Source
 Dependencies for Compilation
 -------------------------------
 
-Torch-TensorRT is built with Bazel, so begin by installing it.
+* Torch-TensorRT is built with **Bazel**, so begin by installing it.
 
 * The easiest way is to install bazelisk using the method of your choosing https://github.com/bazelbuild/bazelisk
 * Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html
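Once bazel (or bazelisk) is on the PATH, a quick check that the toolchain resolves; a minimal sketch, assuming the command is run from the source tree so bazelisk can pick up any pinned version:

.. code-block:: python

   import subprocess

   # bazelisk resolves the bazel version pinned by the repo (when one is pinned),
   # so a successful version query also confirms resolution works
   out = subprocess.run(["bazel", "--version"], capture_output=True, text=True)
   print(out.stdout.strip())  # e.g. "bazel 6.2.1"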
@@ -63,73 +82,77 @@ Torch-TensorRT is built with Bazel, so begin by installing it.
    cp output/bazel /usr/local/bin/
 
 
-You will also need to have CUDA installed on the system (or if running in a container, the system must have
+* You will also need to have **CUDA** installed on the system (or if running in a container, the system must have
 the CUDA driver installed and the container must have CUDA)
 
-The correct LibTorch version will be pulled down for you by bazel.
+* Specify your CUDA version here if not the version used in the branch being built: https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L46
 
-NOTE: For best compatability with official PyTorch, use torch==1.10.0+cuda113, TensorRT 8.0 and cuDNN 8.2 for CUDA 11.3 however Torch-TensorRT itself supports
-TensorRT and cuDNN for other CUDA versions for usecases such as using NVIDIA compiled distributions of PyTorch that use other versions of CUDA
-e.g. aarch64 or custom compiled version of PyTorch.
 
-.. _abis:
+* The correct **LibTorch** version will be pulled down for you by bazel.
 
-Choosing the Right ABI
-^^^^^^^^^^^^^^^^^^^^^^^^
+NOTE: By default bazel will pull the latest nightly from pytorch.org. For building main this is usually sufficient; however, if there is a specific PyTorch build you are targeting,
+edit these locations with updated URLs/paths:
 
-Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options
-which are incompatible with each other, pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
-the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) use the pre-cxx11-abi, most
-other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
-libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important you compile Torch-TensorRT
-using the correct ABI to function properly. Below is a table with general pairings of PyTorch distribution sources and the
-recommended commands:
+* https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L53C1-L53C1
 
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| PyTorch Source                                              | Recommended C++ Compilation Command                      | Recommended Python Compilation Command                            |
-+=============================================================+==========================================================+====================================================================+
-| PyTorch whl file from PyTorch.org                           | bazel build //:libtorchtrt -c opt --config pre_cxx11_abi | python3 setup.py bdist_wheel                                      |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| libtorch-shared-with-deps-*.zip from PyTorch.org            | bazel build //:libtorchtrt -c opt --config pre_cxx11_abi | python3 setup.py bdist_wheel                                      |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org  | bazel build //:libtorchtrt -c opt                        | python3 setup.py bdist_wheel --use-cxx11-abi                      |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| PyTorch preinstalled in an NGC container                    | bazel build //:libtorchtrt -c opt                        | python3 setup.py bdist_wheel --use-cxx11-abi                      |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| PyTorch from the NVIDIA Forums for Jetson                   | bazel build //:libtorchtrt -c opt                        | python3 setup.py bdist_wheel --jetpack-version 4.6 --use-cxx11-abi |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| PyTorch built from Source                                   | bazel build //:libtorchtrt -c opt                        | python3 setup.py bdist_wheel --use-cxx11-abi                      |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 
-NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file for both Python and C++ builds. See below for more information
+* **cuDNN and TensorRT** are not required to be installed on the system to build Torch-TensorRT; in fact, this is preferable to ensure reproducible builds. Download the tarballs
+  for cuDNN and TensorRT from https://developer.nvidia.com and update the paths in the WORKSPACE file here https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L71
+
+For example:
+
+.. code-block:: python
+
+   http_archive(
+       name = "cudnn",
+       build_file = "@//third_party/cudnn/archive:BUILD",
+       sha256 = "79d77a769c7e7175abc7b5c2ed5c494148c0618a864138722c887f95c623777c",
+       strip_prefix = "cudnn-linux-x86_64-8.8.1.3_cuda12-archive",
+       urls = [
+           #"https://developer.nvidia.com/downloads/compute/cudnn/secure/8.8.1/local_installers/12.0/cudnn-linux-x86_64-8.8.1.3_cuda12-archive.tar.xz",
+           "file:///<ABSOLUTE PATH TO FILE>/cudnn-linux-x86_64-8.8.1.3_cuda12-archive.tar.xz"
+       ],
+   )
 
-You then have two compilation options:
+   http_archive(
+       name = "tensorrt",
+       build_file = "@//third_party/tensorrt/archive:BUILD",
+       sha256 = "0f8157a5fc5329943b338b893591373350afa90ca81239cdadd7580cd1eba254",
+       strip_prefix = "TensorRT-8.6.1.6",
+       urls = [
+           #"https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/tars/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz",
+           "file:///<ABSOLUTE PATH TO FILE>/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz"
+       ],
+   )
 
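If you point ``urls`` at your own tarball, the ``sha256`` field above must match that file. A minimal sketch for computing the digest (the filename is the example one used above):

.. code-block:: python

   import hashlib

   # Stream in 1 MiB chunks so large archives need not fit in memory
   def sha256_of(path: str, chunk: int = 1 << 20) -> str:
       h = hashlib.sha256()
       with open(path, "rb") as f:
           while block := f.read(chunk):
               h.update(block)
       return h.hexdigest()

   print(sha256_of("TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz"))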
-.. _build-from-archive:
+If you have a local version of cuDNN and TensorRT installed, these can be used as well by commenting out the above lines and uncommenting the following lines: https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L114C1-L124C3
 
-**Building using cuDNN & TensorRT tarball distributions**
---------------------------------------------------------------
 
-This is recommended so as to build Torch-TensorRT hermetically and insures any compilation errors are not caused by version issues
+Building the Package
+---------------------
 
-Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your ``$LD_LIBRARY_PATH``
+Once the WORKSPACE has been configured properly, all that is required to build torch-tensorrt is the following command
 
-You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
-* https://developer.nvidia.com/cudnn
-* https://developer.nvidia.com/tensorrt
+.. code-block:: sh
 
-Place these files in a directory (the directories ``third_party/distdir/[x86_64-linux-gnu | aarch64-linux-gnu]`` exist for this purpose)
+   python -m pip install --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu121
 
-Then compile referencing the directory with the tarballs
+To build the wheel file
 
-If you get errors regarding the packages, check their sha256 hashes and make sure they match the ones listed in ``WORKSPACE``
+.. code-block:: sh
+
+   python -m pip wheel --no-deps --pre . --extra-index-url https://download.pytorch.org/whl/nightly/cu121 -w dist
+
+
+Building the C++ Library (TorchScript Only)
+--------------------------------------------
 
 Release Build
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: shell
 
-   bazel build //:libtorchtrt -c opt --distdir third_party/distdir/[x86_64-linux-gnu | aarch64-linux-gnu]
+   bazel build //:libtorchtrt -c opt
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
@@ -142,7 +165,7 @@ To build with debug symbols use the following command
 
 .. code-block:: shell
 
-   bazel build //:libtorchtrt -c dbg --distdir third_party/distdir/[x86_64-linux-gnu | aarch64-linux-gnu]
+   bazel build //:libtorchtrt -c dbg
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
@@ -153,93 +176,44 @@ To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
 
 .. code-block:: shell
 
-   bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt] --distdir third_party/distdir/[x86_64-linux-gnu | aarch64-linux-gnu]
-
-A tarball with the include files and library can then be found in ``bazel-bin``
-
-.. _build-from-local:
-
-**Building using locally installed cuDNN & TensorRT**
---------------------------------------------------------------
-
-If you encounter bugs and you compiled using this method please disclose that you used local sources in the issue (an ldd dump would be nice too)
-
-Install TensorRT, CUDA and cuDNN on the system before starting to compile.
-
-In WORKSPACE comment out:
-
-.. code-block:: python
-
-   # Downloaded distributions to use with --distdir
-   http_archive(
-       name="cudnn",
-       urls=[
-           "<URL>",
-       ],
-       build_file="@//third_party/cudnn/archive:BUILD",
-       sha256="<TAR SHA256>",
-       strip_prefix="cuda",
-   )
-
-   http_archive(
-       name="tensorrt",
-       urls=[
-           "<URL>",
-       ],
-       build_file="@//third_party/tensorrt/archive:BUILD",
-       sha256="<TAR SHA256>",
-       strip_prefix="TensorRT-<VERSION>",
-   )
-
-and uncomment
-
-.. code-block:: python
-
-   # Locally installed dependencies
-   new_local_repository(
-       name="cudnn", path="/usr/", build_file="@//third_party/cudnn/local:BUILD"
-   )
-
-   new_local_repository(
-       name="tensorrt", path="/usr/", build_file="@//third_party/tensorrt/local:BUILD"
-   )
-
-Release Build
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-Compile using:
-
-.. code-block:: shell
-
-   bazel build //:libtorchtrt -c opt
+   bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
 
-.. _build-from-local-debug:
-
-Debug Build
-^^^^^^^^^^^^
-
-To build with debug symbols use the following command
-
-.. code-block:: shell
-
-   bazel build //:libtorchtrt -c dbg
-
-
-A tarball with the include files and library can then be found in ``bazel-bin``
+.. _abis:
 
-Pre CXX11 ABI Build
+Choosing the Right ABI
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
+Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options,
+which are incompatible with each other: the pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
+the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) uses the pre-cxx11-abi, most
+other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
+libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important that you compile Torch-TensorRT
+using the correct ABI for it to function properly. Below is a table with general pairings of PyTorch distribution sources and the
+recommended commands:
 
-.. code-block:: shell
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| PyTorch Source                                              | Recommended Python Compilation Command                   | Recommended C++ Compilation Command                                |
++=============================================================+==========================================================+====================================================================+
+| PyTorch whl file from PyTorch.org                           | python -m pip install .                                  | bazel build //:libtorchtrt -c opt --config pre_cxx11_abi          |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| libtorch-shared-with-deps-*.zip from PyTorch.org            | python -m pip install .                                  | bazel build //:libtorchtrt -c opt --config pre_cxx11_abi          |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org  | python setup.py bdist_wheel --use-cxx11-abi              | bazel build //:libtorchtrt -c opt                                 |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| PyTorch preinstalled in an NGC container                    | python setup.py bdist_wheel --use-cxx11-abi              | bazel build //:libtorchtrt -c opt                                 |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| PyTorch from the NVIDIA Forums for Jetson                   | python setup.py bdist_wheel --use-cxx11-abi              | bazel build //:libtorchtrt -c opt                                 |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
+| PyTorch built from Source                                   | python setup.py bdist_wheel --use-cxx11-abi              | bazel build //:libtorchtrt -c opt                                 |
++-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 
-   bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
+NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file for both Python and C++ builds. See below for more information
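To pick the right row in the table above it helps to know which ABI your installed PyTorch uses; a minimal sketch using ``torch.compiled_with_cxx11_abi()``, which reports ``True`` for cxx11-abi builds and ``False`` for pre-cxx11-abi builds:

.. code-block:: python

   import torch

   # True  -> cxx11-abi     (e.g. NGC containers, source builds)
   # False -> pre-cxx11-abi (e.g. wheels from pypi / pytorch.org)
   print(torch.compiled_with_cxx11_abi())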
 
-**Building with CMake**
------------------------
+**Building with CMake** (TorchScript Only)
+-------------------------------------------
 
 It is possible to build the API libraries (in cpp/) and the torchtrtc executable using CMake instead of Bazel.
 Currently, the python API and the tests cannot be built with CMake.
@@ -267,26 +241,6 @@ A few useful CMake options include:
       [-DCMAKE_BUILD_TYPE=Debug|Release]
    cmake --build <build directory>
 
-**Building the Python package**
---------------------------------
-
-Begin by installing ``ninja``
-
-You can build the Python package using ``setup.py`` (this will also build the correct version of ``libtorchtrt.so``)
-
-.. code-block:: shell
-
-   python3 setup.py [install/bdist_wheel]
-
-Debug Build
-^^^^^^^^^^^^
-
-.. code-block:: shell
-
-   python3 setup.py develop [--user]
-
-This also compiles a debug build of ``libtorchtrt.so``
-
 **Building Natively on aarch64 (Jetson)**
 -------------------------------------------
 