Torch-TensorRT nightlies are distributed targeting the PyTorch nightly. These can be installed from the PyTorch nightly package index (separated by CUDA version).
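For example, installing a nightly build with ``pip`` might look like the following sketch (the ``cu121`` suffix and the exact package spelling are illustrative; pick the index matching your CUDA version):

.. code-block:: shell

    # Install the Torch-TensorRT nightly from the PyTorch nightly index
    # (cu121 = CUDA 12.1; substitute the suffix for your CUDA version)
    pip install --pre torch-tensorrt --extra-index-url https://download.pytorch.org/whl/nightly/cu121
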
Precompiled tarballs for releases are provided here: https://github.com/pytorch/TensorRT/releases
Compiling From Source
-----------------------

Dependencies for Compilation
-------------------------------

* Torch-TensorRT is built with **Bazel**, so begin by installing it.

    * The easiest way is to install bazelisk using the method of your choosing https://github.com/bazelbuild/bazelisk
    * Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html

    * If you are building Bazel from source, copy the resulting binary onto your ``PATH``:

      .. code-block:: shell

          cp output/bazel /usr/local/bin/
* You will also need to have **CUDA** installed on the system (or if running in a container, the system must have
  the CUDA driver installed and the container must have CUDA)

    * Specify your CUDA version here if not the version used in the branch being built: https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L46
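    For illustration, the CUDA entry in the WORKSPACE is a ``new_local_repository`` rule along these lines (the path and ``build_file`` label here are placeholders; check the linked WORKSPACE for the real values):

    .. code-block:: python

        # Point bazel at the locally installed CUDA toolkit (path is illustrative)
        new_local_repository(
            name = "cuda",
            build_file = "@//third_party/cuda:BUILD",
            path = "/usr/local/cuda-12.0/",
        )
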
* The correct **LibTorch** version will be pulled down for you by bazel.

    NOTE: By default bazel will pull the latest nightly from pytorch.org. For building main this is usually sufficient; however, if there is a specific PyTorch build you are targeting, edit these locations with updated URLs/paths:

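    As a sketch, a pinned LibTorch entry has roughly this shape (the URL here is a placeholder for whichever LibTorch build you are targeting, not a pinned value):

    .. code-block:: python

        # Fetch a specific LibTorch build instead of the default nightly (URL illustrative)
        http_archive(
            name = "libtorch",
            build_file = "@//third_party/libtorch:BUILD",
            strip_prefix = "libtorch",
            urls = ["https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip"],
        )
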
NOTE: For all of the above cases you must correctly declare the source of PyTorch you intend to use in your WORKSPACE file for both Python and C++ builds. See below for more information.

* **cuDNN and TensorRT** are not required to be installed on the system to build Torch-TensorRT; in fact, this is preferable to ensure reproducible builds. Download the tarballs
  for cuDNN and TensorRT from https://developer.nvidia.com and update the paths in the WORKSPACE file here https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L71
  .. code-block:: python

              "file:///<ABSOLUTE PATH TO FILE>/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz"
          ],
      )
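  For context, the snippet above is the tail of an ``http_archive`` rule whose overall shape is roughly as follows (``name``, ``build_file``, and ``strip_prefix`` are illustrative; see the linked WORKSPACE for the real rule, and keep the ``<ABSOLUTE PATH TO FILE>`` placeholder pointed at your downloaded tarball):

  .. code-block:: python

      # Use a downloaded TensorRT tarball rather than a system install (labels illustrative)
      http_archive(
          name = "tensorrt",
          build_file = "@//third_party/tensorrt/archive:BUILD",
          strip_prefix = "TensorRT-8.6.1.6",
          urls = [
              "file:///<ABSOLUTE PATH TO FILE>/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz"
          ],
      )
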
If you have a local version of cuDNN and TensorRT installed, this can be used as well by commenting out the above lines and uncommenting the following lines https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L114C1-L124C3
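Sketched out, the local-install variant uses ``new_local_repository`` rules along these lines (paths and ``build_file`` labels are illustrative; the linked WORKSPACE has the exact ones):

.. code-block:: python

    # Use cuDNN and TensorRT from the system install instead of tarballs (paths illustrative)
    new_local_repository(
        name = "cudnn",
        path = "/usr/",
        build_file = "@//third_party/cudnn/local:BUILD",
    )

    new_local_repository(
        name = "tensorrt",
        path = "/usr/",
        build_file = "@//third_party/tensorrt/local:BUILD",
    )
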
**Building with CMake** (TorchScript Only)
-------------------------------------------

It is possible to build the API libraries (in cpp/) and the torchtrtc executable using CMake instead of Bazel.
Currently, the Python API and the tests cannot be built with CMake.
A few useful CMake options include:

.. code-block:: shell

        [-DCMAKE_BUILD_TYPE=Debug|Release]
    cmake --build <build directory>
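Putting the pieces together, an out-of-source configure and build might look like this sketch (the ``build`` directory name and the option shown are illustrative):

.. code-block:: shell

    # Configure into a separate build directory, then compile
    cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
    cmake --build build
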