Commit a61a72a

updated readme, added separate installation instructions
1 parent 221449e commit a61a72a

3 files changed: +237 -297 lines changed

CONTRIBUTING.md

Lines changed: 59 additions & 1 deletion
@@ -54,4 +54,62 @@ pip install pre-commit
go install github.com/bazelbuild/buildtools/buildifier@latest
```

## Testing using Python backend

Torch-TensorRT supports testing in Python using [nox](https://nox.thea.codes/en/stable).

To install nox with pip:
```
python3 -m pip install --upgrade nox
```

To list the supported nox sessions:

```
nox -l
```

Environment variables supported by nox:

```
PYT_PATH      - To use a different PYTHONPATH than the system-installed Python packages
TOP_DIR       - To set the root directory of the noxfile
USE_CXX11     - To use cxx11_abi (Defaults to 0)
USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0)
```

Usage example:

```
nox --session l0_api_tests
```
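
The environment variables above can be combined with a session invocation. For example, a sketch assuming host dependencies are already installed and a typical site-packages location (both values are illustrative):

```
USE_HOST_DEPS=1 PYT_PATH=/usr/local/lib/python3.10/dist-packages nox --session l0_api_tests
```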

Supported Python versions:

```
["3.7", "3.8", "3.9", "3.10"]
```
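
To run a session against one specific interpreter from that list (assuming it is installed on your system), nox's `--python` flag can be used, for example:

```
nox --python 3.10 --session l0_api_tests
```
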
## How do I add support for a new op...
### In Torch-TensorRT?
Thanks for wanting to contribute! There are two main ways to add support for a new op: either write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if the op can be mapped to a set of ops that already have converters, write a graph rewrite pass that replaces your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred, because it means we do not need to maintain a large library of op converters. Also take a look at the op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
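
As a sketch of the graph-rewrite approach, PyTorch's `torch::jit::SubgraphRewriter` can replace a pattern with an equivalent subgraph of supported ops; for example, decomposing `aten::silu` into a sigmoid and a multiply. This standalone pass is illustrative, not copied from the repo's lowering passes:

```cpp
#include <torch/csrc/jit/passes/subgraph_rewrite.h>

// Illustrative lowering pass: decompose aten::silu into aten::sigmoid
// and aten::mul, both of which already have converters. The pattern
// and replacement are written as TorchScript IR.
void LowerSilu(std::shared_ptr<torch::jit::Graph>& graph) {
  std::string silu_pattern = R"IR(
    graph(%x):
      %out = aten::silu(%x)
      return (%out))IR";
  std::string decomposed = R"IR(
    graph(%x):
      %sig = aten::sigmoid(%x)
      %out = aten::mul(%x, %sig)
      return (%out))IR";

  torch::jit::SubgraphRewriter rewriter;
  rewriter.RegisterRewritePattern(silu_pattern, decomposed);
  rewriter.runOnGraph(graph);  // rewrites every match in the graph
}
```
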
### In my application?
> The Node Converter Registry is not exposed in the top level API but in the internal headers shipped with the tarball.

You can register a converter for your op using the `NodeConverterRegistry` inside your application.
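
As a rough sketch (not a verbatim API reference), registration looks something like the following. The header path, namespaces, and the op `my_ns::my_identity` are assumptions for illustration; check the internal headers shipped in the tarball for the exact signatures:

```cpp
#include "core/conversion/converters/converters.h"  // header path is illustrative

namespace {
// Hypothetical converter: map my_ns::my_identity onto a TensorRT
// identity layer. A real converter adds the TensorRT layers that
// implement the op's semantics.
auto my_op_registrations =
    torch_tensorrt::core::conversion::converters::RegisterNodeConversionPatterns().pattern(
        {"my_ns::my_identity(Tensor self) -> (Tensor)",
         [](torch_tensorrt::core::conversion::ConversionCtx* ctx,
            const torch::jit::Node* n,
            torch_tensorrt::core::conversion::converters::args& args) -> bool {
           auto in = args[0].ITensorOrFreeze(ctx);   // input as a TensorRT ITensor
           auto layer = ctx->net->addIdentity(*in);  // stand-in for real layers
           ctx->AssociateValueAndTensor(n->outputs()[0], layer->getOutput(0));
           return true;
         }});
}  // namespace
```
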
## Structure of the repo

| Component                | Description                                                        |
| ------------------------ | ------------------------------------------------------------------ |
| [**core**](core)         | Main JIT ingest, lowering, conversion and runtime implementations  |
| [**cpp**](cpp)           | C++ API and CLI source                                             |
| [**examples**](examples) | Example applications to show different features of Torch-TensorRT  |
| [**py**](py)             | Python API for Torch-TensorRT                                      |
| [**tests**](tests)       | Unit tests for Torch-TensorRT                                      |

Thanks in advance for your patience as we review your contributions; we do appreciate them!

INSTALLATION.md

Lines changed: 165 additions & 0 deletions
@@ -0,0 +1,165 @@
## Pre-built wheels

Stable versions of Torch-TensorRT are published on PyPI:
```bash
pip install torch-tensorrt
```

Nightly versions of Torch-TensorRT are published on the PyTorch package index:
```bash
pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121
```

Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch), which has all dependencies with the proper versions and example notebooks included.
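
A quick way to confirm the wheel installed correctly (this one-liner assumes the package exposes a `__version__` attribute, which recent releases do):

```bash
python -c "import torch_tensorrt; print(torch_tensorrt.__version__)"
```
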
## Building a docker container for Torch-TensorRT
We provide a `Dockerfile` in the `docker/` directory. It expects a PyTorch NGC container as a base, but can easily be modified to build on top of any container that provides PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.

Please follow these instructions to build a Docker container:

```bash
docker build --build-arg BASE=<CONTAINER VERSION e.g. 21.11> -f docker/Dockerfile -t torch_tensorrt:latest .
```

If you are building on top of a custom base container, you must first determine which PyTorch C++ ABI it uses. If your PyTorch comes from pytorch.org, it is most likely the pre-cxx11 ABI, in which case you must modify `//docker/dist-build.sh` so that it does not build the C++11 ABI version of Torch-TensorRT.

You can then build the container using the build command in the [docker README](docker/README.md#instructions).
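
Once built, a quick way to sanity-check the image is to start an interactive session with GPU access (this assumes the NVIDIA Container Toolkit is installed and uses the tag from the build command above):

```bash
docker run --gpus all -it --rm torch_tensorrt:latest
```
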
## Compiling Torch-TensorRT
### Installing Dependencies
#### 0. Install Bazel
If you don't have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk

Otherwise, you can use the following instructions to install the binaries: https://docs.bazel.build/versions/master/install.html

Finally, if you need to compile from source (e.g., on aarch64, until bazel distributes binaries for that architecture), you can use these instructions:

```sh
export BAZEL_VERSION=<VERSION>
mkdir bazel
cd bazel
curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
unzip bazel-$BAZEL_VERSION-dist.zip
bash ./compile.sh
```
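
When `compile.sh` finishes, the freshly built binary lands in `output/bazel`; a common follow-up (paths are illustrative) is:

```sh
sudo cp output/bazel /usr/local/bin/bazel
bazel --version  # verify the install
```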

Start by having CUDA installed on the system; LibTorch will automatically be pulled for you by bazel. Then you have two options.

#### 1. Building using cuDNN & TensorRT tarball distributions
> This is recommended so as to build Torch-TensorRT hermetically, and it ensures that any bugs are not caused by version issues

> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH`

1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
   - https://developer.nvidia.com/cudnn
   - https://developer.nvidia.com/tensorrt
2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose)
3. Compile using:

``` shell
bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
```
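
Per the note above, when running a build produced this way, make sure the tarball's libraries come first on the library path (the extract locations below are illustrative):

```bash
export LD_LIBRARY_PATH=/path/to/TensorRT/lib:/path/to/cudnn/lib:$LD_LIBRARY_PATH
```
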
#### 2. Building using locally installed cuDNN & TensorRT
> If you find bugs and you compiled using this method, please disclose that you used this method in the issue
> (an `ldd` dump would be nice too)

1. Install TensorRT, CUDA and cuDNN on the system before starting to compile.
2. In `WORKSPACE`, comment out:

```py
# Downloaded distributions to use with --distdir
http_archive(
    name = "cudnn",
    urls = ["<URL>",],
    build_file = "@//third_party/cudnn/archive:BUILD",
    sha256 = "<TAR SHA256>",
    strip_prefix = "cuda"
)

http_archive(
    name = "tensorrt",
    urls = ["<URL>",],
    build_file = "@//third_party/tensorrt/archive:BUILD",
    sha256 = "<TAR SHA256>",
    strip_prefix = "TensorRT-<VERSION>"
)
```

and uncomment:

```py
# Locally installed dependencies
new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD"
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD"
)
```

3. Compile using:

``` shell
bazel build //:libtorchtrt --compilation_mode opt
```
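
If you later file a bug from a build produced with locally installed dependencies, the `ldd` dump mentioned in the note above can be captured with something like the following (the exact path under `bazel-bin` may vary):

```bash
ldd bazel-bin/libtorchtrt.so | grep -E "tensorrt|cudnn|cuda"
```
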
### FX path (Python only) installation

If you plan to try the FX path (Python only) and would like to avoid the bazel build, please follow the steps below.

``` shell
cd py && python3 setup.py install --fx-only
```
### Debug build

``` shell
bazel build //:libtorchtrt --compilation_mode=dbg
```

### Native compilation on NVIDIA Jetson AGX

We performed end-to-end testing on the Jetson platform using JetPack SDK 4.6.

``` shell
bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
```

> Note: Please refer to the [installation](docs/tutorials/installation.html) instructions for prerequisites.

A tarball with the include files and library can then be found in `bazel-bin`.

### Running Torch-TensorRT on a JIT Graph

> Make sure to add LibTorch to your `LD_LIBRARY_PATH`: <br>
> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib`

``` shell
bazel run //cpp/bin/torchtrtc -- $(realpath <PATH TO GRAPH>) out.ts <input-size>
```
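
For example, compiling a (hypothetical) traced module with a fixed input shape looks like:

``` shell
bazel run //cpp/bin/torchtrtc -- $(realpath resnet50_traced.jit.pt) out.ts "(1,3,224,224)"
```
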
## Compiling the Python Package

To compile the Python package for your local machine, just run `python3 setup.py install` in the `//py` directory.
To build wheel files for different Python versions, first build the Dockerfile in `//py`, then run the following command:

```
docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh
```

Python compilation expects the tarball-based compilation strategy from above.
