
Commit 25945ff

zewenli98 authored and laikhtewari committed
chore: Remove CUDNN dependencies (#2804)
1 parent 3c14855 · commit 25945ff

32 files changed: +29 −620 lines

.github/workflows/docker_builder.yml

Lines changed: 2 additions & 4 deletions
```diff
@@ -44,18 +44,16 @@ jobs:
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
 
-      # Automatically detect TensorRT and cuDNN default versions for Torch-TRT build
+      # Automatically detect TensorRT default versions for Torch-TRT build
       - name: Build Docker image
         env:
           DOCKER_TAG: ${{ env.DOCKER_REGISTRY }}/${{ steps.fix_slashes.outputs.container_name }}
         run: |
           python3 -m pip install pyyaml
           TRT_VERSION=$(python3 -c "import versions; versions.tensorrt_version()")
           echo "TRT VERSION = ${TRT_VERSION}"
-          CUDNN_VERSION=$(python3 -c "import versions; versions.cudnn_version()")
-          echo "CUDNN VERSION = ${CUDNN_VERSION}"
 
-          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION --build-arg CUDNN_VERSION=$CUDNN_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
+          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
 
       - name: Push Docker image
         env:
```
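The build step above shells out to a `versions` helper module to discover the pinned TensorRT version; the cuDNN lookup is exactly what this commit removes. As a rough illustration of the pattern, here is a minimal sketch of such a helper — the file name and key below are assumptions, not necessarily the repository's actual `versions.py`:

```python
# versions.py -- hypothetical sketch of the helper the workflow imports.
# Assumption: pinned versions live in a YAML file, which would explain why
# the workflow installs pyyaml first. The real helper in the repo may differ.
import yaml

def tensorrt_version() -> None:
    # The workflow captures stdout with $(...), so the helper prints the
    # version string instead of returning it.
    with open("dev_dep_versions.yml") as f:  # assumed file name
        print(yaml.safe_load(f)["__tensorrt_version__"])  # assumed key

if __name__ == "__main__":
    tensorrt_version()
```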

README.md

Lines changed: 0 additions & 219 deletions
````diff
@@ -85,27 +85,9 @@ model(*inputs)
 #include "torch/script.h"
 #include "torch_tensorrt/torch_tensorrt.h"
 
-<<<<<<< HEAD
 auto trt_mod = torch::jit::load("trt.ts");
 auto input_tensor = [...]; // fill this with your inputs
 auto results = trt_mod.forward({input_tensor});
-=======
-...
-// Set input datatypes. Allowed options torch::{kFloat, kHalf, kChar, kInt32, kBool}
-// Size of input_dtypes should match number of inputs to the network.
-// If input_dtypes is not set, default precision follows traditional PyT / TRT rules
-auto input = torch_tensorrt::Input(dims, torch::kHalf);
-auto compile_settings = torch_tensorrt::ts::CompileSpec({input});
-// FP16 execution
-compile_settings.enabled_precisions = {torch::kHalf};
-// Compile module
-auto trt_mod = torch_tensorrt::ts::compile(ts_mod, compile_settings);
-// Run like normal
-auto results = trt_mod.forward({in_tensor});
-// Save module for later
-trt_mod.save("trt_torchscript_module.ts");
-...
->>>>>>> 1a89aea5b (Fix minor grammatical corrections (#2779))
 ```
 
 ## Further resources
@@ -142,208 +124,7 @@ These are the following dependencies used to verify the testcases. Torch-TensorR
 
 Deprecation is used to inform developers that some APIs and tools are no longer recommended for use. Beginning with version 2.3, Torch-TensorRT has the following deprecation policy:
 
-<<<<<<< HEAD
 Deprecation notices are communicated in the Release Notes. Deprecated API functions will have a statement in the source documenting when they were deprecated. Deprecated methods and classes will issue deprecation warnings at runtime, if they are used. Torch-TensorRT provides a 6-month migration period after the deprecation. APIs and tools continue to work during the migration period. After the migration period ends, APIs and tools are removed in a manner consistent with semantic versioning.
-=======
-```
-pip install tensorrt torch-tensorrt
-```
-
-## Compiling Torch-TensorRT
-
-### Installing Dependencies
-
-#### 0. Install Bazel
-
-If you don't have bazel installed, the easiest way is to install bazelisk using the method of you choosing https://github.com/bazelbuild/bazelisk
-
-Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html
-
-Finally if you need to compile from source (e.g. aarch64 until bazel distributes binaries for the architecture) you can use these instructions
-
-```sh
-export BAZEL_VERSION=<VERSION>
-mkdir bazel
-cd bazel
-curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip
-unzip bazel-$BAZEL_VERSION-dist.zip
-bash ./compile.sh
-```
-
-You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel,
-then you have two options.
-
-#### 1. Building using cuDNN & TensorRT tarball distributions
-
-> This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues
-
-> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH`
-
-1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
-   - https://developer.nvidia.com/cudnn
-   - https://developer.nvidia.com/tensorrt
-2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose)
-3. Compile using:
-
-``` shell
-bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
-```
-
-#### 2. Building using locally installed cuDNN & TensorRT
-
-> If you find bugs and you compiled using this method please disclose you used this method in the issue
-> (an `ldd` dump would be nice too)
-
-1. Install TensorRT, CUDA and cuDNN on the system before starting to compile.
-2. In `WORKSPACE` comment out
-
-```py
-# Downloaded distributions to use with --distdir
-http_archive(
-    name = "cudnn",
-    urls = ["<URL>",],
-
-    build_file = "@//third_party/cudnn/archive:BUILD",
-    sha256 = "<TAR SHA256>",
-    strip_prefix = "cuda"
-)
-
-http_archive(
-    name = "tensorrt",
-    urls = ["<URL>",],
-
-    build_file = "@//third_party/tensorrt/archive:BUILD",
-    sha256 = "<TAR SHA256>",
-    strip_prefix = "TensorRT-<VERSION>"
-)
-```
-
-and uncomment
-
-```py
-# Locally installed dependencies
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
-new_local_repository(
-    name = "tensorrt",
-    path = "/usr/",
-    build_file = "@//third_party/tensorrt/local:BUILD"
-)
-```
-
-3. Compile using:
-
-``` shell
-bazel build //:libtorchtrt --compilation_mode opt
-```
-
-### FX path (Python only) installation
-If the user plans to try FX path (Python only) and would like to avoid bazel build. Please follow the steps below.
-``` shell
-cd py && python3 setup.py install --fx-only
-```
-
-### Debug build
-
-``` shell
-bazel build //:libtorchtrt --compilation_mode=dbg
-```
-
-### Native compilation on NVIDIA Jetson AGX
-We performed end to end testing on Jetson platform using Jetpack SDK 4.6.
-
-``` shell
-bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6
-```
-
-> Note: Please refer [installation](docs/tutorials/installation.html) instructions for Pre-requisites
-
-A tarball with the include files and library can then be found in bazel-bin
-
-### Running Torch-TensorRT on a JIT Graph
-
-> Make sure to add LibTorch to your LD_LIBRARY_PATH <br>
-> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib`
-
-``` shell
-bazel run //cpp/bin/torchtrtc -- $(realpath <PATH TO GRAPH>) out.ts <input-size>
-```
-
-## Compiling the Python Package
-
-To compile the python package for your local machine, just run `python3 setup.py install` in the `//py` directory.
-To build wheel files for different python versions, first build the Dockerfile in ``//py`` then run the following
-command
-
-```
-docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh
-```
-
-Python compilation expects using the tarball based compilation strategy from above.
-
-
-## Testing using Python backend
-
-Torch-TensorRT supports testing in Python using [nox](https://nox.thea.codes/en/stable)
-
-To install the nox using python-pip
-
-```
-python3 -m pip install --upgrade nox
-```
-
-To list supported nox sessions:
-
-```
-nox --session -l
-```
-
-Environment variables supported by nox
-
-```
-PYT_PATH      - To use different PYTHONPATH than system installed Python packages
-TOP_DIR       - To set the root directory of the noxfile
-USE_CXX11     - To use cxx11_abi (Defaults to 0)
-USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0)
-```
-
-Usage example
-
-```
-nox --session l0_api_tests
-```
-
-Supported Python versions:
-```
-["3.7", "3.8", "3.9", "3.10"]
-```
-
-## How do I add support for a new op...
-
-### In Torch-TensorRT?
-
-Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators.
-
-### In my application?
-
-> The Node Converter Registry is not exposed in the top level API but in the internal headers shipped with the tarball.
-
-You can register a converter for your op using the `NodeConverterRegistry` inside your application.
-
-## Structure of the repo
-
-| Component                | Description                                                       |
-| ------------------------ | ----------------------------------------------------------------- |
-| [**core**](core)         | Main JIT ingest, lowering, conversion and runtime implementations |
-| [**cpp**](cpp)           | C++ API and CLI source                                            |
-| [**examples**](examples) | Example applications to show different features of Torch-TensorRT |
-| [**py**](py)             | Python API for Torch-TensorRT                                     |
-| [**tests**](tests)       | Unit tests for Torch-TensorRT                                     |
->>>>>>> 1a89aea5b (Fix minor grammatical corrections (#2779))
 
 ## Contributing
 
````
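The deprecation policy retained in the README above states that deprecated methods and classes issue warnings at runtime. As a generic sketch of how such a runtime warning is typically emitted in Python (an illustration of the policy, not Torch-TensorRT's actual implementation):

```python
import functools
import warnings

def deprecated(since: str, remove_in: str):
    """Illustrative decorator: warn at call time, per a policy like the one
    described above. Hypothetical helper, not project code."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} has been deprecated since {since} and will "
                f"be removed in {remove_in}",
                DeprecationWarning,
                stacklevel=2,  # point the warning at the caller, not here
            )
            return fn(*args, **kwargs)
        return inner
    return wrap

@deprecated(since="2.3", remove_in="2.5")
def old_api():
    pass
```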
