
Commit 9460715

cherrypick of #2804
Parent: a983064

29 files changed: +27, -408 lines

.github/workflows/docker_builder.yml

Lines changed: 2 additions & 4 deletions
@@ -44,18 +44,16 @@ jobs:
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
 
-      # Automatically detect TensorRT and cuDNN default versions for Torch-TRT build
+      # Automatically detect TensorRT default versions for Torch-TRT build
       - name: Build Docker image
         env:
           DOCKER_TAG: ${{ env.DOCKER_REGISTRY }}/${{ steps.fix_slashes.outputs.container_name }}
         run: |
           python3 -m pip install pyyaml
           TRT_VERSION=$(python3 -c "import versions; versions.tensorrt_version()")
           echo "TRT VERSION = ${TRT_VERSION}"
-          CUDNN_VERSION=$(python3 -c "import versions; versions.cudnn_version()")
-          echo "CUDNN VERSION = ${CUDNN_VERSION}"
 
-          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION --build-arg CUDNN_VERSION=$CUDNN_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
+          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
 
       - name: Push Docker image
         env:
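The build step above discovers the pinned TensorRT version by importing a `versions` helper after installing `pyyaml`. The repository's actual helper may differ; a minimal sketch of the idea, assuming the pins live in the `dev_dep_versions.yml` file touched later in this commit, could look like this:

```py
# versions.py -- minimal sketch, not the repository's actual implementation.
# Assumes dev_dep_versions.yml sits next to this file and uses the
# __tensorrt_version__ key shown in the dev_dep_versions.yml diff below.
import pathlib

import yaml  # provided by the `python3 -m pip install pyyaml` step above


def _load_versions():
    # Read the pinned dependency versions from the YAML file.
    path = pathlib.Path(__file__).parent / "dev_dep_versions.yml"
    with open(path) as f:
        return yaml.safe_load(f)


def tensorrt_version():
    # Print (rather than return) so the workflow's
    # TRT_VERSION=$(python3 -c "import versions; versions.tensorrt_version()")
    # can capture the value via command substitution.
    print(_load_versions()["__tensorrt_version__"])
```

With the cuDNN pin gone, a matching `cudnn_version()` helper is no longer needed, which is why the workflow drops the `CUDNN_VERSION` lines.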

README.md

Lines changed: 5 additions & 21 deletions
@@ -19,7 +19,7 @@ Torch-TensorRT is distributed in the ready-to-run NVIDIA [NGC PyTorch Container]
 
 ## Building a docker container for Torch-TensorRT
 
-We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.
+We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.
 
 Please follow this instruction to build a Docker container.
 
@@ -152,14 +152,13 @@ bash ./compile.sh
 You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel,
 then you have two options.
 
-#### 1. Building using cuDNN & TensorRT tarball distributions
+#### 1. Building using TensorRT tarball distributions
 
 > This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues
 
 > Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH`
 
-1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
-    - https://developer.nvidia.com/cudnn
+1. You need to download the tarball distributions of TensorRT from the NVIDIA website.
     - https://developer.nvidia.com/tensorrt
 2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose)
 3. Compile using:
@@ -168,25 +167,16 @@ then you have two options.
 bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
 ```
 
-#### 2. Building using locally installed cuDNN & TensorRT
+#### 2. Building using locally installed TensorRT
 
 > If you find bugs and you compiled using this method please disclose you used this method in the issue
 > (an `ldd` dump would be nice too)
 
-1. Install TensorRT, CUDA and cuDNN on the system before starting to compile.
+1. Install TensorRT and CUDA on the system before starting to compile.
 2. In `WORKSPACE` comment out
 
 ```py
 # Downloaded distributions to use with --distdir
-http_archive(
-    name = "cudnn",
-    urls = ["<URL>",],
-
-    build_file = "@//third_party/cudnn/archive:BUILD",
-    sha256 = "<TAR SHA256>",
-    strip_prefix = "cuda"
-)
-
 http_archive(
     name = "tensorrt",
     urls = ["<URL>",],
@@ -201,12 +191,6 @@ and uncomment
 
 ```py
 # Locally installed dependencies
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",
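For the tarball route in option 1, the `tensorrt` archive is now the only `http_archive` left to fill in. As a point of reference, a completed entry would follow the same placeholder pattern as the removed `cudnn` block; the `build_file` label and `strip_prefix` below are assumptions and may differ from the repository's actual `WORKSPACE`:

```py
http_archive(
    name = "tensorrt",
    urls = ["<URL>",],
    build_file = "@//third_party/tensorrt/archive:BUILD",  # assumed label, mirroring the removed cudnn entry
    sha256 = "<TAR SHA256>",
    strip_prefix = "TensorRT-<VERSION>",  # placeholder; use the top-level directory name inside the tarball
)
```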

cmake/Modules/FindcuDNN.cmake

Lines changed: 0 additions & 243 deletions
This file was deleted.

cmake/dependencies.cmake

Lines changed: 0 additions & 2 deletions
@@ -7,11 +7,9 @@ endif()
 
 # If the custom finders are needed at this point, there are good chances that they will be needed when consuming the library as well
 install(FILES "${CMAKE_SOURCE_DIR}/cmake/Modules/FindTensorRT.cmake" DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/torchtrt/Modules")
-install(FILES "${CMAKE_SOURCE_DIR}/cmake/Modules/FindcuDNN.cmake" DESTINATION "${CMAKE_INSTALL_LIBDIR}/cmake/torchtrt/Modules")
 
 # CUDA
 find_package(CUDAToolkit REQUIRED)
-find_package(cuDNN REQUIRED) # Headers are needed somewhere
 
 # libtorch
 find_package(Torch REQUIRED)
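Since `FindcuDNN.cmake` is no longer installed, downstream CMake builds only need the TensorRT finder. As a hedged example (the package name and install layout here are assumptions based on the `DESTINATION` path above), a consumer would typically append the installed module directory, e.g. `list(APPEND CMAKE_MODULE_PATH "<prefix>/lib/cmake/torchtrt/Modules")`, before calling `find_package(torchtrt)`, so that `FindTensorRT.cmake` can still be located without any cuDNN counterpart.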

core/plugins/CMakeLists.txt

Lines changed: 0 additions & 1 deletion
@@ -23,7 +23,6 @@ target_link_libraries(${lib_name}
     TensorRT::nvinfer_plugin
     torch
     core_util
-    cuDNN::cuDNN
   PRIVATE
     Threads::Threads
 )

dev_dep_versions.yml

Lines changed: 0 additions & 1 deletion
@@ -1,6 +1,5 @@
 __version__: "2.3.0"
 __cuda_version__: "12.1"
-__cudnn_version__: "8.9"
 __tensorrt_version__: "10.0.1"
 __torch_version__: "2.3.0"
 # torchvision version here is not a direct dependency but the one used during testing

docker/README.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 * Use `Dockerfile` to build a container which provides the exact development environment that our master branch is usually tested against.
 
 * The `Dockerfile` currently uses <a href="https://github.com/bazelbuild/bazelisk">Bazelisk</a> to select the Bazel version, and uses the exact library versions of Torch and CUDA listed in <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a>.
-* The desired versions of TensorRT must be specified as build-args, with major and minor versions as in: `--build-arg TENSORRT_VERSION=a.b`
+* The desired version of TensorRT must be specified as build-args, with major and minor versions as in: `--build-arg TENSORRT_VERSION=a.b`
 * [**Optional**] The desired base image be changed by explicitly setting a base image, as in `--build-arg BASE_IMG=nvidia/cuda:11.8.0-devel-ubuntu22.04`, though this is optional
 * [**Optional**] Additionally, the desired Python version can be changed by explicitly setting a version, as in `--build-arg PYTHON_VERSION=3.10`, though this is optional as well.
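Putting these flags together, an invocation of the updated build might look like `DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=10.0 --build-arg PYTHON_VERSION=3.10 -f docker/Dockerfile -t torch_tensorrt:dev .` (the image tag is purely illustrative, and `10.0` assumes the TensorRT pin from `dev_dep_versions.yml`), which mirrors what the updated `docker_builder.yml` workflow now runs without a `CUDNN_VERSION` build-arg.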

docker/WORKSPACE.docker

Lines changed: 0 additions & 6 deletions
@@ -67,12 +67,6 @@ new_local_repository(
 # Locally installed dependencies (use in cases of custom dependencies or aarch64)
 ####################################################################################
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

docker/WORKSPACE.ngc

Lines changed: 0 additions & 6 deletions
@@ -69,12 +69,6 @@ new_local_repository(
     build_file = "third_party/libtorch/BUILD"
 )
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",
