Commit ebb9042

remove cudnn dependencies

1 parent 4dc9acf commit ebb9042

21 files changed: +22 -141 lines changed

.github/workflows/docker_builder.yml

Lines changed: 2 additions & 4 deletions

@@ -44,18 +44,16 @@ jobs:
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
 
-      # Automatically detect TensorRT and cuDNN default versions for Torch-TRT build
+      # Automatically detect TensorRT default versions for Torch-TRT build
       - name: Build Docker image
         env:
           DOCKER_TAG: ${{ env.DOCKER_REGISTRY }}/${{ steps.fix_slashes.outputs.container_name }}
         run: |
           python3 -m pip install pyyaml
           TRT_VERSION=$(python3 -c "import versions; versions.tensorrt_version()")
           echo "TRT VERSION = ${TRT_VERSION}"
-          CUDNN_VERSION=$(python3 -c "import versions; versions.cudnn_version()")
-          echo "CUDNN VERSION = ${CUDNN_VERSION}"
 
-          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION --build-arg CUDNN_VERSION=$CUDNN_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
+          DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=$TRT_VERSION -f docker/Dockerfile --tag $DOCKER_TAG .
 
 
       - name: Push Docker image
         env:
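The workflow above captures the TensorRT version by importing a repo-local `versions` module and letting it print the value for the shell to capture. A minimal sketch of that helper's assumed behavior (the real module reads `dev_dep_versions.yml` via pyyaml; the flat `key: "value"` parsing below is an illustrative stand-in so the sketch is dependency-free):

```python
# Hedged sketch of the `versions` helper the workflow shells out to.
# Assumption: dev_dep_versions.yml holds flat `key: "value"` pairs, as shown
# in the dev_dep_versions.yml diff later in this commit.
def _load_versions(path="dev_dep_versions.yml"):
    versions = {}
    with open(path) as stream:
        for line in stream:
            key, sep, value = line.partition(":")
            if sep:
                versions[key.strip()] = value.strip().strip('"')
    return versions

def tensorrt_version(path="dev_dep_versions.yml"):
    # Printed rather than returned, so the shell can capture it with
    # TRT_VERSION=$(python3 -c "import versions; versions.tensorrt_version()")
    print(_load_versions(path)["__tensorrt_version__"])
```

After this commit the workflow only queries `tensorrt_version()`; the parallel `cudnn_version()` lookup is gone.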

README.md

Lines changed: 5 additions & 21 deletions

@@ -19,7 +19,7 @@ Torch-TensorRT is distributed in the ready-to-run NVIDIA [NGC PyTorch Container]
 
 ## Building a docker container for Torch-TensorRT
 
-We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.
+We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, and TensorRT. The dependency libraries in the container can be found in the <a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html">release notes</a>.
 
 Please follow this instruction to build a Docker container.
 
@@ -152,14 +152,13 @@ bash ./compile.sh
 You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel,
 then you have two options.
 
-#### 1. Building using cuDNN & TensorRT tarball distributions
+#### 1. Building using TensorRT tarball distributions
 
 > This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues
 
 > Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH`
 
-1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website.
-    - https://developer.nvidia.com/cudnn
+1. You need to download the tarball distributions of TensorRT from the NVIDIA website.
     - https://developer.nvidia.com/tensorrt
 2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose)
 3. Compile using:
@@ -168,25 +167,16 @@ then you have two options.
 bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]
 ```
 
-#### 2. Building using locally installed cuDNN & TensorRT
+#### 2. Building using locally installed TensorRT
 
 > If you find bugs and you compiled using this method please disclose you used this method in the issue
 > (an `ldd` dump would be nice too)
 
-1. Install TensorRT, CUDA and cuDNN on the system before starting to compile.
+1. Install TensorRT and CUDA on the system before starting to compile.
 2. In `WORKSPACE` comment out
 
 ```py
 # Downloaded distributions to use with --distdir
-http_archive(
-    name = "cudnn",
-    urls = ["<URL>",],
-
-    build_file = "@//third_party/cudnn/archive:BUILD",
-    sha256 = "<TAR SHA256>",
-    strip_prefix = "cuda"
-)
-
 http_archive(
     name = "tensorrt",
     urls = ["<URL>",],
@@ -201,12 +191,6 @@ and uncomment
 
 ```py
 # Locally installed dependencies
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

dev_dep_versions.yml

Lines changed: 0 additions & 1 deletion

@@ -1,4 +1,3 @@
 __version__: "2.4.0.dev0"
 __cuda_version__: "12.1"
-__cudnn_version__: "8.9"
 __tensorrt_version__: "10.0.1"

docker/Dockerfile

Lines changed: 1 addition & 5 deletions

@@ -8,9 +8,6 @@ ENV BASE_IMG=nvidia/cuda:12.1.1-devel-ubuntu22.04
 ARG TENSORRT_VERSION
 ENV TENSORRT_VERSION=${TENSORRT_VERSION}
 RUN test -n "$TENSORRT_VERSION" || (echo "No tensorrt version specified, please use --build-arg TENSORRT_VERSION=x.y to specify a version." && exit 1)
-ARG CUDNN_VERSION
-ENV CUDNN_VERSION=${CUDNN_VERSION}
-RUN test -n "$CUDNN_VERSION" || (echo "No cudnn version specified, please use --build-arg CUDNN_VERSION=x.y to specify a version." && exit 1)
 
 ARG PYTHON_VERSION=3.10
 ENV PYTHON_VERSION=${PYTHON_VERSION}
@@ -35,13 +32,12 @@ RUN wget -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-instal
 RUN pyenv install -v ${PYTHON_VERSION}
 RUN pyenv global ${PYTHON_VERSION}
 
-# Install CUDNN + TensorRT + dependencies
+# Install TensorRT + dependencies
 RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
 RUN mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
 RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/7fa2af80.pub
 RUN add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"
 RUN apt-get update
-RUN apt-get install -y libcudnn8=${CUDNN_VERSION}* libcudnn8-dev=${CUDNN_VERSION}*
 
 RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
 RUN add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /"

docker/README.md

Lines changed: 3 additions & 3 deletions

@@ -3,7 +3,7 @@
 * Use `Dockerfile` to build a container which provides the exact development environment that our master branch is usually tested against.
 
 * The `Dockerfile` currently uses <a href="https://github.com/bazelbuild/bazelisk">Bazelisk</a> to select the Bazel version, and uses the exact library versions of Torch and CUDA listed in <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a>.
-* The desired versions of CUDNN and TensorRT must be specified as build-args, with major and minor versions as in: `--build-arg TENSORRT_VERSION=a.b --build-arg CUDNN_VERSION=x.y`
+* The desired version of TensorRT must be specified as build-args, with major and minor versions as in: `--build-arg TENSORRT_VERSION=a.b`
 * [**Optional**] The desired base image be changed by explicitly setting a base image, as in `--build-arg BASE_IMG=nvidia/cuda:11.8.0-devel-ubuntu22.04`, though this is optional
 * [**Optional**] Additionally, the desired Python version can be changed by explicitly setting a version, as in `--build-arg PYTHON_VERSION=3.10`, though this is optional as well.
 
@@ -17,14 +17,14 @@ Note: By default the container uses the `pre-cxx11-abi` version of Torch + Torch
 
 ### Instructions
 
-- The example below uses CUDNN 8.9 and TensorRT 8.6
+- The example below uses TensorRT 8.6
 - See <a href="https://github.com/pytorch/TensorRT#dependencies">dependencies</a> for a list of current default dependencies.
 
 > From root of Torch-TensorRT repo
 
 Build:
 ```
-DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=8.6 --build-arg CUDNN_VERSION=8.9 -f docker/Dockerfile -t torch_tensorrt:latest .
+DOCKER_BUILDKIT=1 docker build --build-arg TENSORRT_VERSION=8.6 -f docker/Dockerfile -t torch_tensorrt:latest .
 ```
 
 Run:

docker/WORKSPACE.docker

Lines changed: 0 additions & 6 deletions

@@ -67,12 +67,6 @@ new_local_repository(
 # Locally installed dependencies (use in cases of custom dependencies or aarch64)
 ####################################################################################
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

docker/WORKSPACE.ngc

Lines changed: 0 additions & 6 deletions

@@ -69,12 +69,6 @@ new_local_repository(
     build_file = "third_party/libtorch/BUILD"
 )
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

docsrc/getting_started/getting_started_with_windows.rst

Lines changed: 5 additions & 13 deletions

@@ -11,7 +11,6 @@ Prerequisite:
 * LibTorch
 * TensorRT
 * CUDA
-* cuDNN
 
 
 Build configuration
@@ -90,27 +89,24 @@ Building With Visual Studio Code
 > Also allows using Build Tools to develop and test Open Source Dependencies, to the minor extend of ensuring compatibility with Build Tools
 
 3. Install CUDA (e.g. 11.7.1)
-4. Install cuDNN (e.g. 8.5.0.96)
 
-   - Set ``cuDNN_ROOT_DIR``
-
-5. Install `TensorRT` (e.g 8.5.1.7)
+4. Install `TensorRT` (e.g 8.5.1.7)
 
    - Set ``TensorRT_ROOT``
   - Add ``TensorRT_ROOT\lib`` to ``PATH``
 
-6. Install "libtorch-win-shared-with-deps-latest.zip"
+5. Install "libtorch-win-shared-with-deps-latest.zip"
 
   - Select build targeting the appropriate CUDA version
   - Set ``Torch_DIR``
   - Add ``Torch_DIR\lib`` to ``PATH``
 
-7. Clone TensorRT repo
-8. Install C++ and CMake Tools extensions from MS
+6. Clone TensorRT repo
+7. Install C++ and CMake Tools extensions from MS
 
   - Change build to ``RelWithDebInfo``
 
-9. Update ``.vscode\settings.json``
+8. Update ``.vscode\settings.json``
 
   - Clean, configure, build
 
@@ -137,10 +133,6 @@ e.g. /.vscode/settings.json
         "type": "FILEPATH",
         "value": "X:\\path\\to\\tensorrt"
     },
-    "cuDNN_ROOT_DIR": {
-        "type": "FILEPATH",
-        "value": "X:\\path\\to\\cudnn"
-    },
     "CMAKE_CUDA_FLAGS": "-allow-unsupported-compiler"
 },
 "cmake.buildDirectory": "${workspaceFolder}/torch_tensorrt_build"

docsrc/getting_started/installation.rst

Lines changed: 5 additions & 17 deletions

@@ -87,33 +87,21 @@ Dependencies for Compilation
 * Specify your CUDA version here if not the version used in the branch being built: https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L46
 
 
-* The correct **LibTorch**, **cuDNN** and **TensorRT** versions will be pulled down for you by bazel.
+* The correct **LibTorch** and **TensorRT** versions will be pulled down for you by bazel.
 
   NOTE: By default bazel will pull the latest nightly from pytorch.org. For building main, this is usually sufficient however if there is a specific PyTorch you are targeting,
   edit these locations with updated URLs/paths:
 
   * https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L53C1-L53C1
 
 
-* **cuDNN and TensorRT** are not required to be installed on the system to build Torch-TensorRT, in fact this is preferable to ensure reproducable builds. If versions other than the default are needed
-  point the WORKSPACE file to the URL of the tarball or download the tarballs for cuDNN and TensorRT from https://developer.nvidia.com and update the paths in the WORKSPACE file here https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L71
+* **TensorRT** is not required to be installed on the system to build Torch-TensorRT, in fact this is preferable to ensure reproducable builds. If versions other than the default are needed
+  point the WORKSPACE file to the URL of the tarball or download the tarball for TensorRT from https://developer.nvidia.com and update the paths in the WORKSPACE file here https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L71
 
   For example:
 
   .. code-block:: python
 
-    http_archive(
-        name = "cudnn",
-        build_file = "@//third_party/cudnn/archive:BUILD",
-        sha256 = "<CUDNN SHA256>", # Optional but recommended
-        strip_prefix = "cudnn-linux-x86_64-<CUDNN VERSION>_<CUDA VERSION>-archive",
-        urls = [
-            "https://developer.nvidia.com/downloads/compute/cudnn/<CUDNN DOWNLOAD PATH>",
-            # OR
-            "file:///<ABSOLUTE PATH TO FILE>/cudnn-linux-x86_64-<CUDNN VERSION>_<CUDA VERSION>-archive.tar.xz"
-        ],
-    )
-
     http_archive(
         name = "tensorrt",
         build_file = "@//third_party/tensorrt/archive:BUILD",
@@ -128,7 +116,7 @@ Dependencies for Compilation
 
 Remember at runtime, these libraries must be added to your ``LD_LIBRARY_PATH`` explicity
 
-If you have a local version of cuDNN and TensorRT installed, this can be used as well by commenting out the above lines and uncommenting the following lines https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L114C1-L124C3
+If you have a local version of TensorRT installed, this can be used as well by commenting out the above lines and uncommenting the following lines https://github.com/pytorch/TensorRT/blob/4e5b0f6e860910eb510fa70a76ee3eb9825e7a4d/WORKSPACE#L114C1-L124C3
 
 
 Building the Package
@@ -228,7 +216,7 @@ Begin by installing CMake.
 
 A few useful CMake options include:
 
-* CMake finders for TensorRT and cuDNN are provided in `cmake/Modules`. In order for CMake to use them, pass
+* CMake finders for TensorRT are provided in `cmake/Modules`. In order for CMake to use them, pass
   `-DCMAKE_MODULE_PATH=cmake/Modules` when configuring the project with CMake.
 * Libtorch provides its own CMake finder. In case CMake doesn't find it, pass the path to your install of
   libtorch with `-DTorch_DIR=<path to libtorch>/share/cmake/Torch`

notebooks/WORKSPACE.notebook

Lines changed: 0 additions & 6 deletions

@@ -69,12 +69,6 @@ http_archive(
 # Locally installed dependencies (use in cases of custom dependencies or aarch64)
 ####################################################################################
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

py/ci/build_whl.sh

Lines changed: 1 addition & 3 deletions

@@ -100,7 +100,6 @@ libtorchtrt() {
     CUDA_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.cuda_version()")
     TORCHTRT_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.torch_tensorrt_version_release()")
     TRT_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.tensorrt_version()")
-    CUDNN_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.cudnn_version()")
     TORCH_VERSION=$(${PY_DIR}/bin/python -c "from torch import __version__;print(__version__.split('+')[0])")
     cp ${PROJECT_DIR}/bazel-bin/libtorchtrt.tar.gz ${PROJECT_DIR}/py/wheelhouse/libtorchtrt-${TORCHTRT_VERSION}-cudnn${CUDNN_VERSION}-tensorrt${TRT_VERSION}-cuda${CUDA_VERSION}-libtorch${TORCH_VERSION}-x86_64-linux.tar.gz
 }
@@ -120,7 +119,6 @@ libtorchtrt_pre_cxx11_abi() {
     CUDA_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.cuda_version()")
     TORCHTRT_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.torch_tensorrt_version_release()")
     TRT_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.tensorrt_version()")
-    CUDNN_VERSION=$(cd ${PROJECT_DIR} && ${PY_DIR}/bin/python3 -c "import versions; versions.cudnn_version()")
     TORCH_VERSION=$(${PY_DIR}/bin/python -c "from torch import __version__;print(__version__.split('+')[0])")
-    cp ${PROJECT_DIR}/bazel-bin/libtorchtrt.tar.gz ${PROJECT_DIR}/py/wheelhouse/libtorchtrt-${TORCHTRT_VERSION}-pre-cxx11-abi-cudnn${CUDNN_VERSION}-tensorrt${TRT_VERSION}-cuda${CUDA_VERSION}-libtorch${TORCH_VERSION}-x86_64-linux.tar.gz
+    cp ${PROJECT_DIR}/bazel-bin/libtorchtrt.tar.gz ${PROJECT_DIR}/py/wheelhouse/libtorchtrt-${TORCHTRT_VERSION}-pre-cxx11-abi-tensorrt${TRT_VERSION}-cuda${CUDA_VERSION}-libtorch${TORCH_VERSION}-x86_64-linux.tar.gz
 }
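The second hunk renames the pre-cxx11-abi tarball by dropping the `cudnn${CUDNN_VERSION}` component. A hedged Python sketch of the resulting naming scheme (this helper is illustrative and does not exist in the repo; it only mirrors the updated `cp` destination):

```python
def pre_cxx11_abi_tarball_name(torchtrt: str, trt: str, cuda: str, torch: str) -> str:
    # Mirrors the updated cp destination in libtorchtrt_pre_cxx11_abi():
    # the cudnn${CUDNN_VERSION} component no longer appears in the file name.
    return (
        f"libtorchtrt-{torchtrt}-pre-cxx11-abi-"
        f"tensorrt{trt}-cuda{cuda}-libtorch{torch}-x86_64-linux.tar.gz"
    )
```

Note that in this commit the first function, `libtorchtrt()`, still writes a `cudnn${CUDNN_VERSION}` component in its `cp` destination even though the variable is no longer set above it.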

py/torch_tensorrt/__init__.py

Lines changed: 0 additions & 3 deletions

@@ -6,7 +6,6 @@
 
 from torch_tensorrt._version import (  # noqa: F401
     __cuda_version__,
-    __cudnn_version__,
     __tensorrt_version__,
     __version__,
 )
@@ -40,11 +39,9 @@ def _find_lib(name: str, paths: List[str]) -> str:
     import tensorrt  # noqa: F401
 except ImportError:
     cuda_version = _parse_semver(__cuda_version__)
-    cudnn_version = _parse_semver(__cudnn_version__)
     tensorrt_version = _parse_semver(__tensorrt_version__)
 
     CUDA_MAJOR = cuda_version["major"]
-    CUDNN_MAJOR = cudnn_version["major"]
     TENSORRT_MAJOR = tensorrt_version["major"]
 
     if sys.platform.startswith("win"):

setup.py

Lines changed: 0 additions & 4 deletions

@@ -26,7 +26,6 @@
 
 __version__: str = "0.0.0"
 __cuda_version__: str = "0.0"
-__cudnn_version__: str = "0.0"
 __tensorrt_version__: str = "0.0"
 
 LEGACY_BASE_VERSION_SUFFIX_PATTERN = re.compile("a0$")
@@ -62,7 +61,6 @@ def get_base_version() -> str:
 
 def load_dep_info():
     global __cuda_version__
-    global __cudnn_version__
     global __tensorrt_version__
     with open("dev_dep_versions.yml", "r") as stream:
         versions = yaml.safe_load(stream)
@@ -72,7 +70,6 @@ def load_dep_info():
         )
     else:
         __cuda_version__ = versions["__cuda_version__"]
-        __cudnn_version__ = versions["__cudnn_version__"]
         __tensorrt_version__ = versions["__tensorrt_version__"]
 
 
@@ -230,7 +227,6 @@ def gen_version_file():
     print("creating version file")
     f.write('__version__ = "' + __version__ + '"\n')
     f.write('__cuda_version__ = "' + __cuda_version__ + '"\n')
-    f.write('__cudnn_version__ = "' + __cudnn_version__ + '"\n')
     f.write('__tensorrt_version__ = "' + __tensorrt_version__ + '"\n')
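With the last hunk, the generated `_version.py` carries only the package, CUDA, and TensorRT version strings. A hedged sketch of the post-commit output (the explicit-parameter signature is illustrative; the real `gen_version_file()` reads module globals):

```python
def gen_version_file(path, version, cuda_version, tensorrt_version):
    # Sketch of setup.py's gen_version_file() after this commit:
    # no __cudnn_version__ line is emitted any more.
    with open(path, "w") as f:
        f.write(f'__version__ = "{version}"\n')
        f.write(f'__cuda_version__ = "{cuda_version}"\n')
        f.write(f'__tensorrt_version__ = "{tensorrt_version}"\n')
```

This is what lets `py/torch_tensorrt/__init__.py` drop `__cudnn_version__` from its `from torch_tensorrt._version import (...)` list in the same commit.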

toolchains/ci_workspaces/WORKSPACE.sbsa

Lines changed: 0 additions & 6 deletions

@@ -74,12 +74,6 @@ new_local_repository(
     build_file = "third_party/libtorch/BUILD"
 )
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

toolchains/ci_workspaces/WORKSPACE.x86_64

Lines changed: 0 additions & 6 deletions

@@ -75,12 +75,6 @@ new_local_repository(
     build_file = "third_party/libtorch/BUILD"
 )
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",

toolchains/ci_workspaces/WORKSPACE.x86_64.cu118.release.rhel

Lines changed: 0 additions & 6 deletions

@@ -75,12 +75,6 @@ http_archive(
 # Locally installed dependencies (use in cases of custom dependencies or aarch64)
 ####################################################################################
 
-new_local_repository(
-    name = "cudnn",
-    path = "/usr/",
-    build_file = "@//third_party/cudnn/local:BUILD"
-)
-
 new_local_repository(
     name = "tensorrt",
     path = "/usr/",
