
Commit b157bbf

tanmayv25 authored and mc-nv committed
Add cusparseLt in the installation to support 24.06 (#132)
* Add cusparseLt in the installation to support 24.06

* Fix the arm build
1 parent 8d14a80 commit b157bbf

2 files changed: 6 additions and 2 deletions


CMakeLists.txt

Lines changed: 4 additions & 0 deletions
@@ -229,6 +229,8 @@ if (${TRITON_PYTORCH_DOCKER_BUILD})
 COMMAND docker cp pytorch_backend_ptlib:/usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_cuda_linalg.so libtorch_cuda_linalg.so
 COMMAND docker cp pytorch_backend_ptlib:/usr/local/lib/python3.10/dist-packages/torch/lib/libtorch_global_deps.so libtorch_global_deps.so
 COMMAND docker cp pytorch_backend_ptlib:/usr/local/lib/python3.10/dist-packages/torch/lib/libcaffe2_nvrtc.so libcaffe2_nvrtc.so
+# TODO: Revisit when not needed by making it part of cuda base container.
+COMMAND docker cp -L pytorch_backend_ptlib:/usr/local/cuda-12.5/targets/${LIBS_ARCH}-linux/lib/libcusparseLt.so libcusparseLt.so
 COMMAND docker cp pytorch_backend_ptlib:/usr/local/lib/libtorchvision.so libtorchvision.so
 COMMAND /bin/sh -c "if [ ${TRITON_PYTORCH_ENABLE_TORCHTRT} = 'ON' ]; then docker cp pytorch_backend_ptlib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib/libtorchtrt_runtime.so libtorchtrt_runtime.so; fi"
 COMMAND docker cp pytorch_backend_ptlib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/bin/torchtrtc torchtrtc || echo "error ignored..." || true
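For context on the added lines: libcusparseLt.so is pulled out of the temporary pytorch_backend_ptlib build container the same way as the neighbouring Torch libraries. A minimal sketch of the pattern follows, assuming an add_custom_command wrapper (the OUTPUT and COMMENT wiring here is illustrative, not the repository's exact invocation). The notable detail is docker cp -L, which dereferences the symlink inside the container so the real, versioned library file is copied out under the plain name.

# Illustrative sketch only; the real CMakeLists.txt chains many docker cp
# COMMANDs inside a single custom command.
add_custom_command(
  OUTPUT libcusparseLt.so
  # -L (--follow-link) dereferences the symlink inside the container, so the
  # actual versioned library lands here under the unversioned name.
  COMMAND docker cp -L pytorch_backend_ptlib:/usr/local/cuda-12.5/targets/${LIBS_ARCH}-linux/lib/libcusparseLt.so libcusparseLt.so
  COMMENT "Copying libcusparseLt.so out of the PyTorch build container")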
@@ -434,6 +436,7 @@ if (${TRITON_PYTORCH_DOCKER_BUILD})
 install(
 FILES
 ${PT_LIB_PATHS}
+${CMAKE_CURRENT_BINARY_DIR}/libcusparseLt.so
 ${CMAKE_CURRENT_BINARY_DIR}/LICENSE.pytorch
 DESTINATION ${CMAKE_INSTALL_PREFIX}/backends/pytorch
 )
@@ -474,6 +477,7 @@ if (${TRITON_PYTORCH_DOCKER_BUILD})
 COMMAND ln -sf libopencv_flann.so libopencv_flann.so.${OPENCV_VERSION}
 COMMAND ln -sf libpng16.so libpng16.so.16
 COMMAND ln -sf libjpeg.so libjpeg.so.8
+COMMAND ln -sf libcusparseLt.so libcusparseLt.so.0
 RESULT_VARIABLE LINK_STATUS
 WORKING_DIRECTORY ${CMAKE_INSTALL_PREFIX}/backends/pytorch)
 if(LINK_STATUS AND NOT LINK_STATUS EQUAL 0)
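The added ln -sf line gives libcusparseLt.so the same soname treatment as the neighbouring OpenCV, libpng and libjpeg libraries: a libcusparseLt.so.0 link is created in the install directory so that a loader request for the versioned name resolves to the shipped file. Judging from the RESULT_VARIABLE and WORKING_DIRECTORY arguments visible in the hunk, the commands are presumably chained inside an execute_process() call; a minimal sketch under that assumption (the error message is illustrative):

# Sketch, assuming execute_process(); the real file chains several ln -sf
# COMMANDs in one call and reuses the same status check.
execute_process(
  COMMAND ln -sf libcusparseLt.so libcusparseLt.so.0   # creates link .so.0 -> target .so
  RESULT_VARIABLE LINK_STATUS
  WORKING_DIRECTORY ${CMAKE_INSTALL_PREFIX}/backends/pytorch)
# A non-zero exit code from the last command means a link could not be created.
if(LINK_STATUS AND NOT LINK_STATUS EQUAL 0)
  message(FATAL_ERROR "Failed to create the libcusparseLt.so.0 symlink")
endif()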

README.md

Lines changed: 2 additions & 2 deletions
@@ -146,11 +146,11 @@ key: "INFERENCE_MODE"
 
 * `DISABLE_CUDNN`: Boolean flag to disable the cuDNN library. By default, cuDNN is enabled.
 
-[cuDNN](https://developer.nvidia.com/cudnn) is a GPU-accelerated library of primitives for 
+[cuDNN](https://developer.nvidia.com/cudnn) is a GPU-accelerated library of primitives for
 deep neural networks. cuDNN provides highly tuned implementations for standard routines.
 
 Typically, models run with cuDNN enabled are faster. However there are some exceptions
-where using cuDNN can be slower, cause higher memory usage or result in errors. 
+where using cuDNN can be slower, cause higher memory usage or result in errors.
 
 
 The section of model config file specifying this parameter will look like:
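The config snippet the README promises falls just outside this hunk. Going by the analogous INFERENCE_MODE block visible in the hunk header, the parameter is set per model in config.pbtxt roughly as follows (the value shown is illustrative):

parameters: {
  key: "DISABLE_CUDNN"
  value: {
    string_value: "true"
  }
}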
