* Revert "Reapply "Revert debug changes under .github/workflows""
This reverts commit 8f69f83.
* Add names of all CTK 12.8.1 x86_64-linux libraries (.so) as `path_finder.SUPPORTED_LIBNAMES`
https://chatgpt.com/share/67f98d0b-148c-8008-9951-9995cf5d860c
* Add `SUPPORTED_WINDOWS_DLLS`
* Add copyright notice
* Move SUPPORTED_LIBNAMES, SUPPORTED_WINDOWS_DLLS to _path_finder/supported_libs.py
* Use SUPPORTED_WINDOWS_DLLS in _windows_load_with_dll_basename()
* Change "Set up mini CTK" to use `method: local`, remove `sub-packages` line.
* Use Jimver/[email protected] also under Linux, `method: local`, no `sub-packages`.
* Add more `nvidia-*-cu12` wheels to get as many of the supported shared libraries as possible.
* Revert "Use Jimver/[email protected] also under Linux, `method: local`, no `sub-packages`."
This reverts commit d499806.
Problem observed:
```
/usr/bin/docker exec 1b42cd4ea3149ac3f2448eae830190ee62289b7304a73f8001e90cead5005102 sh -c "cat /etc/*release | grep ^ID"
Warning: Failed to restore: Cache service responded with 422
/usr/bin/tar --posix -cf cache.tgz --exclude cache.tgz -P -C /__w/cuda-python/cuda-python --files-from manifest.txt -z
Failed to save: Unable to reserve cache with key cuda_installer-linux-5.15.0-135-generic-x64-12.8.0, another job may be creating this cache. More details: This legacy service is shutting down, effective April 15, 2025. Migrate to the new service ASAP. For more information: https://gh.io/gha-cache-sunset
Warning: Error during installation: Error: Unable to locate executable file: sudo. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Error: Error: Unable to locate executable file: sudo. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
```
* Change test_path_finder::test_find_and_load() to skip cufile on Windows, and report exceptions as failures, except for cudart
* Add nvidia-cuda-runtime-cu12 to pyproject.toml (for libname cudart)
* test_path_finder.py: before loading cusolver, load nvJitLink, cusparse, cublas (experiment to see if that resolves the only Windows failure)
Test (win-64, Python 3.12, CUDA 12.8.0, Runner default, CTK wheels) / test
```
================================== FAILURES ===================================
________________________ test_find_and_load[cusolver] _________________________

libname = 'cusolver'

    @pytest.mark.parametrize("libname", path_finder.SUPPORTED_LIBNAMES)
    def test_find_and_load(libname):
        if sys.platform == "win32" and libname == "cufile":
            pytest.skip(f'test_find_and_load("{libname}") not supported on this platform')
        print(f'\ntest_find_and_load("{libname}")')
        failures = []
        for algo, func in (
            ("find", path_finder.find_nvidia_dynamic_library),
            ("load", path_finder.load_nvidia_dynamic_library),
        ):
            try:
                out = func(libname)
            except Exception as e:
                out = f"EXCEPTION: {type(e)} {str(e)}"
                failures.append(algo)
            print(out)
        print()
>       assert not failures
E       AssertionError: assert not ['load']

tests\test_path_finder.py:29: AssertionError
```
* test_path_finder.py: load *only* nvJitLink before loading cusolver
* Run each test_find_or_load_nvidia_dynamic_library() subtest in a subprocess
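  Running each subtest in a fresh interpreter keeps one library's process-global loader state from leaking into the next. A minimal sketch of the idea, using `subprocess` rather than the test suite's actual helper (the function name and snippet below are illustrative, not the real test code):

  ```python
  import subprocess
  import sys


  def run_isolated(code, timeout=30):
      """Run a Python snippet in a fresh interpreter process.

      Returns (returncode, stdout, stderr). A crash or hang in one
      library-load attempt cannot poison subsequent subtests.
      """
      result = subprocess.run(
          [sys.executable, "-c", code],
          capture_output=True,
          text=True,
          timeout=timeout,
      )
      return result.returncode, result.stdout, result.stderr


  # Hypothetical per-libname snippet; the real subtest would import
  # path_finder and call load_nvidia_dynamic_library(libname) here.
  def load_snippet(libname):
      return f"print('loading {libname}')"
  ```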
* Add cublasLt to supported_libs.py and load deps for cusolver, cusolverMg, cusparse in test_path_finder.py. Also restrict test_path_finder.py to test load only for now.
* Add supported_libs.DIRECT_DEPENDENCIES
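  A plausible shape for `DIRECT_DEPENDENCIES`: a mapping from libname to the libnames that must be loaded first (the cusolver/cusolverMg/cusparse entries below are illustrative guesses based on the commits above, not the file's actual contents):

  ```python
  # Illustrative only: libnames whose dynamic libraries must be loaded
  # before the key library itself can be loaded.
  DIRECT_DEPENDENCIES = {
      "cusolver": ("cublas", "cublasLt", "cusparse", "nvJitLink"),
      "cusolverMg": ("cublas", "cublasLt", "nvJitLink"),
      "cusparse": ("nvJitLink",),
  }


  def load_with_dependencies(libname, load_one):
      # Depth-first: load direct dependencies (and theirs) before libname.
      # load_one is any callable that loads a single library by name.
      for dep in DIRECT_DEPENDENCIES.get(libname, ()):
          load_with_dependencies(dep, load_one)
      return load_one(libname)
  ```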
* Remove cufile_rdma from supported libs (comment out).
https://chatgpt.com/share/68033a33-385c-8008-a293-4c8cc3ea23ae
* Split out `PARTIALLY_SUPPORTED_LIBNAMES`. Fix up test code.
* Reduce public API to only load_nvidia_dynamic_library, SUPPORTED_LIBNAMES
* Set CUDA_BINDINGS_PATH_FINDER_TEST_ALL_LIBNAMES=1 to match expected availability of nvidia shared libraries.
* Refactor as `class _find_nvidia_dynamic_library`
* Strict wheel, conda, system rule: try using the platform-specific dynamic loader search mechanisms only last
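  The search-order rule can be sketched as an ordered list of candidate locations, with the OS dynamic-loader search (a bare soname/DLL name) strictly last (the location labels are illustrative, not the module's actual identifiers):

  ```python
  import os


  def candidate_search_order():
      # Illustrative ordering: wheel layout (site-packages) first, then an
      # active conda prefix, then common system CTK locations; falling back
      # to the platform's default dynamic-loader search only as a last resort.
      order = ["wheel-site-packages"]
      if os.environ.get("CONDA_PREFIX"):
          order.append("conda-prefix")
      order.append("system-ctk")
      order.append("os-default-loader-search")
      return order
  ```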
* Introduce _load_and_report_path_linux(), add supported_libs.EXPECTED_LIB_SYMBOLS
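  `EXPECTED_LIB_SYMBOLS` presumably maps each libname to known exported symbols, used to sanity-check that an obtained handle really is the intended library. A sketch of that check via `ctypes` (the table entry below uses libm/`cos` as a stand-in; the real table's symbols are not shown in these commits):

  ```python
  import ctypes
  import ctypes.util

  # Illustrative shape: one or more known exported symbols per libname.
  # "m"/"cos" is a stand-in example, not an actual entry in supported_libs.py.
  EXPECTED_LIB_SYMBOLS = {
      "m": ("cos",),
  }


  def check_expected_symbols(handle, libname):
      # hasattr on a ctypes.CDLL performs a symbol lookup in the loaded
      # library; all() over an empty tuple is vacuously True.
      return all(hasattr(handle, sym) for sym in EXPECTED_LIB_SYMBOLS.get(libname, ()))
  ```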
* Plug in ctypes.windll.kernel32.GetModuleFileNameW()
* Keep track of nvrtc-related GitHub comment
* Factor out `_find_dll_under_dir(dirpath, file_wild)` and reuse from `_find_dll_using_nvidia_bin_dirs()`, `_find_dll_using_cudalib_dir()` (to fix loading nvrtc64_120_0.dll from local CTK)
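  A minimal sketch of what such a factored-out helper might look like: return the first file in a directory matching a wildcard like `nvrtc64_120*.dll` (function name and details here are illustrative, not the actual `_find_dll_under_dir` implementation):

  ```python
  import fnmatch
  import os


  def find_file_under_dir(dirpath, file_wild):
      # Return the absolute path of the first file in dirpath whose name
      # matches file_wild (e.g. "nvrtc64_120*.dll"), or None if no match.
      # Sorting makes the choice deterministic when several files match.
      if not os.path.isdir(dirpath):
          return None
      for name in sorted(os.listdir(dirpath)):
          if fnmatch.fnmatch(name, file_wild):
              return os.path.join(dirpath, name)
      return None
  ```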
* Minimal "is already loaded" code.
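  On Linux, an "is already loaded" check can be done with `dlopen` and `RTLD_NOLOAD`, which returns a handle only if the library is already mapped into the process. A Linux-only sketch under that assumption (not the actual code; the Windows side would go through `GetModuleHandleW`):

  ```python
  import ctypes
  import os
  import sys


  def already_loaded(soname):
      # RTLD_NOLOAD: dlopen succeeds only if soname is already mapped into
      # this process; it never loads anything new.
      if sys.platform == "win32":
          raise NotImplementedError("Windows would use GetModuleHandleW")
      try:
          ctypes.CDLL(soname, mode=os.RTLD_NOLOAD)
      except OSError:
          return False
      return True
  ```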
* Add THIS FILE NEEDS TO BE REVIEWED/UPDATED FOR EACH CTK RELEASE comment in _path_finder/supported_libs.py
* Add SUPPORTED_LINUX_SONAMES in _path_finder/supported_libs.py
* Update SUPPORTED_WINDOWS_DLLS in _path_finder/supported_libs.py based on DLLs found in cuda_*win*.exe files.
* Remove `os.add_dll_directory()` and `os.environ["PATH"]` manipulations from find_nvidia_dynamic_library.py. Add `supported_libs.LIBNAMES_REQUIRING_OS_ADD_DLL_DIRECTORY` and use from `load_nvidia_dynamic_library()`.
* Move nvrtc-specific code from find_nvidia_dynamic_library.py to `supported_libs.is_suppressed_dll_file()`
* Introduce dataclass LoadedDL as return type for load_nvidia_dynamic_library()
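  A plausible sketch of the `LoadedDL` return type; only `was_already_loaded_from_elsewhere` is named in these commits, so the other fields are guesses:

  ```python
  from dataclasses import dataclass
  from typing import Optional


  @dataclass
  class LoadedDL:
      # Illustrative fields: an OS-level handle, the resolved absolute path
      # (if it could be determined), and whether the library was found
      # already loaded rather than loaded by the path finder itself.
      handle: int
      abs_path: Optional[str]
      was_already_loaded_from_elsewhere: bool
  ```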
* Factor out _abs_path_for_dynamic_library_* and use on handle obtained through "is already loaded" checks
* Factor out _load_nvidia_dynamic_library_no_cache() and use for exercising LoadedDL.was_already_loaded_from_elsewhere
* _check_nvjitlink_usable() in test_path_finder.py
* Undo changes in .github/workflows/ and cuda_bindings/pyproject.toml
* Move cuda_bindings/tests/path_finder.py -> toolshed/run_cuda_bindings_path_finder.py
* Add bandit suppressions in test_path_finder.py
* Add pytest info_summary_append fixture and use from test_path_finder.py to report the absolute paths of the loaded libraries.