refactor: Upgrade bazel and move to MODULE.bazel #3012

Merged
1 commit merged on Jul 31, 2024
9 changes: 5 additions & 4 deletions .bazelrc
@@ -20,10 +20,11 @@
# +------------------------------------------------------------+
# | Build Configurations |
# +------------------------------------------------------------+
# Enable colorful output of GCC
build:default --cxxopt="-fdiagnostics-color=always"
build:default --cxxopt='-std=c++17'
#build:default --linkopt="-Wl,--no-as-needed"

common --enable_platform_specific_config

build:linux --cxxopt="-std=c++17"
build:linux --cxxopt="-fdiagnostics-color=always"

build:windows --cxxopt="/GS-" --cxxopt="/std:c++17" --cxxopt="/permissive-"
build:windows --cxxopt="/wd4244" --cxxopt="/wd4267" --cxxopt="/wd4819"
2 changes: 1 addition & 1 deletion .bazelversion
@@ -1 +1 @@
6.3.2
7.2.1
66 changes: 23 additions & 43 deletions WORKSPACE → MODULE.bazel
@@ -1,44 +1,37 @@
workspace(name = "Torch-TensorRT")

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
name = "rules_python",
sha256 = "863ba0fa944319f7e3d695711427d9ad80ba92c6edd0b7c7443b84e904689539",
strip_prefix = "rules_python-0.22.0",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.22.0/rules_python-0.22.0.tar.gz",
module(
name = "torch_tensorrt",
version = "2.5.0a0",
repo_name = "org_pytorch_tensorrt",
)

load("@rules_python//python:repositories.bzl", "py_repositories")

py_repositories()
bazel_dep(name = "googletest", version = "1.14.0")
bazel_dep(name = "platforms", version = "0.0.10")
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_python", version = "0.34.0")

http_archive(
name = "rules_pkg",
sha256 = "8f9ee2dc10c1ae514ee599a8b42ed99fa262b757058f65ad3c384289ff70c4b8",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.9.1/rules_pkg-0.9.1.tar.gz",
],
python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
ignore_root_user_error = True,
python_version = "3.11",
)

load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")

rules_pkg_dependencies()

http_archive(
name = "googletest",
sha256 = "755f9a39bc7205f5a0c428e920ddad092c33c8a1b46997def3f1d4a82aded6e1",
strip_prefix = "googletest-5ab508a01f9eb089207ee87fd547d290da39d015",
urls = ["https://github.com/google/googletest/archive/5ab508a01f9eb089207ee87fd547d290da39d015.zip"],
bazel_dep(name = "rules_pkg", version = "1.0.1")
git_override(
module_name = "rules_pkg",
commit = "17c57f4",
remote = "https://github.com/narendasan/rules_pkg",
)
Comment on lines +19 to +22
Collaborator:

Any reason for overriding a specific commit and for using your own repo?

Collaborator Author (@narendasan, Jul 31, 2024):

There's an issue with building as root in the container without the patches in my fork.


local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "local_repository")

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
name = "torch_tensorrt",
path = "/opt/conda/lib/python3.8/site-packages/torch_tensorrt",
)
Collaborator:

Should this still be python3.8?

Collaborator Author (@narendasan):

I think we just use this for DLFW CI, so @apbose might know.

Collaborator (@apbose, Jul 17, 2024):

The DLFW CI uses the WORKSPACE.ngc within the docker folder; python3.10 is used there:

# External dependency for torch_tensorrt if you already have precompiled binaries.
# This is currently used in pytorch NGC container CI testing.
local_repository(
    name = "torch_tensorrt",
    path = "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/"
)

@narendasan based on the MODULE.bazel, torchTRT dependencies will be handled with bazel_dep. So in what cases will the local_repository rule be used as compared to the bazel_dep dependencies?

Collaborator Author (@narendasan):

Local repositories still exist for MODULE.bazel; it is just that there is now a Bazel package registry for the common packages that we use.
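
For illustration, a minimal MODULE.bazel sketch (the module name and local path below are hypothetical) showing a registry dependency and a locally installed package declared side by side, mirroring what this PR does:

    module(name = "example", version = "0.0.1")

    # Common dependencies resolve from the Bazel module registry.
    bazel_dep(name = "rules_python", version = "0.34.0")

    # Repo rules such as local_repository are still usable; under bzlmod they
    # are brought in with use_repo_rule instead of a WORKSPACE load().
    local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "local_repository")

    # Points at a precompiled install already present on the filesystem (hypothetical path).
    local_repository(
        name = "torch_tensorrt",
        path = "/usr/local/lib/python3.10/dist-packages/torch_tensorrt/",
    )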


new_local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "new_local_repository")

# CUDA should be installed on the system locally
new_local_repository(
name = "cuda",
@@ -52,6 +45,8 @@ new_local_repository(
path = "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.4/",
)

http_archive = use_repo_rule("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

#############################################################################################################
# Tarballs and fetched dependencies (default - use in cases when building from precompiled bin and tarballs)
#############################################################################################################
@@ -129,18 +124,3 @@ http_archive(
# path = "/usr/",
# build_file = "@//third_party/tensorrt/local:BUILD"
#)

#########################################################################
# Development Dependencies (optional - comment out on aarch64)
#########################################################################

load("@rules_python//python:pip.bzl", "pip_parse")

pip_parse(
name = "devtools_deps",
requirements = "//:requirements-dev.txt",
)

load("@devtools_deps//:requirements.bzl", "install_deps")

install_deps()
127 changes: 127 additions & 0 deletions MODULE.bazel.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion cpp/include/torch_tensorrt/macros.h
@@ -24,7 +24,7 @@
#define STR(x) XSTR(x)

#define TORCH_TENSORRT_MAJOR_VERSION 2
#define TORCH_TENSORRT_MINOR_VERSION 4
#define TORCH_TENSORRT_MINOR_VERSION 5
#define TORCH_TENSORRT_PATCH_VERSION 0
#define TORCH_TENSORRT_VERSION \
STR(TORCH_TENSORRT_MAJOR_VERSION) \
3 changes: 2 additions & 1 deletion packaging/pre_build_script.sh
@@ -8,6 +8,7 @@ wget https://github.com/bazelbuild/bazelisk/releases/download/v1.17.0/bazelisk-l
&& chmod +x /usr/bin/bazel

export TORCH_BUILD_NUMBER=$(python -c "import torch, urllib.parse as ul; print(ul.quote_plus(torch.__version__))")
export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")

cat toolchains/ci_workspaces/WORKSPACE.x86_64.release.rhel.tmpl | envsubst > WORKSPACE
cat toolchains/ci_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
export CI_BUILD=1
6 changes: 3 additions & 3 deletions packaging/pre_build_script_windows.sh
@@ -8,11 +8,11 @@ pip install tensorrt==${TRT_VERSION} tensorrt-${CU_VERSION::4}-bindings==${TRT_V

choco install bazelisk -y

curl -Lo TensorRT.zip https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/zip/TensorRT-10.1.0.27.Windows.win10.cuda-12.4.zip
unzip -o TensorRT.zip -d C:/
#curl -Lo TensorRT.zip https://developer.download.nvidia.com/compute/machine-learning/tensorrt/10.0.1/zip/TensorRT-10.0.1.6.Windows10.win10.cuda-12.4.zip
Collaborator:

This doesn't need to change to TRT 10.0, I think, although it's commented out.

#unzip -o TensorRT.zip -d C:/

export CUDA_HOME="$(echo ${CUDA_PATH} | sed -e 's#\\#\/#g')"

cat toolchains/ci_workspaces/WORKSPACE.win.release.tmpl | envsubst > WORKSPACE
cat toolchains/ci_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel

echo "RELEASE=1" >> ${GITHUB_ENV}
2 changes: 1 addition & 1 deletion py/torch_tensorrt/dynamo/_compiler.py
@@ -651,6 +651,6 @@ def convert_module_to_trt_engine(

with io.BytesIO() as engine_bytes:
engine_bytes.write(interpreter_result.engine)
engine_bytearray = engine_bytes.getvalue()
engine_bytearray: bytes = engine_bytes.getvalue()

return engine_bytearray
21 changes: 20 additions & 1 deletion setup.py
@@ -202,7 +202,7 @@ def build_libtorchtrt_pre_cxx11_abi(
if IS_WINDOWS:
cmd.append("--config=windows")
else:
cmd.append("--config=default")
cmd.append("--config=linux")

if JETPACK_VERSION == "4.5":
cmd.append("--platforms=//toolchains:jetpack_4.5")
@@ -482,6 +482,23 @@ def run(self):
package_data = {}

if not (PY_ONLY or NO_TS):
tensorrt_linux_external_dir = (
lambda: subprocess.check_output(
["bazel", "query", "@tensorrt//:nvinfer", "--output", "location"]
)
.decode("ascii")
.strip()
.split("/BUILD.bazel")[0]
)
tensorrt_windows_external_dir = (
lambda: subprocess.check_output(
["bazel", "query", "@tensorrt_win//:nvinfer", "--output", "location"]
)
.decode("ascii")
.strip()
.split("/BUILD.bazel")[0]
)

ext_modules += [
CUDAExtension(
"torch_tensorrt._C",
@@ -513,13 +530,15 @@ def run(self):
+ "/../bazel-Torch-TensorRT/external/tensorrt_win/include",
dir_path + "/../bazel-TensorRT/external/tensorrt_win/include",
dir_path + "/../bazel-tensorrt/external/tensorrt_win/include",
f"{tensorrt_windows_external_dir()}/include",
]
if IS_WINDOWS
else [
dir_path + "/../bazel-TRTorch/external/tensorrt/include",
dir_path + "/../bazel-Torch-TensorRT/external/tensorrt/include",
dir_path + "/../bazel-TensorRT/external/tensorrt/include",
dir_path + "/../bazel-tensorrt/external/tensorrt/include",
f"{tensorrt_linux_external_dir()}/include",
]
)
),