
build: DIS-148 use the tensorrt_llm public wheel from pypi by default in container build #1525


Open
wants to merge 5 commits into base: main

Conversation

@richardhuo-nv (Contributor) commented Jun 13, 2025

Overview:

Change build.sh to use the tensorrt_llm public release version from PyPI by default. Since we don't own the TensorRT-LLM CI pipeline, pointing users at an arbitrary TensorRT-LLM commit risks broken or unreproducible builds.

Details:

  1. If users run ./container/build.sh --framework tensorrtllm, it installs the default TensorRT-LLM public pip wheel from PyPI.
  2. Users can pass the --use-default-experimental-tensorrtllm-commit arg to build.sh to pick up the TensorRT-LLM build we are currently experimenting with.
  3. Users can still run build.sh with --tensorrtllm-commit to specify any commit.
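The three modes above can be sketched as a small resolution function. This is a rough sketch only: the function name, placeholder commit, and argument handling are illustrative and not the actual container/build.sh code; the default wheel pin tensorrt_llm==0.21.0rc1 is the one this PR introduces.

```shell
# Illustrative sketch of how build.sh chooses a TensorRT-LLM source.
#   $1: "1" if --use-default-experimental-tensorrtllm-commit was passed
#   $2: value of --tensorrtllm-commit, or empty
resolve_trtllm_source() {
    use_default_experimental="$1"
    commit="$2"
    default_experimental_commit="0000000"          # placeholder, not the real pin
    default_pip_wheel="tensorrt_llm==0.21.0rc1"    # PyPI default added by this PR

    # The two commit-selecting options are mutually exclusive.
    if [ "$use_default_experimental" = "1" ] && [ -n "$commit" ]; then
        echo "ERROR: --use-default-experimental-tensorrtllm-commit cannot be combined with --tensorrtllm-commit" >&2
        return 1
    fi

    if [ "$use_default_experimental" = "1" ]; then
        echo "commit:$default_experimental_commit"  # build from the experimental commit
    elif [ -n "$commit" ]; then
        echo "commit:$commit"                       # build from a user-specified commit
    else
        echo "pip:$default_pip_wheel"               # default: public wheel from PyPI
    fi
}
```

With no flags set, the function falls through to the PyPI wheel, which matches the new default behavior.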

Build pipeline with the public TensorRT-LLM pip wheel:
https://gitlab-master.nvidia.com/dl/ai-dynamo/dynamo-ci/-/pipelines/30052494

Unit tests on TensorRT-LLM examples:

root@ptyche0016:/workspace# pytest -v -m tensorrtllm --ignore=benchmarks/data_generator/tests/ --ignore=lib/bindings/python/tests/ --ignore=tutorials/advanced_source/torch_script_custom_classes --ignore=tutorials/advanced_source/torch_script_custom_ops/
=========================================================================================================================== test session starts ===========================================================================================================================
platform linux -- Python 3.12.3, pytest-8.4.0, pluggy-1.5.0 -- /usr/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/workspace/.hypothesis/examples'))
benchmark: 5.1.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /workspace
configfile: pyproject.toml
plugins: anyio-4.9.0, hypothesis-6.130.8, asyncio-1.0.0, benchmark-5.1.0, pytest_codeblocks-0.17.0, cov-6.2.1, flakefinder-1.1.0, md-report-0.7.0, mypy-1.0.1, rerunfailures-15.0, shard-0.1.2, timeout-2.4.0, xdist-3.6.1, xdoctest-1.0.2, typeguard-4.3.0
asyncio: mode=Mode.AUTO, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 298 items / 294 deselected / 4 selected
Running 4 items in this shard: tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg_router], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg], tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg_router]

tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg] PASSED                                                                                                                                                                                          [ 25%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_agg_router] PASSED                                                                                                                                                                                   [ 50%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg] PASSED                                                                                                                                                                                       [ 75%]
tests/serve/test_dynamo_serve.py::test_serve_deployment[trtllm_disagg_router] PASSED                                                                                                                                                                                [100%]

============================================================================================================== 4 passed, 294 deselected in 586.99s (0:09:46) ==============================================================================================================

Test with the DeepSeek R1 multi-node example on ptyche:

rihuo@ptyche0011:~$ curl -w "%{http_code}" ${HOST}:${PORT}/v1/chat/completions   -H "Content-Type: application/json"   -d '{
  "model": "'${MODEL}'",
  "messages": [
  {
    "role": "user",
    "content": "Tell me a story as if we were playing dungeons and dragons."
  }
  ],
  "stream": true,
  "max_tokens": 30
}'
 0: 2025-06-13 13:13:08,398 - INFO - flashinfer.jit: Loading JIT ops: rope
 0: 2025-06-13 13:13:32,993 - INFO - flashinfer.jit: Finished loading JIT ops: rope
data: {"id":"chatcmpl-33eb5724-c3ff-4a10-87cc-912e48004682","choices":[{"index":0,"delta":{"content":"Okay","function_call":null,"tool_calls":null,"role":"assistant","refusal":null},"finish_reason":null,"logprobs":null}],"created":1749845588,"model":"hf-574fdb8-nim_fp4","service_tier":null,"system_fingerprint":null,"object":"chat.completion.chunk","usage":{"prompt_tokens":19,"completion_tokens":1,"total_tokens":0,"prompt_tokens_details":null,"completion_tokens_details":null}}

data: [DONE]

200

Perf benchmark pareto graph:
https://gitlab-master.nvidia.com/dl/ai-dynamo/dynamo-ci/-/jobs/178323320/artifacts/file/benchmark_results/deepseek-ai/DeepSeek-R1-FP4/pareto_plot.png

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • New Features
    • Added a new build option to use an experimental TensorRT-LLM commit when building containers.
  • Documentation
    • Updated instructions to explain the new build flag and its usage, including warnings about experimental status.
    • Clarified requirements for Multi-Token Prediction (MTP) and provided example usage with the new flag.

copy-pr-bot bot commented Jun 13, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Contributor

coderabbitai bot commented Jun 13, 2025

Walkthrough

A new command-line flag, --use-default-experimental-tensorrtllm-commit, was added to the container build script to enable building with an experimental TensorRT-LLM commit. The script enforces mutual exclusivity with other related options, updates default commit and wheel variables, and README documentation was revised to guide users on this new workflow.

Changes

File(s) Change Summary
container/build.sh Added --use-default-experimental-tensorrtllm-commit flag, new default commit/wheel variables, option validation, and updated help text.
examples/tensorrt_llm/README.md Updated build instructions and usage notes to document the new experimental commit flag and its implications for MTP.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant BuildScript
    participant TensorRTLLM

    User->>BuildScript: Run with --use-default-experimental-tensorrtllm-commit
    BuildScript->>BuildScript: Validate no conflicting options
    BuildScript->>BuildScript: Set TRTLLM_COMMIT to default experimental commit
    BuildScript->>TensorRTLLM: Build with experimental commit

Possibly related PRs

Poem

A flag for the bold, a commit for the brave,
Experimental code—just mind how you behave!
With scripts now aware of what you select,
The README will guide you—no need to reflect.
🐇 Happy building, may your tests all connect!


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9470c01 and 4c6081b.

📒 Files selected for processing (1)
  • container/build.sh (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • container/build.sh
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Build and Test - vllm


Contributor

@coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
container/build.sh (3)

91-93: Variable naming consistency
The variable DEFAULT_EXPERIMENTAL_TRTLLM_COMMIT (missing “ensor”) does not align with the full framework name in the CLI flag (--use-default-experimental-tensorrtllm-commit). Consider renaming to DEFAULT_EXPERIMENTAL_TENSORRTLLM_COMMIT and USE_DEFAULT_EXPERIMENTAL_TENSORRTLLM_COMMIT for clarity.


159-165: Error message typo in flag name
The block for --use-default-experimental-tensorrtllm-commit outputs errors referring to --use-default-experimental-trtllm-commit (missing “ensor”). Update these echoes to match the full flag.


482-488: Mutual exclusivity error message
The mutual-exclusion check echoes --use-default-experimental-trtllm-commit again. It should reference --use-default-experimental-tensorrtllm-commit to stay consistent.
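To keep the messages consistent, the check could be shaped like the following sketch. The variable names and the third argument are illustrative, not the actual build.sh code; the point is the fully spelled flag name in the error text.

```shell
# Illustrative mutual-exclusion check with the fully spelled flag name.
#   $1: "true" if --use-default-experimental-tensorrtllm-commit was passed
#   $2: value of --tensorrtllm-commit, or empty
#   $3: value of an explicit pip-wheel override, or empty
check_trtllm_flags() {
    use_default="$1"; commit="$2"; wheel="$3"
    if [ "$use_default" = "true" ] && { [ -n "$commit" ] || [ -n "$wheel" ]; }; then
        echo "ERROR: --use-default-experimental-tensorrtllm-commit cannot be combined with --tensorrtllm-commit or a pip wheel override" >&2
        return 1
    fi
    return 0
}
```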

examples/tensorrt_llm/README.md (1)

143-147: Standardize example label formatting
The MTP note uses lowercase "ex:", whereas other sections use "Example:". For consistency, switch to "Example:".

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 99e67e6 and 00724cc.

📒 Files selected for processing (2)
  • container/build.sh (4 hunks)
  • examples/tensorrt_llm/README.md (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (4)
container/build.sh (2)

94-97: Default pip wheel version verification
You've set DEFAULT_TENSORRTLLM_PIP_WHEEL="tensorrt_llm==0.21.0rc1". Ensure that using the RC as the default is intentional and stable; update to a GA release if required.


490-494: Default pip wheel fallback logic looks correct
The fallback to assign TENSORRTLLM_PIP_WHEEL="$DEFAULT_TENSORRTLLM_PIP_WHEEL" when neither wheel nor commit is set aligns with the PR’s goal to default to PyPI.

examples/tensorrt_llm/README.md (2)

65-70: Experimental build example is clear
This snippet accurately demonstrates how to use the new flag for experimental commits.


287-289: MTP note is consistent here
This section correctly reiterates that MTP requires the experimental commit flag.

@richardhuo-nv richardhuo-nv changed the title build: use the tensorrt_llm public release version from pypi by default build: DIS-148 use the tensorrt_llm public release version from pypi by default Jun 13, 2025
@richardhuo-nv richardhuo-nv changed the title build: DIS-148 use the tensorrt_llm public release version from pypi by default build: DIS-148 use the tensorrt_llm public wheel from pypi by default in container build Jun 13, 2025