fix: Throw exception if inadequate GPUs #1508
Conversation
Walkthrough
The change updates the GPU allocation logic in deploy/sdk/src/dynamo/sdk/cli/allocator.py: requesting more GPUs than remain available now raises a ResourceError instead of logging a warning.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant ResourceAllocator
    User->>ResourceAllocator: assign_gpus(requested_count)
    ResourceAllocator->>ResourceAllocator: Check remaining GPUs
    alt Requested > Remaining
        ResourceAllocator-->>User: Raise ResourceError (insufficient GPUs)
    else Requested <= Remaining
        ResourceAllocator->>ResourceAllocator: Assign GPUs
        ResourceAllocator-->>User: Return assigned GPUs
    end
```
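The flow above can be sketched in plain Python. This is a hypothetical minimal version for illustration only; the real allocator in deploy/sdk/src/dynamo/sdk/cli/allocator.py is more involved, and both class names here are stand-ins taken from the diagram:

```python
class ResourceError(Exception):
    """Illustrative stand-in for the SDK's allocation error."""

class ResourceAllocator:
    """Minimal sketch of the allocation flow shown in the diagram."""

    def __init__(self, total_gpus: int) -> None:
        self.total_gpus = total_gpus
        self.remaining_gpus = total_gpus

    def assign_gpus(self, count: int) -> list[int]:
        # Check remaining GPUs before assigning anything.
        if count > self.remaining_gpus:
            raise ResourceError(
                f"Requested {count} GPUs, but only "
                f"{self.remaining_gpus} are remaining."
            )
        # Hand out the next `count` device indices.
        first = self.total_gpus - self.remaining_gpus
        assigned = list(range(first, first + count))
        self.remaining_gpus -= count
        return assigned
```

With three GPUs, `assign_gpus(2)` returns `[0, 1]`; a subsequent `assign_gpus(2)` raises `ResourceError`, which matches the hard-failure behaviour this PR introduces.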
Actionable comments posted: 0
🔭 Outside diff range comments (1)
deploy/sdk/src/dynamo/sdk/cli/allocator.py (1)
Lines 80-86: 🛠️ Refactor suggestion
Correct move from soft-warning to hard failure, but emit a log entry before raising.
Switching from logger.warning to an immediate ResourceError fulfils the PR goal 👍. However, because the raise short-circuits the method, the message is no longer captured in the logs unless the caller logs uncaught exceptions. A single logger.error(...) right before the raise keeps parity with the previous observability.

```diff
 if count > self.remaining_gpus:
-    raise ResourceError(
+    logger.error(  # visible even when the caller swallows the exception
+        f"Requested {count} GPUs, but only {self.remaining_gpus} are remaining. "
+        f"Serving may fail due to inadequate GPUs. Set {DYN_DISABLE_AUTO_GPU_ALLOCATION}=1 "
+        "to disable automatic allocation and allocate GPUs manually."
+    )
+    raise ResourceError(
         f"Requested {count} GPUs, but only {self.remaining_gpus} are remaining. "
         f"Serving may fail due to inadequate GPUs. Set {DYN_DISABLE_AUTO_GPU_ALLOCATION}=1 "
         "to disable automatic allocation and allocate GPUs manually."
     )
```
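The observability point can be demonstrated with a small self-contained sketch using only the standard library. `assign_gpus` here is a simplified hypothetical stand-in, not the SDK function:

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("allocator")

class ResourceError(Exception):
    """Illustrative stand-in for the SDK's allocation error."""

def assign_gpus(count: int, remaining: int) -> int:
    """Simplified stand-in: returns the GPUs left after assignment."""
    if count > remaining:
        msg = f"Requested {count} GPUs, but only {remaining} are remaining."
        logger.error(msg)  # recorded even if the caller swallows the exception
        raise ResourceError(msg)
    return remaining - count

# A caller that swallows the exception still leaves the error in the logs.
try:
    assign_gpus(3, 1)
except ResourceError:
    pass
```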
🧹 Nitpick comments (1)
deploy/sdk/src/dynamo/sdk/cli/allocator.py (1)
Lines 86-87: Integer cast drops fractional capacity.
self.remaining_gpus = int(max(0, self.remaining_gpus - count)) silently floors fractional remainders. Two consecutive assign_gpus(0.5, …) calls on a 1-GPU system would mark the machine as fully consumed after the first call (1 − 0.5 = 0.5 → int() → 0). Either keep remaining_gpus as a float or track whole and fractional capacity separately.

```diff
-self.remaining_gpus = int(max(0, self.remaining_gpus - count))
+self.remaining_gpus = max(0.0, self.remaining_gpus - count)  # keep precision
```
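The flooring problem is easy to reproduce in isolation. The helper names below are hypothetical; only the two expressions from the diff are taken from the code:

```python
def consume_floored(remaining: float, count: float) -> int:
    # Current behaviour: int() floors fractional remainders.
    return int(max(0, remaining - count))

def consume_float(remaining: float, count: float) -> float:
    # Suggested fix: keep the fractional remainder.
    return max(0.0, remaining - count)

# On a 1-GPU machine, a 0.5-GPU request should leave 0.5 GPUs free,
# but the int() cast reports the machine as fully consumed.
after_floored = consume_floored(1.0, 0.5)  # → 0
after_float = consume_float(1.0, 0.5)      # → 0.5
```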
📒 Files selected for processing (1)
deploy/sdk/src/dynamo/sdk/cli/allocator.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build and Test - vllm
Overview:
Throw exception if inadequate GPUs in dynamo serve.
Details:
Disaggregated EPD serving needs 3 GPUs: one each for E, P, and D. If the user has fewer GPUs available than an EPD service needs, the service "starts up" normally but hangs when the user makes an inference call, with no sign or error that there aren't enough GPUs to run the service.
Expected Behavior
When serving, an error should be thrown at/around the GPU allocation step.
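As a sketch of that expected behaviour, the check below fails fast at startup. The helper name and component mapping are hypothetical, assuming one GPU per E/P/D component as described above:

```python
class ResourceError(Exception):
    """Illustrative stand-in for the SDK's allocation error."""

def check_gpu_budget(components: dict[str, int], available_gpus: int) -> None:
    """Raise at startup if the components' GPU needs exceed the budget."""
    remaining = available_gpus
    for name, needed in components.items():
        if needed > remaining:
            raise ResourceError(
                f"Component {name!r} needs {needed} GPU(s), but only "
                f"{remaining} of {available_gpus} remain."
            )
        remaining -= needed

# EPD serving: one GPU each for E, P, and D.
epd = {"E": 1, "P": 1, "D": 1}
```

`check_gpu_budget(epd, 2)` raises immediately, instead of letting the service start and then hang at the first inference call.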
Where should the reviewer start?
Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)