Fix issue using count/min/max of inference-accelerators #414

Merged

Conversation

@dims (Member) commented Feb 22, 2025

Issue #, if available: #413

Description of changes: Do exactly what we do for GpuMemoryRange, since that one works fine; this means we use the Int32RangeFilter (a sketch follows the output below).

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Here's the output:

❯ ./build/ec2-instance-selector --vcpus 8 --gpus 0 --memory 16GiB -a x86_64 --inference-accelerators 1
inf1.2xlarge
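
For context, here is a minimal Go sketch of what an Int32RangeFilter-style bounds check might look like; the LowerBound/UpperBound fields and the matches helper are assumptions for illustration, not the project's actual implementation:

    package main

    import "fmt"

    // Int32RangeFilter holds an inclusive range; a single requested count
    // (e.g. --inference-accelerators 1) collapses to LowerBound == UpperBound.
    // Field names are hypothetical, for illustration only.
    type Int32RangeFilter struct {
        LowerBound int32
        UpperBound int32
    }

    // matches reports whether an instance's accelerator count falls
    // inside the requested range.
    func (f Int32RangeFilter) matches(count int32) bool {
        return count >= f.LowerBound && count <= f.UpperBound
    }

    func main() {
        want := Int32RangeFilter{LowerBound: 1, UpperBound: 1}
        fmt.Println(want.matches(1)) // true: inf1.2xlarge has one accelerator
        fmt.Println(want.matches(4)) // false: filtered out
    }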

@dims requested a review from a team as a code owner on February 22, 2025, 15:46
@dims (Member, Author) commented Feb 22, 2025

Hello @bwagner5, long time no see :)

@bwagner5 (Contributor) left a comment

lgtm

I added a conversion so that if an int32 is passed, it is converted to an int64 before the check when the instance spec value is an int64.
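
A rough sketch of that widening, with hypothetical names; in Go an int32 converts to int64 without loss, so the filter bounds can be widened before the comparison:

    package main

    import "fmt"

    // inRange widens int32 filter bounds to int64 so they can be compared
    // against an int64 instance spec value; int32 -> int64 never loses data.
    // The function name and signature are assumptions for illustration.
    func inRange(specValue int64, lower, upper int32) bool {
        return specValue >= int64(lower) && specValue <= int64(upper)
    }

    func main() {
        var acceleratorCount int64 = 1 // as reported by the instance spec
        fmt.Println(inRange(acceleratorCount, 1, 1)) // true
    }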

@dims (Member, Author) commented Feb 24, 2025

thanks @bwagner5 :)

@bwagner5 merged commit 68daa33 into aws:main on Feb 24, 2025
3 checks passed