Add llama jobs on Arm64 and reduce llama jobs on MacOS #9251
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9251
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 70 Pending as of commit 1df852b with merge base 718aa6f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
The branch was force-pushed: dcc1ad6 → 08d7a79 → 5640823 → 1df852b.
strategy:
  matrix:
    dtype: [fp32]
    mode: [mps, coreml, xnnpack+custom+quantize_kv]
What about xnnpack+custom+qe?
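For concreteness, picking up that suggestion would just mean extending the mode list in the matrix above along these lines (a sketch only; the extra entry is taken from the reviewer's comment, and the surrounding keys mirror the hunk):

strategy:
  matrix:
    dtype: [fp32]
    mode: [mps, coreml, xnnpack+custom+quantize_kv, xnnpack+custom+qe]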
include:
  - dtype: bf16
Wouldn't it be useful to test portable?
It is part of the trunk job now
strategy:
  matrix:
    dtype: [fp32]
-   mode: [portable, xnnpack+kv+custom, mps, coreml, xnnpack+custom+quantize_kv]
+   mode: [portable, xnnpack+custom]
@jackzhxng here
strategy:
  matrix:
    dtype: [fp32]
    mode: [mps, coreml, xnnpack+custom+quantize_kv]
Basically, I'm reducing the test coverage on iOS and relying on the Arm64 runners instead.
And it is tested on the pull job
Reduce the macOS llama runners.
Add Arm64 llama runners, distributing them across the pull.yml and trunk.yml jobs.
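As a rough illustration of what the Arm64 side could look like in pull.yml, here is a minimal GitHub Actions matrix job. The job name, runner label, mode list, and script invocation are assumptions for illustration, not copied from this PR's diff.

# Hypothetical Arm64 llama job; names and labels are assumptions, not the PR's actual diff.
test-llama-runner-linux-arm64:
  runs-on: linux.arm64.2xlarge   # assumed Arm64 runner label
  strategy:
    matrix:
      dtype: [fp32]
      mode: [portable, xnnpack+custom+quantize_kv]   # illustrative subset of the modes above
    fail-fast: false
  steps:
    - uses: actions/checkout@v4
    - name: Build and test llama
      # assumed helper script; the real workflow may call a reusable workflow instead
      run: bash .ci/scripts/test_llama.sh "${{ matrix.mode }}" "${{ matrix.dtype }}"

Splitting the matrix this way keeps the Apple-only delegates (mps, coreml) on the macOS runners, while the portable/XNNPACK paths can run on Arm64 Linux, consistent with the reviewer's note that the reduced iOS coverage is backed by the Arm64 runners.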