Support more breakdown of latency metrics/stats for Llama #6072
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6072
Note: Links to docs will display an error until the doc builds have completed.
✅ You can merge normally! (1 unrelated failure.) As of commit c61ebe9 with merge base 83c95df. FLAKY: the following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D64139460
Looks good, thanks!
Force-pushed: 297e595 to 024dbd7
Summary: Support more breakdown of latency metrics/stats for Llama. This is needed while we are debugging the Frame-LLM project across teams. Reviewed By: cccclai. Differential Revision: D64139460
Force-pushed: 024dbd7 to 74c5b66
Force-pushed: 74c5b66 to 3f87a4c
Force-pushed: 3f87a4c to 8926eaf
Force-pushed: 8926eaf to 1c1970f
Force-pushed: 1c1970f to c61ebe9
This pull request has been merged in d6aea3d.
Summary:
Support more breakdown of latency metrics/stats for Llama
Differential Revision: D64139460
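To illustrate the kind of latency breakdown this PR is about, here is a minimal sketch of splitting a Llama run's end-to-end latency into prefill and decode phases. All field and method names below are hypothetical for illustration; they are not ExecuTorch's actual stats API.

```python
from dataclasses import dataclass


@dataclass
class LlamaLatencyStats:
    """Hypothetical per-run timestamps and counts (not ExecuTorch's API)."""

    inference_start_ms: float   # inference loop entered
    prompt_eval_end_ms: float   # last prompt token processed (prefill done)
    inference_end_ms: float     # last generated token emitted
    num_prompt_tokens: int
    num_generated_tokens: int

    def time_to_first_token_ms(self) -> float:
        # Prefill latency: from inference start until the prompt is consumed.
        return self.prompt_eval_end_ms - self.inference_start_ms

    def prompt_eval_rate(self) -> float:
        # Prompt tokens processed per second during prefill.
        return self.num_prompt_tokens / (self.time_to_first_token_ms() / 1000.0)

    def generation_rate(self) -> float:
        # Generated tokens per second during the decode phase only.
        decode_ms = self.inference_end_ms - self.prompt_eval_end_ms
        return self.num_generated_tokens / (decode_ms / 1000.0)


stats = LlamaLatencyStats(
    inference_start_ms=0.0,
    prompt_eval_end_ms=250.0,
    inference_end_ms=2250.0,
    num_prompt_tokens=50,
    num_generated_tokens=100,
)
print(stats.time_to_first_token_ms())  # 250.0 ms
print(stats.prompt_eval_rate())        # 200.0 tokens/s
print(stats.generation_rate())         # 50.0 tokens/s
```

Separating prefill from decode this way is what makes the finer-grained metrics useful for cross-team debugging: a regression in time-to-first-token points at prompt processing, while a drop in generation rate points at the per-token decode path.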