Move builder.py into extension #4188
Conversation
This PR does the following things:
- rename `LlamaEdgeManager` to `LLMEdgeManager`
- clean up some unused parameters: `weight_type`, `use_sdpa_with_kv_cache`
- move `model.params.max_seq_len` out of `builder.py`
- move `builder.py` into `extension/llm/export`

Differential Revision: [D59493975](https://our.internmc.facebook.com/intern/diff/D59493975/)
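Because `builder.py` moved into `extension/llm/export`, downstream imports of the manager class need updating. A minimal sketch of a compatibility shim is below; the new module path follows this PR, while the old path and the exact fallback behavior are assumptions for illustration, not the actual executorch API surface:

```python
import importlib

# Candidate (module, class) locations for the export manager.
# The first entry reflects the new location from this PR; the second
# is an assumed pre-move location, shown only as an example fallback.
_CANDIDATES = [
    ("executorch.extension.llm.export.builder", "LLMEdgeManager"),
    ("executorch.examples.models.llama2.builder", "LlamaEdgeManager"),
]


def load_llm_edge_manager():
    """Return the manager class from the first importable location."""
    for mod_name, cls_name in _CANDIDATES:
        try:
            module = importlib.import_module(mod_name)
            return getattr(module, cls_name)
        except (ImportError, AttributeError):
            continue
    raise ImportError("LLMEdgeManager not found in any known location")
```

A shim like this lets code written against the old name keep working during the transition, at the cost of a slightly slower first import.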
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4188. As of commit 9dc2fd7 with merge base b10b763: 1 unrelated failure (broken trunk; the job also failed on the merge base), so you can merge normally. Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D59493975
This pull request has been merged in 4fa7cfc.
Pull Request resolved: pytorch/executorch#4188
ghstack-source-id: 233306023
Differential Revision: [D59493975](https://our.internmc.facebook.com/intern/diff/D59493975/)