Add seq_len to llama runner for early stopping #2051
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2051
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 65449cc with merge base ca6995b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D53873431

Summary: By default, the llama runner keeps generating until max_seq_len, a property embedded in the model metadata. We want a way to limit the number of tokens generated, so this change adds a seq_len argument to the runner for early stopping.

Reviewed By: larryliu0820

Differential Revision: D53873431
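The effect, in rough outline: the generation loop stops at the smaller of the caller-supplied seq_len and the model's max_seq_len, or earlier at EOS. Below is a minimal sketch of that loop, not the actual runner code; kMaxSeqLen, kEosTokenId, and sample_next_token are hypothetical stand-ins for values the real runner reads from model metadata and for the model forward pass.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical constants; the real runner pulls these from model metadata.
constexpr int32_t kMaxSeqLen = 128;
constexpr int32_t kEosTokenId = 2;

// Dummy sampler standing in for the model forward pass + sampling.
int32_t sample_next_token(const std::vector<int32_t>& tokens) {
  return tokens.size() >= 20 ? kEosTokenId
                             : static_cast<int32_t>(tokens.size());
}

// Generate until EOS or the effective cap, whichever comes first.
// A non-positive seq_len falls back to the model's max_seq_len.
std::vector<int32_t> generate(std::vector<int32_t> tokens, int32_t seq_len) {
  const int32_t cap = seq_len > 0 ? std::min(seq_len, kMaxSeqLen) : kMaxSeqLen;
  while (static_cast<int32_t>(tokens.size()) < cap) {
    const int32_t next = sample_next_token(tokens);
    tokens.push_back(next);
    if (next == kEosTokenId) {
      break;  // natural early stop before hitting the cap
    }
  }
  return tokens;
}

int main() {
  std::vector<int32_t> prompt = {1, 15043};      // e.g. BOS + one prompt token
  auto out = generate(prompt, /*seq_len=*/8);    // stop after 8 tokens total
  std::cout << "generated " << out.size() << " tokens\n";
  return 0;
}
```

With seq_len = 8, generation halts after eight tokens even though max_seq_len is 128; passing a non-positive seq_len preserves the old default behavior of running to max_seq_len.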
This pull request has been merged in 33ba563.