add the ability to have multi-round conversation with llama (#6769)
* update llama runner to decode single token
Pull Request resolved: #6703
Right now, the eager runner doesn't print the generated response until all tokens are generated. This is not a good experience, since we have to wait for the entire generation to finish before seeing any output.
This PR updates it to decode each new token immediately after it is generated.
ghstack-source-id: 252924039
Differential Revision: [D65578306](https://our.internmc.facebook.com/intern/diff/D65578306/)
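A minimal sketch of the per-token streaming idea, not the actual ExecuTorch eager runner code: decode and print each new token as soon as it is sampled instead of buffering the whole response. The `model.forward`, `tokenizer.encode`/`decode`, and `tokenizer.eos_id` interfaces below are assumptions for illustration.

```python
# Sketch only: hypothetical model/tokenizer interfaces, greedy sampling.
def generate_streaming(model, tokenizer, prompt, max_new_tokens=128):
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model.forward(tokens)          # assumed: logits per position
        next_token = int(logits[-1].argmax())   # greedy pick of the next token
        tokens.append(next_token)
        # Decode and print the new token immediately instead of waiting
        # for the full response to be generated.
        print(tokenizer.decode([next_token]), end="", flush=True)
        if next_token == tokenizer.eos_id:      # assumed end-of-sequence id
            break
    print()
    return tokens
```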
* add the ability to have multi-round conversation with llama
Add the ability to have multi-round conversations with the LLM. This will be helpful for testing long context lengths.
Differential Revision: [D65771122](https://our.internmc.facebook.com/intern/diff/D65771122/)
ghstack-source-id: 252934165
Pull Request resolved: #6758
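A minimal sketch of a multi-round chat loop, again using hypothetical interfaces rather than the actual runner API: each user turn is appended to the running token history, so the model sees the whole conversation as context and the accumulated history naturally exercises long context lengths.

```python
# Sketch only: hypothetical model/tokenizer interfaces, greedy sampling.
def chat(model, tokenizer, max_new_tokens=128):
    history = []                                 # token history across rounds
    while True:
        user_input = input("user> ")
        if not user_input:                       # empty line ends the session
            break
        history.extend(tokenizer.encode(user_input))
        for _ in range(max_new_tokens):
            logits = model.forward(history)      # assumed: logits per position
            next_token = int(logits[-1].argmax())
            history.append(next_token)           # keep response in the context
            print(tokenizer.decode([next_token]), end="", flush=True)
            if next_token == tokenizer.eos_id:
                break
        print()
```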
---------
Co-authored-by: Lunwen He <[email protected]>