why prompt eval can not reuse #2175
Unanswered
callMeMakerRen asked this question in Q&A
Replies: 1 comment
You can reuse the prefix, just call
That's not how LLMs work. Every token depends on all previous tokens.
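The point about token dependence can be illustrated with a toy model (a sketch, not llama.cpp code; `chain_states` is a hypothetical stand-in for causal attention, where each cached state depends on every earlier token):

```python
def chain_states(tokens):
    """Toy stand-in for causal attention: state[i] depends on
    token[i] AND state[i-1], so transitively on all earlier tokens,
    just like the KV-cache entries of a decoder-only transformer."""
    states, h = [], 0
    for t in tokens:
        h = hash((h, t))   # mixes in every token seen so far
        states.append(h)
    return states
```

Appending tokens leaves earlier states untouched, so they can be cached; prepending a token shifts everything and changes every state, so nothing from the old cache is valid.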
Hi guys,

In a recent test I found that a longer prompt needs a lot of time to preprocess before it starts generating new tokens. If I use the exact same prompt again, it starts generating right away. But if I prepend a new word to the previous prompt, the full prompt gets processed again, even though only a little changed. I wonder why we need to do that. Can't we just cache the old processing result and only process the unknown tokens? In my example, shouldn't only the prefix need processing? Thanks, guys.
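The caching the question asks about can be sketched as a longest-common-prefix match against the last evaluated prompt (a toy illustration under my own naming; `PrefixCache` and its methods are hypothetical, not an actual llama.cpp API):

```python
def common_prefix_len(a, b):
    """Length of the longest shared prefix of two token lists."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

class PrefixCache:
    """Toy KV-cache reuse: remember the tokens from the last eval
    and only 'process' the part of a new prompt that differs."""
    def __init__(self):
        self.cached = []   # tokens whose (pretend) KV entries we keep

    def eval(self, tokens):
        keep = common_prefix_len(self.cached, tokens)
        new = tokens[keep:]        # only the unmatched suffix needs compute
        self.cached = list(tokens)
        return len(new)            # number of tokens actually processed
```

A real KV cache stores attention keys/values per position rather than a token list, but the prefix-matching logic is the same: appending to a prompt reuses everything cached, while changing or prepending anything near the start forces a near-full recompute, which matches the behavior described above.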