
Commit 940b1e8

fix output counting comment

1 parent 4236ccc commit 940b1e8

File tree: 1 file changed (+1, -2)


llama.cpp

Lines changed: 1 addition & 2 deletions
@@ -12618,8 +12618,7 @@ static int llama_decode_internal(
     std::vector<llama_seq_id *> seq_id_arr;
     std::vector<std::vector<llama_seq_id>> seq_id;

-    // this indicates we are doing pooling on an embedding model. non-embedding models always
-    // use "output_ids" so we need to preserve all outputs in that case (somewhat inefficiently)
+    // this indicates we are doing pooled embedding, so we ignore batch.logits and output all tokens
     bool embed_pooled = cparams.embeddings && cparams.pooling_type != LLAMA_POOLING_TYPE_NONE;

     // count outputs
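For context, here is a minimal, self-contained sketch of the output-counting behaviour the new comment describes: when pooled embedding is active every token produces an output, otherwise only the tokens flagged in batch.logits do. The surrounding code of llama_decode_internal is not part of this diff, so the struct batch_view, the helper count_outputs, and the logits_all fallback below are illustrative assumptions, not the actual llama.cpp implementation.

    #include <cstdint>
    #include <vector>

    // Hypothetical stand-in for the fields of llama_batch used here;
    // the real struct in llama.h has more members.
    struct batch_view {
        uint32_t       n_tokens;
        const int8_t * logits; // per-token output flags; may be null
    };

    // Sketch of the idea: pooled embedding ignores batch.logits and
    // outputs all tokens; otherwise count the flagged tokens, falling
    // back to the last token only when no flags are provided.
    static uint32_t count_outputs(const batch_view & batch, bool embed_pooled, bool logits_all) {
        if (embed_pooled || logits_all) {
            return batch.n_tokens;          // output all tokens
        }
        if (batch.logits != nullptr) {
            uint32_t n_outputs = 0;
            for (uint32_t i = 0; i < batch.n_tokens; ++i) {
                n_outputs += batch.logits[i] != 0;
            }
            return n_outputs;
        }
        return 1;                           // keep only the last token's output
    }

    int main() {
        const int8_t flags[4] = {0, 0, 0, 1};
        const batch_view batch = {4, flags};
        // pooled embedding: all 4 tokens; plain decode: only the flagged one
        const bool ok = count_outputs(batch, /*embed_pooled=*/true,  false) == 4 &&
                        count_outputs(batch, /*embed_pooled=*/false, false) == 1;
        return ok ? 0 : 1;
    }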
