1 parent b6b9a8e commit 11318d9
llama.h
@@ -786,7 +786,7 @@ extern "C" {
     // Get the number of threads used for prompt and batch processing (multiple token).
     LLAMA_API uint32_t llama_n_threads_batch(struct llama_context * ctx);
 
-    // Set whether the model is in embeddings model or not
+    // Set whether the model is in embeddings mode or not
     // If true, embeddings will be returned but logits will not
     LLAMA_API void llama_set_embeddings(struct llama_context * ctx, bool embeddings);
 
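For context, a minimal usage sketch of the API whose comment is corrected here (not part of this commit). It assumes a context and batch have already been set up through the usual llama.cpp loading calls, and only illustrates toggling embeddings mode around a decode call:

// Sketch: toggle embeddings mode on an existing llama_context.
// Assumes `ctx` and `batch` were created elsewhere (model load, context init).
#include "llama.h"

void embedding_pass(struct llama_context * ctx, struct llama_batch batch) {
    // Enable embeddings mode: after decode, embeddings are available
    // but logits are not.
    llama_set_embeddings(ctx, true);

    if (llama_decode(ctx, batch) != 0) {
        return; // decode failed
    }

    // Fetch the embeddings produced by the decode call.
    const float * emb = llama_get_embeddings(ctx);
    (void) emb; // use the embedding vector here

    // Switch back to normal (logits) mode for subsequent generation.
    llama_set_embeddings(ctx, false);
}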