
Commit 3d6a687

Update readme to document multimodal in server

1 parent dd1af2e

1 file changed: 1 addition, 1 deletion

examples/server/README.md (1 addition, 1 deletion)

```diff
@@ -162,7 +162,7 @@ node index.js
 
 `n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)
 
-`image_data`: An array of objects to hold base64-encoded image `data` and its `id`s to be reference in `prompt`. You can determine the place of the image in the prompt as in the following: `USER:[img-12]Describe the image in detail.\nASSISTANT:` In this case, `[img-12]` will be replaced by the embeddings of the image id 12 in the following `image_data` array: `{..., "image_data": ["data": "<BASE64_STRING>", "id": 12]}`. Use `image_data` only with multimodal models, e.g., LLaVA.
+`image_data`: An array of objects to hold base64-encoded image `data` and its `id`s to be reference in `prompt`. You can determine the place of the image in the prompt as in the following: `USER:[img-12]Describe the image in detail.\nASSISTANT:` In this case, `[img-12]` will be replaced by the embeddings of the image id 12 in the following `image_data` array: `{..., "image_data": [{"data": "<BASE64_STRING>", "id": 12}]}`. Use `image_data` only with multimodal models, e.g., LLaVA.
 
 *Result JSON:*
```
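The commit's one-character-level fix is that `image_data` must be an array of *objects*, each with a `data` and an `id` key, not a bare array of key/value pairs (which would not be valid JSON). A minimal sketch of building such a payload follows; the placeholder image bytes and the `n_predict` parameter shown here are illustrative assumptions, not part of the diff:

```python
import base64
import json

# Placeholder bytes standing in for a real image file's contents
# (a real request would read an actual image, e.g. open("cat.png", "rb").read()).
fake_image_bytes = b"\x89PNG\r\n\x1a\n"

# Build a request payload using the corrected image_data shape:
# a JSON array of objects, each holding base64 "data" and an integer "id".
payload = {
    "prompt": "USER:[img-12]Describe the image in detail.\nASSISTANT:",
    "n_predict": 128,  # assumed generation parameter for illustration
    "image_data": [
        {
            "data": base64.b64encode(fake_image_bytes).decode("ascii"),
            "id": 12,  # matches the [img-12] tag in the prompt
        }
    ],
}

# The corrected form serializes to valid JSON; the pre-fix form
# ["data": ..., "id": 12] would be a syntax error.
body = json.dumps(payload)
print(body)
```

The `id` in each object is what ties the base64 data back to the `[img-N]` tag in the prompt, which is why each entry has to be a self-contained object rather than loose keys inside the array.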
