Replies: 3 comments
-
Here is my code for running the server, in case it is helpful.
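Something along these lines (a sketch only, assuming the llama.cpp server example with LLaVA support; the binary name, flags, and model file names are assumptions and differ between versions):

```python
# Sketch only: launching a local LLaVA server via llama.cpp's server example.
# The binary name, flags, and model/projector file names below are assumptions
# and differ between versions; adjust them to your own build and downloads.
import subprocess

server = subprocess.Popen([
    "./server",                           # server binary (hypothetical path)
    "-m", "llava-v1.5-7b.Q4_K_M.gguf",    # language model weights (hypothetical file)
    "--mmproj", "mmproj-model-f16.gguf",  # multimodal projector for LLaVA (hypothetical file)
    "--host", "0.0.0.0",
    "--port", "8080",
    "-c", "4096",                         # context size
])
server.wait()
```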
-
Hello,
I mean, how can I send an image to a VLM?
-
Any updates on this? @rezacopol |
-
Hi,
I tried to follow the instructions in the server README to host LLaVA. It says it supports the OpenAI format, but after reading the README I noticed slight differences. My goal is to access the model the same way as the OpenAI server, so that I can later replace my GPT-4 calls with a locally hosted model with minimal code changes (just changing base_url).
I thought I would post this to learn how to do it, and I think it will also be useful for others with the same question. Here is the code I have to encode an image and send it to the server.
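Roughly this (a sketch, assuming the openai Python package (>= 1.0) pointed at a local OpenAI-compatible server on port 8080; the model name, port, and image path are placeholders):

```python
# Sketch of the client side: base64-encode an image and send it in the OpenAI
# vision chat format to a local server. The model name, port, and image path
# are placeholders, not values from the original post.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Read the image and embed it as a base64 data URL inside an image_url content part.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="llava-1.5",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```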
I don't get any errors, but the response shows no awareness of the image; it is pure hallucination by the LLM.
I couldn't find much information in the README beyond formatting the request as above. Has anyone made this work with LLaVA and can help me figure out what I am doing wrong?
Here is what I see in the server logs after sending the request, in case it is helpful. I don't see any sign of the 576 image tokens LLaVA 1.5 should add for the image.