examples/llava/MobileVLM-README.md (12 additions, 2 deletions)
@@ -1,11 +1,13 @@
# MobileVLM

-Currently this implementation supports [MobileVLM-v1.7](https://huggingface.co/mtgv/MobileVLM-1.7B) variants.
+Currently this implementation supports [MobileVLM-1.7B](https://huggingface.co/mtgv/MobileVLM-1.7B) / [MobileVLM_V2-1.7B](https://huggingface.co/mtgv/MobileVLM_V2-1.7B) variants.
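
For convenience, the checkpoints linked above can be pulled straight from Hugging Face. This is only a sketch, assuming `git-lfs` is installed; the clone locations are up to you:

```sh
# fetch the supported checkpoints (the large weight files need git-lfs)
git lfs install
git clone https://huggingface.co/mtgv/MobileVLM-1.7B
git clone https://huggingface.co/mtgv/MobileVLM_V2-1.7B
```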

For more information, please go to [Meituan-AutoML/MobileVLM](https://github.com/Meituan-AutoML/MobileVLM)

The implementation is based on llava, and is compatible with llava and MobileVLM. The usage is basically the same as llava.

+Notice: The overall process of model inference for both **MobileVLM** and **MobileVLM_V2** models is the same, but the model conversion process is a little different. Therefore, using **MobileVLM** as an example, the differing conversion step will be shown.
+
## Usage
Build with cmake or run `make llava-cli` to build it.
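
For reference, the two build routes mentioned above look roughly like this; the out-of-tree `build` directory is just a common convention, not a requirement:

```sh
# from the llama.cpp repository root

# option 1: cmake
mkdir -p build && cd build
cmake ..
cmake --build . --target llava-cli

# option 2: plain make
make llava-cli
```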

-3. Use `convert-image-encoder-to-gguf.py` with `--projector-type ldp` to convert the LLaVA image encoder to GGUF:
+3. Use `convert-image-encoder-to-gguf.py` with `--projector-type ldp` (for **V2** the arg is `--projector-type ldpv2`) to convert the LLaVA image encoder to GGUF:
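
A sketch of that conversion step is shown below. Only the `--projector-type` values come from the text above; the other flags (`-m`, `--llava-projector`, `--output-dir`) and all paths are assumptions for illustration and should be checked against the conversion script's `--help` output:

```sh
# MobileVLM (v1): LDP projector
# (flags other than --projector-type, and all paths, are assumed placeholders)
python ./examples/llava/convert-image-encoder-to-gguf.py \
    -m path/to/clip-vit-large-patch14-336 \
    --llava-projector path/to/MobileVLM-1.7B/llava.projector \
    --output-dir path/to/MobileVLM-1.7B \
    --projector-type ldp

# MobileVLM_V2: LDPv2 projector
python ./examples/llava/convert-image-encoder-to-gguf.py \
    -m path/to/clip-vit-large-patch14-336 \
    --llava-projector path/to/MobileVLM_V2-1.7B/llava.projector \
    --output-dir path/to/MobileVLM_V2-1.7B \
    --projector-type ldpv2
```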