examples/models/llava_encoder/README.md (1 addition, 4 deletions)

@@ -5,10 +5,7 @@ In this example, we initiate the process of running multi modality through ExecuTorch
 
 ## Instructions
 Note that this folder does not host the pretrained LLava model.
-- To have LLaVA available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLaVA github. Follow the licence in the specific repo when using LLaVA.
-- Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is a numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh` to replace the steps above.
+- Run `examples/models/llava_encoder/install_requirements.sh`.
 - Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The `llava_encoder.pte` file will be generated.
 - Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with the ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized; please refer to other examples, such as those in the llama2 folder, for optimization.
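After this change, the README's workflow collapses to three commands. The following is a minimal sketch of that sequence, not part of the diff: the hypothetical `run_step` helper only prints each command (a dry run), since actually executing them requires an ExecuTorch checkout with a built `cmake-out/executor_runner`.

```shell
#!/bin/sh
# Dry-run sketch of the updated README workflow. run_step prints each
# command instead of executing it, so the sequence is visible without
# needing an ExecuTorch checkout or a built executor_runner.
run_step() {
    echo "+ $*"
}

# 1. Install LLaVA and its pinned dependencies (replaces the four removed steps).
run_step bash examples/models/llava_encoder/install_requirements.sh
# 2. Export the encoder to an ExecuTorch program; produces llava_encoder.pte.
run_step python3 -m examples.portable.scripts.export --model_name="llava_encoder"
# 3. Verify the exported program against the portable-kernel runtime.
run_step ./cmake-out/executor_runner --model_path ./llava_encoder.pte
```

Replacing `run_step` with direct execution (and `set -e` at the top) turns the sketch into an actual driver script that stops on the first failing step.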