
Commit 2a1ae4f

Update Llava README.md (#3309)
Simplify the instruction.
1 parent d1cf0a6 commit 2a1ae4f

1 file changed: +1 -4 lines changed

1 file changed

+1
-4
lines changed

examples/models/llava_encoder/README.md

Lines changed: 1 addition & 4 deletions
@@ -5,10 +5,7 @@ In this example, we initiate the process of running multi modality through ExecuTorch
 
 ## Instructions
 Note that this folder does not host the pretrained LLava model.
-- To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using LLava.
-- Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
+- Run `examples/models/llava_encoder/install_requirements.sh`.
 - Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
 - Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
 
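For reference, the simplified flow described by the updated instructions can be sketched as the shell sequence below. This is a minimal sketch, not part of the commit; it assumes the executorch repository is already cloned and that executor_runner has already been built under cmake-out.

    # run from the root of the executorch repo (assumed already cloned)
    bash examples/models/llava_encoder/install_requirements.sh                  # installs the Llava dependencies
    python3 -m examples.portable.scripts.export --model_name="llava_encoder"    # generates llava_encoder.pte
    ./cmake-out/executor_runner --model_path ./llava_encoder.pte                # runs the exported model with portable kernels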

0 commit comments
