Commit a89f1cf

Author: Guang Yang (committed)

Clean up llava deps

1 parent 39aeff9

File tree

3 files changed: +6, -8 lines

.ci/scripts/test.sh

Lines changed: 1 addition & 2 deletions
@@ -71,10 +71,9 @@ test_model() {
   if [[ "${MODEL_NAME}" == "llava" ]]; then
     # Install requirements for llava
     bash examples/models/llava/install_requirements.sh
-    STRICT="--no-strict"
   fi
   # python3 -m examples.portable.scripts.export --model_name="llama2" should works too
-  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}" "${STRICT}"
+  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}"
   run_portable_executor_runner
 }
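With the STRICT special case removed, CI now exports llava the same way as every other model. For reference, a minimal manual equivalent of what the CI step runs after this change (a sketch; it assumes an ExecuTorch checkout with the base requirements installed and uses python3 in place of ${PYTHON_EXECUTABLE}):

    bash examples/models/llava/install_requirements.sh                  # llava-specific deps
    python3 -m examples.portable.scripts.export --model_name="llava"    # no --no-strict flag passed anymore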

examples/models/llava/README.md

Lines changed: 4 additions & 5 deletions
@@ -7,14 +7,13 @@ In this example, we initiate the process of running multi modality through Execu
 Note that this folder does not host the pretrained LLava model.
 - To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using L
 - Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
-- Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
-- Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
+- Run `examples/models/llava/install_requirements.sh`, to install llava specific deps.
+- Run `python3 -m examples.portable.scripts.export --model_name="llava"`. The llava.pte file will be generated.
+- Run `./cmake-out/executor_runner --model_path ./llava.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
 
 ## TODO
 - Write the pipeline in cpp
 - Have image and text prompts as inputs.
 - Call image processing functions to preprocess the image tensor.
-- Load the llava_encoder.pte model, run it using the image tensor.
+- Load the llava.pte model, run it using the image tensor.
 - The output of the encoder can be combined with the prompt, as inputs to the llama model. Call functions in llama_runner.cpp to run the llama model and get outputs. The ExecuTorch end to end flow for the llama model is located at `examples/models/llama2`.
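Taken together, the updated README steps amount to the following end-to-end flow (a rough sketch based only on the commands above; it assumes executor_runner has already been built into cmake-out and that everything is run from the executorch repo root):

    cd executorch
    ./install_requirements.sh                                            # base ExecuTorch requirements
    bash examples/models/llava/install_requirements.sh                   # llava-specific deps
    python3 -m examples.portable.scripts.export --model_name="llava"     # writes llava.pte
    ./cmake-out/executor_runner --model_path ./llava.pte                 # portable kernels, not performance optimized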

examples/models/llava/install_requirements.sh

Lines changed: 1 addition & 1 deletion
@@ -7,6 +7,6 @@
 
 set -x
 
-pip install transformers accelerate
+pip install accelerate
 
 pip list
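After this change the script only installs accelerate; transformers is no longer pinned here, presumably because the upstream LLaVA install referenced in the README already provides it. The trailing pip list makes it easy to confirm what actually ended up in the environment, for example (a sketch using standard pip commands):

    bash examples/models/llava/install_requirements.sh
    pip show accelerate       # installed by this script
    pip show transformers     # expected to come from another install step, if still required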
