
Commit 3417c3b

Author: Guang Yang
Clean up llava deps
1 parent 0e032c5 · commit 3417c3b

File tree: 3 files changed, +5 −18 lines


.ci/scripts/test.sh
Lines changed: 1 addition & 2 deletions

@@ -71,10 +71,9 @@ test_model() {
   if [[ "${MODEL_NAME}" == "llava" ]]; then
     # Install requirements for llava
     bash examples/models/llava/install_requirements.sh
-    STRICT="--no-strict"
   fi
   # python3 -m examples.portable.scripts.export --model_name="llama2" should works too
-  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}" "${STRICT}"
+  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}"
   run_portable_executor_runner
 }
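The deleted `STRICT` variable illustrates a common bash pitfall: when the variable is unset or empty, the quoted expansion `"${STRICT}"` still produces one empty-string positional argument to the export script. A minimal sketch of the array-based alternative, which expands to zero words when no flag is set (the function and names here are illustrative, not part of the commit):

```shell
#!/usr/bin/env bash
# Sketch: passing an optional flag without leaking an empty "" argument
# when the flag is not set. Illustrative names, not from the commit.

build_args() {
  local model_name="$1"
  local extra=()                       # holds optional flags, if any
  if [[ "${model_name}" == "llava" ]]; then
    extra+=("--no-strict")             # only added for llava
  fi
  # "${extra[@]}" expands to zero words when the array is empty,
  # unlike "${STRICT}" which would expand to one empty word.
  echo "--model_name=${model_name}" "${extra[@]}"
}

build_args "llava"    # → --model_name=llava --no-strict
build_args "llama2"   # → --model_name=llama2
```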

examples/models/llava/README.md
Lines changed: 4 additions & 5 deletions

@@ -7,14 +7,13 @@ In this example, we initiate the process of running multi modality through ExecuTorch
 Note that this folder does not host the pretrained LLava model.
 - To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using L
 - Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh`, to replace the steps above.
-- Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
-- Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.
+- Run `examples/models/llava/install_requirements.sh`, to install llava specific deps.
+- Run `python3 -m examples.portable.scripts.export --model_name="llava"`. The llava.pte file will be generated.
+- Run `./cmake-out/executor_runner --model_path ./llava.pte` to verify the exported model with ExecuTorch runtime with portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in llama2 folder for optimization.

 ## TODO
 - Write the pipeline in cpp
 - Have image and text prompts as inputs.
 - Call image processing functions to preprocess the image tensor.
-- Load the llava_encoder.pte model, run it using the image tensor.
+- Load the llava.pte model, run it using the image tensor.
 - The output of the encoder can be combined with the prompt, as inputs to the llama model. Call functions in llama_runner.cpp to run the llama model and get outputs. The ExecuTorch end to end flow for the llama model is located at `examples/models/llama2`.
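The updated README steps form a fixed install → export → run sequence. A small sketch that spells those commands out as strings, so the sequence is visible without a built checkout; `MODEL_NAME` is an illustrative variable, while the command lines themselves come from the README diff above:

```shell
#!/usr/bin/env bash
# Sketch of the updated README flow. Nothing is executed against a real
# executorch checkout here; the commands are only assembled as strings.

MODEL_NAME="llava"   # illustrative; the README hardcodes "llava"

# 1. Install llava-specific deps (replaces the old llava_encoder script).
install_cmd="bash examples/models/${MODEL_NAME}/install_requirements.sh"
# 2. Export the model; this writes <model_name>.pte to the working dir.
export_cmd="python3 -m examples.portable.scripts.export --model_name=\"${MODEL_NAME}\""
# 3. Run the exported program with the portable-kernel runner.
run_cmd="./cmake-out/executor_runner --model_path ./${MODEL_NAME}.pte"

printf '%s\n' "$install_cmd" "$export_cmd" "$run_cmd"
```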

examples/models/llava/install_requirements.sh
Lines changed: 0 additions & 11 deletions

@@ -13,19 +13,8 @@ pip install --force-reinstall -e examples/third-party/LLaVA
 # not included in the pip install package, but needed in llava
 pip install protobuf

-# bitsandbytes depends on numpy 1.x, which is not compatible with numpy 2.x.
-# Reinstall bitsandbytes to make it compatible.
-pip install bitsandbytes -I
-
-# numpy needs to be pin to 1.24. 1.26.4 will error out
-pip install numpy==1.24
-
 # The deps of llava can have different versions than deps of ExecuTorch.
 # For example, torch version required from llava is older than ExecuTorch.
 # To make both work, recover ExecuTorch's original dependencies by rerunning
 # the install_requirements.sh.
 bash -x ./install_requirements.sh --pybind xnnpack
-
-# Newer transformer will give TypeError: LlavaLlamaForCausalLM.forward() got an unexpected keyword argument 'cache_position'
-pip install timm==0.6.13
-pip install transformers==4.38.2
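With the pins above removed, the versions that end up installed are whatever the final `install_requirements.sh` run resolves, since with pip the last install of a package wins. A hedged sketch of a post-install version check one might run (not part of the script); it assumes only `python3` with the standard-library `importlib.metadata` (Python 3.8+):

```shell
#!/usr/bin/env bash
# Sketch: report which version of each formerly-pinned package actually
# won after the reinstall dance. Not part of the commit's script.

pkg_version() {
  # importlib.metadata.version raises PackageNotFoundError for missing
  # packages; suppress the traceback and fall back to a sentinel.
  python3 -c "import importlib.metadata as m; print(m.version('$1'))" 2>/dev/null \
    || echo "not-installed"
}

for pkg in numpy transformers timm; do
  echo "${pkg}: $(pkg_version "$pkg")"
done
```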
