Clean up llava deps and consolidate all HF deps #4320

Closed. Wants to merge 1 commit.
.ci/scripts/test.sh (3 changes: 1 addition, 2 deletions)
@@ -71,10 +71,9 @@ test_model() {
   if [[ "${MODEL_NAME}" == "llava" ]]; then
     # Install requirements for llava
     bash examples/models/llava/install_requirements.sh
-    STRICT="--no-strict"
   fi
   # python3 -m examples.portable.scripts.export --model_name="llama2" should work too
-  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}" "${STRICT}"
Contributor:

I don't think this works for Llava, does it?
Contributor Author:

@larryliu0820 Yeah, the CI passes, and locally I tried the AOT a while ago and it worked fine. Do you recall whether the issue was AOT or runtime?
Contributor Author:

@larryliu0820 It just works fine.
"${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}"
run_portable_executor_runner
}
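A note on the removed flag: `--no-strict` selects `torch.export`'s non-strict tracing mode. As a rough sketch of where that option lands in a standard `torch.export` to `to_edge` lowering flow (the function below is illustrative, not the actual `examples/portable/scripts/export.py`):

```python
# Illustrative sketch of the export path, assuming the usual
# torch.export -> to_edge -> to_executorch flow; not the real script.
import torch
from executorch.exir import to_edge

def export_to_pte(model: torch.nn.Module, example_inputs: tuple,
                  strict: bool = True) -> bytes:
    # strict=False is torch.export's relaxed tracing mode; the deleted
    # STRICT="--no-strict" branch in test.sh forced it for llava. The CI
    # change relies on llava now exporting in the default strict mode,
    # as the review thread above confirms.
    exported = torch.export.export(model, example_inputs, strict=strict)
    return to_edge(exported).to_executorch().buffer
```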

examples/models/llava/README.md (9 changes: 4 additions, 5 deletions)

@@ -7,14 +7,13 @@ In this example, we initiate the process of running multi modality through ExecuTorch
 Note that this folder does not host the pretrained LLaVA model.
 - To have LLaVA available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLaVA github. Follow the license in the specific repo when using LLaVA.
 - Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is a numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh` to replace the steps above.
-- Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
-- Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with the ExecuTorch runtime and portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in the llama2 folder for optimization.
+- Run `examples/models/llava/install_requirements.sh` to install llava-specific deps.
+- Run `python3 -m examples.portable.scripts.export --model_name="llava"`. The llava.pte file will be generated.
+- Run `./cmake-out/executor_runner --model_path ./llava.pte` to verify the exported model with the ExecuTorch runtime and portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in the llama2 folder for optimization.
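As a quick sanity check without the C++ runner, the generated `llava.pte` can also be loaded from Python. A minimal sketch, assuming the ExecuTorch pybindings are built and that the exported model takes a single image tensor (the input shape is illustrative, not taken from the actual model):

```python
# Sketch only: assumes the portable pybindings are installed and that the
# exported llava model accepts one image tensor.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

module = _load_for_executorch("llava.pte")
image = torch.randn(1, 3, 336, 336)  # hypothetical CLIP-style input shape
outputs = module.forward([image])
print(outputs[0].shape)
```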

 ## TODO
 - Write the pipeline in cpp
   - Have image and text prompts as inputs.
   - Call image processing functions to preprocess the image tensor.
-  - Load the llava_encoder.pte model, run it using the image tensor.
+  - Load the llava.pte model, run it using the image tensor.
   - The output of the encoder can be combined with the prompt, as inputs to the llama model. Call functions in llama_runner.cpp to run the llama model and get outputs. The ExecuTorch end-to-end flow for the llama model is located at `examples/models/llama2`.
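Until the C++ pipeline exists, the intended flow can be prototyped in Python. Everything below is an assumption for illustration: the file names, the single-tensor interfaces, and the embedding merge are stand-ins, not the repo's actual APIs.

```python
# Hypothetical prototype of the TODO pipeline; all interfaces are stand-ins.
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

def run_llava(image: torch.Tensor, prompt_embeds: torch.Tensor) -> torch.Tensor:
    # 1. Image preprocessing (resize/normalize) is omitted here.
    # 2. Run the exported encoder on the image tensor.
    encoder = _load_for_executorch("llava.pte")
    image_embeds = encoder.forward([image])[0]
    # 3. Combine the encoder output with the prompt embeddings and feed the
    #    llama model, mirroring what llama_runner.cpp would do in C++.
    llama = _load_for_executorch("llama2.pte")  # hypothetical file name
    combined = torch.cat([image_embeds, prompt_embeds], dim=1)
    return llama.forward([combined])[0]
```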
examples/models/llava/install_requirements.sh (2 changes: 1 addition, 1 deletion)

@@ -7,6 +7,6 @@

 set -x

-pip install transformers accelerate
+pip install accelerate

 pip list