
Commit bf4ee71: Clean up llava deps
Author: Guang Yang
Parent: caadd81

File tree (5 files changed, +80 −11 lines):
- .ci/scripts/test.sh
- examples/models/llava/README.md
- examples/models/llava/install_requirements.sh
- examples/models/llava/model.py
- examples/models/llava/test/test_llava.py

.ci/scripts/test.sh

Lines changed: 1 addition & 2 deletions
@@ -71,10 +71,9 @@ test_model() {
   if [[ "${MODEL_NAME}" == "llava" ]]; then
     # Install requirements for llava
     bash examples/models/llava/install_requirements.sh
-    STRICT="--no-strict"
   fi
   # python3 -m examples.portable.scripts.export --model_name="llama2" should work too
-  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}" "${STRICT}"
+  "${PYTHON_EXECUTABLE}" -m examples.portable.scripts.export --model_name="${MODEL_NAME}"
   run_portable_executor_runner
 }
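
For context on the removed flag: a plausible reading (an assumption, not stated in the commit) is that `--no-strict` opted the export script into `torch.export`'s non-strict tracing mode, which llava no longer needs. A minimal sketch of the two modes, with a purely illustrative `ToyModule`:

```python
import torch
from torch.export import export


class ToyModule(torch.nn.Module):  # illustrative, not from the repo
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2


example_inputs = (torch.randn(4),)

# Strict mode traces with TorchDynamo and rejects graphs it cannot prove sound.
strict_program = export(ToyModule(), example_inputs, strict=True)

# Non-strict mode traces with the plain Python interpreter and is more
# permissive, which is what a --no-strict style flag would typically opt into.
lax_program = export(ToyModule(), example_inputs, strict=False)
```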

examples/models/llava/README.md

Lines changed: 4 additions & 5 deletions
@@ -7,14 +7,13 @@ In this example, we initiate the process of running multi modality through ExecuTorch
 Note that this folder does not host the pretrained LLava model.
 - To have Llava available, follow the [Install instructions](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#install) in the LLava github. Follow the licence in the specific repo when using Llava.
 - Since the pytorch model version may not be updated, `cd executorch`, run `./install_requirements.sh`.
-- If there is a numpy compatibility issue, run `pip install bitsandbytes -I`.
-- Alternatively, run `examples/models/llava_encoder/install_requirements.sh` to replace the steps above.
-- Run `python3 -m examples.portable.scripts.export --model_name="llava_encoder"`. The llava_encoder.pte file will be generated.
-- Run `./cmake-out/executor_runner --model_path ./llava_encoder.pte` to verify the exported model with the ExecuTorch runtime using portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in the llama2 folder for optimization.
+- Run `examples/models/llava/install_requirements.sh` to install llava-specific deps.
+- Run `python3 -m examples.portable.scripts.export --model_name="llava"`. The llava.pte file will be generated.
+- Run `./cmake-out/executor_runner --model_path ./llava.pte` to verify the exported model with the ExecuTorch runtime using portable kernels. Note that the portable kernels are not performance optimized. Please refer to other examples like those in the llama2 folder for optimization.
 
 ## TODO
 - Write the pipeline in cpp
 - Have image and text prompts as inputs.
 - Call image processing functions to preprocess the image tensor.
-- Load the llava_encoder.pte model, run it using the image tensor.
+- Load the llava.pte model, run it using the image tensor.
 - The output of the encoder can be combined with the prompt as inputs to the llama model. Call functions in llama_runner.cpp to run the llama model and get outputs. The ExecuTorch end-to-end flow for the llama model is located at `examples/models/llama2`.
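
As a quick smoke test besides `executor_runner`, the generated `.pte` can also be loaded through ExecuTorch's Python bindings. A minimal sketch, assuming the pybindings were built (e.g. via `./install_requirements.sh --pybind xnnpack`); the input shape here is illustrative and may not match the exported llava graph:

```python
import torch
from executorch.extension.pybindings.portable_lib import _load_for_executorch

module = _load_for_executorch("llava.pte")
sample = torch.randn(1, 3, 336, 336)  # illustrative shape, adjust to the model
outputs = module.forward([sample])   # returns a list of output tensors
print(outputs[0].shape)
```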

examples/models/llava/install_requirements.sh

Lines changed: 24 additions & 2 deletions
@@ -7,6 +7,28 @@
 
 set -x
 
-pip install transformers accelerate
+# Install llava from the submodule. We can't do pip install llava because it is packaged incorrectly.
+if [[ $OS != "Darwin" ]];
+then
+  # This doesn't work for macOS on Python 3.12, because torch 2.1.2 is missing.
+  pip install --force-reinstall -e examples/third-party/LLaVA
+else
+  # manually install dependencies
+  pip install tokenizers==0.15.1 sentencepiece==0.1.99 \
+    shortuuid accelerate==0.21.0 peft \
+    pydantic markdown2[all] scikit-learn==1.2.2 \
+    requests httpx==0.24.0 uvicorn fastapi \
+    einops==0.6.1 einops-exts==0.0.4 timm==0.6.13
 
-pip list
+  pip install --force-reinstall -e examples/third-party/LLaVA --no-deps
+fi
+
+# not included in the pip install package, but needed in llava
+pip install protobuf
+pip install triton==3.0.0
+
+# The deps of llava can have different versions than the deps of ExecuTorch.
+# For example, the torch version required by llava is older than ExecuTorch's.
+# To make both work, recover ExecuTorch's original dependencies by rerunning
+# install_requirements.sh. Notice this won't install executorch.
+bash -x ./install_requirements.sh --pybind xnnpack
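
Because the script force-reinstalls llava's older pins and then restores ExecuTorch's, a quick environment check afterwards can catch a half-broken install. A small sanity-check sketch, not part of the commit:

```python
import importlib.util

import torch

# After the reinstall dance, torch should be back on ExecuTorch's pinned version.
print("torch", torch.__version__)

# llava and its manually installed deps should still resolve.
for name in ("llava", "transformers", "accelerate", "timm"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'ok' if found else 'MISSING'}")
```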

examples/models/llava/model.py

Lines changed: 50 additions & 1 deletion
@@ -10,7 +10,8 @@
 
 import re
 
-from typing import Any, Dict, Optional
+from dataclasses import dataclass
+from typing import Any, Dict, List, Optional
 
 import requests
 import torch
@@ -271,6 +272,54 @@ def __init__(self, use_sdpa_with_kv_cache_op=True):
         self.input = None
         self.resized_image = None
 
+    def forward(
+        self,
+        input_ids: torch.LongTensor = None,
+        attention_mask: Optional[torch.Tensor] = None,
+        position_ids: Optional[torch.LongTensor] = None,
+        past_key_values: Optional[List[torch.FloatTensor]] = None,
+        inputs_embeds: Optional[torch.FloatTensor] = None,
+        labels: Optional[torch.LongTensor] = None,
+        use_cache: Optional[bool] = None,
+        output_attentions: Optional[bool] = None,
+        output_hidden_states: Optional[bool] = None,
+        images: Optional[torch.FloatTensor] = None,
+        image_sizes: Optional[List[List[int]]] = None,
+        return_dict: Optional[bool] = None,
+        cache_position: Optional[torch.LongTensor] = None,
+    ):
+        """
+        An adapter to llava_llama.forward(), making it compatible with the latest HF interface.
+        """
+        # Do not pass 'cache_position' down to forward(), as this old third-party llava cannot recognize it.
+        return self.model.forward(
+            input_ids=input_ids,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_values=past_key_values,
+            inputs_embeds=inputs_embeds,
+            labels=labels,
+            use_cache=use_cache,
+            output_attentions=output_attentions,
+            output_hidden_states=output_hidden_states,
+            return_dict=return_dict,
+        )
+
+    @torch.no_grad()
+    def generate(
+        self,
+        inputs: Optional[torch.Tensor] = None,
+        images: Optional[torch.Tensor] = None,
+        image_sizes: Optional[torch.Tensor] = None,
+        **kwargs,
+    ):
+        """
+        An adapter to llava_llama.generate(), making it compatible with the latest HF interface.
+        """
+        return self.model.generate(
+            inputs=inputs, images=images, image_sizes=image_sizes, **kwargs
+        )
+
     def get_eager_model(self):
         model = Llava(
             self.model,
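
The `forward()` adapter above drops `cache_position` by hand because the vendored llava predates that HF kwarg. The same idea can be written generically by filtering kwargs against the wrapped callable's signature; this is only an illustration of the pattern, not what the commit does:

```python
import inspect
from typing import Any, Callable


def call_with_supported_kwargs(fn: Callable, *args: Any, **kwargs: Any) -> Any:
    """Call fn, forwarding only the keyword arguments its signature accepts."""
    params = inspect.signature(fn).parameters
    # If fn already takes **kwargs, everything is safe to pass through.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **supported)


# e.g. call_with_supported_kwargs(model.forward, input_ids=ids, cache_position=pos)
# silently drops cache_position when an older forward() does not declare it.
```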

examples/models/llava/test/test_llava.py

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ def test_generated_output(self):
         # source of truth, using HF llava
         preprocessed = self.llava.image_preprocess(self.resized)
         with torch.inference_mode():
-            output_ids = self.llava_model.model.generate(
+            output_ids = self.llava_model.generate(
                 self.llava_model.input_ids,
                 pixel_values=preprocessed,
                 do_sample=False,
