[llava] Use huggingface LLaVA instead of depending on third-party/LLaVa #4687
```diff
@@ -6,39 +6,7 @@
 # LICENSE file in the root directory of this source tree.

 set -x
-OS=$(uname)
-
-# install llava from the submodule. We can't do pip install llava because it is packaged incorrectly.
-if [[ $OS != "Darwin" ]];
-then
-  # This doesn't work for macOS on Python 3.12, because torch 2.1.2 is missing.
-  pip install --force-reinstall -e examples/third-party/LLaVA
-else
-  # manually install dependencies
-  pip install tokenizers==0.15.1 sentencepiece==0.1.99 \
-    shortuuid accelerate==0.21.0 peft \
-    pydantic markdown2[all] scikit-learn==1.2.2 \
-    requests httpx==0.24.0 uvicorn fastapi \
-    einops==0.6.1 einops-exts==0.0.4 timm==0.6.13
-
-  pip install --force-reinstall -e examples/third-party/LLaVA --no-deps
-fi
-
-# not included in the pip install package, but needed in llava
-pip install protobuf
-
-# bitsandbytes depends on numpy 1.x, which is not compatible with numpy 2.x.
-# Reinstall bitsandbytes to make it compatible.
-pip install bitsandbytes -I
-
-# The deps of llava can have different versions than the deps of ExecuTorch.
-# For example, the torch version required by llava is older than ExecuTorch's.
-# To make both work, recover ExecuTorch's original dependencies by rerunning
-# install_requirements.sh. Note this won't install executorch.
-bash -x ./install_requirements.sh --pybind xnnpack
-
-# Newer transformers (4.38) will give: TypeError: LlavaLlamaForCausalLM.forward() got an unexpected keyword argument 'cache_position'
-pip install timm==0.6.13
-pip install transformers==4.37.2
+pip install transformers

 pip list
```

Review thread on this change:

Comment: Should we also run `./install_requirements`?

Reply: I think the user should call it outside.

Reply: I see, the `./install_requirements` makes sense, but at least for llama2, I don't think it's immediately obvious that you have to install the requirements for llama2 in order for llava to work.

Comment: @larryliu0820 Great that we are moving to the HF model. I think we should have no blocker to clean up all the ad-hoc setup requirements (#4320)?
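For context on the pin that this diff removes: transformers 4.38 began passing a `cache_position` keyword to `forward()`, which the old third-party `LlavaLlamaForCausalLM` did not accept, hence the `transformers==4.37.2` pin; the HF model accepts it, so the pin can go. A minimal sketch (hypothetical helper, not from this PR) of the version boundary that pin encoded:

```python
# Hypothetical helper (not from this PR): checks whether an installed
# transformers version is in the range that passes `cache_position` to
# forward() and therefore broke the old third-party LlavaLlamaForCausalLM.
def version_tuple(version: str) -> tuple:
    """Parse a release string like '4.37.2' into (4, 37, 2) for comparison."""
    return tuple(int(part) for part in version.split("."))

def hits_cache_position_bug(installed: str) -> bool:
    """True if `installed` is transformers >= 4.38.0, the first release
    that passes the cache_position keyword argument."""
    return version_tuple(installed) >= (4, 38, 0)
```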
Review thread on `pip list`:

Comment: Why don't we run export_llava.py anymore? It looks like test_llava still requires the model to have already been exported.

Reply: The last test in test_llava.py exports and tests llava.
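For readers following along, a hedged sketch of what "using Hugging Face LLaVA" looks like at load time; the checkpoint id and the deferred-import style are assumptions for illustration, not taken from this PR or from test_llava.py:

```python
# Sketch only: loading LLaVA through Hugging Face transformers rather than the
# examples/third-party/LLaVA submodule. The checkpoint id is an assumption.
def load_hf_llava(model_id: str = "llava-hf/llava-1.5-7b-hf"):
    # Import inside the function so this sketch can be defined without
    # transformers installed; real code would import at module level.
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(model_id)
    return processor, model
```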