Arm backend: Add initial Llama model test case #8679
Conversation
Adds a Llama model test case for TOSA-0.80+MI. Handles Add and Mul where the inputs have different ranks. A new unit test parameter, --llama_inputs, is added; without it the test is skipped. Tested with the smaller stories models, see examples/models/llama/UTILS.md. Adds get_llama_model() to export_llama_lib, used in the test case. Change-Id: I003bbcee8f0cc35193d793a4af9b031453114e71
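The rank handling mentioned above can be sketched as follows. This is a hypothetical helper, not the PR's actual code: it prepends singleton dimensions to the lower-rank shape until both ranks match, mirroring NumPy/TOSA-style broadcasting for binary ops such as Add and Mul.

```python
# Hypothetical sketch (not the PR's actual implementation): align two
# shapes of different rank for a binary op such as Add or Mul by
# prepending singleton dimensions to the lower-rank shape.
def align_ranks(shape_a, shape_b):
    rank = max(len(shape_a), len(shape_b))
    a = (1,) * (rank - len(shape_a)) + tuple(shape_a)
    b = (1,) * (rank - len(shape_b)) + tuple(shape_b)
    return a, b
```

With aligned ranks, the elementwise op can then rely on ordinary broadcasting rules to match dimensions of size 1 against larger ones.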
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8679.
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 Cancelled Job as of commit d523b6f with merge base 022a946. The following job was cancelled; please retry.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label partner/arm
Didn't find following labels among repository labels: partner/arm
@pytorchbot label "partner: arm"
@pytorchbot label ciflow/trunk
To add these label(s) (ciflow/trunk) to the PR, please first approve the workflows that are awaiting approval (scroll to the bottom of this page). This helps ensure we don't trigger CI on this PR until it is actually authorized to do so. Please ping one of the reviewers if you do not have access to approve and run workflows.
topic: not user facing |
@@ -0,0 +1,125 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
I will take a look but just a nit
s/test_llama_arm.py/test_llama.py/
I will take a look ...
Merge time? :)
Renamed to test_llama.py
MacOS failures are unrelated.
@@ -9,3 +9,5 @@
 git config --global user.email "[email protected]"
 git config --global user.name "Github Executorch"
 bash examples/arm/setup.sh --i-agree-to-the-contained-eula
+
+./examples/models/llama3_2_vision/install_requirements.sh
is this the best place to put this?
The idea was to have it in place before the testing, but thinking about it now I think you are right. Having it as part of all the steps that use pytest in backends/arm/test/test_arm_baremetal.sh (or creating a prepare-test step) might be better, and easier: anyone can then just run the test and the needed dependencies will be installed for them.
Done
backends/arm/operators/op_slice.py
Outdated
@@ -32,9 +32,11 @@ def define_node(
     output: TosaArg,
 ) -> None:

+    # See slice_copy_support.py
+    assert len(inputs) == 4 or (len(inputs) == 5 and inputs[4].number == 1)
Nit: add an error message
Done
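The requested error message could look like the sketch below. This is a hedged illustration, not the PR's final code: `SliceArg` is a stand-in for the real TosaArg type, and the message wording is an assumption.

```python
# Sketch of the slice input check with a descriptive error message.
# `SliceArg` is a hypothetical stand-in for the real TosaArg type.
class SliceArg:
    def __init__(self, number):
        self.number = number

def check_slice_args(inputs):
    # slice takes 4 inputs, or 5 when an explicit step of 1 is given;
    # see slice_copy_support.py for why other steps are unsupported.
    assert len(inputs) == 4 or (len(inputs) == 5 and inputs[4].number == 1), (
        f"Expected 4 inputs, or 5 inputs with step == 1; got {len(inputs)}"
    )
```

A message like this turns a bare AssertionError into something actionable when an unexpected graph reaches the backend.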
.check_count({"torch.ops.higher_order.executorch_call_delegate": 14})
.to_executorch()
.run_method_and_compare_outputs(
    inputs=llama_inputs, atol=1.8, rtol=0.01
how did we come up with atol? Also this is running on MI + tosa-ref w/o quantization, right?
The tolerance is really bad here; this is just to get the test passing. It is something we need to investigate. Added a TODO.
Yes, this is testing the reference implementation vs TOSA without quantization.
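For context, the combined absolute/relative tolerance check that comparisons of this kind typically perform can be sketched as below. This is a scalar simplification of the usual allclose-style formula; the real comparison operates elementwise on tensors.

```python
# Scalar sketch of an allclose-style tolerance check: an output passes
# if its deviation from the reference is within atol plus an
# rtol-scaled fraction of the reference magnitude.
def within_tolerance(ref, out, atol=1.8, rtol=0.01):
    return abs(ref - out) <= atol + rtol * abs(ref)
```

With atol=1.8 dominating for values of typical logit magnitude, the relative term contributes little, which is why the reviewer flags the tolerance as worth investigating.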
ping @digantdesai
@digantdesai how do we want to proceed with this?
Digant is on paternity leave right now, so expect delays.
LGTM.
MacOS failures unrelated.
Sorry @zingo, we probably need to revert this, then import and run internal tests before landing. Failing target: executorch/backends/arm/test:cast_int64_pass on main.
This line:
probably needs to use getattr to check?
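The suggested getattr pattern avoids an AttributeError when an attribute may be absent. A generic sketch follows; the actual attribute involved in the revert is not shown in this thread, so `meta` here is purely illustrative.

```python
# Generic sketch of the getattr pattern: probe for a possibly missing
# attribute with a default instead of accessing it directly.
class Node:
    pass

node = Node()
# Direct access (node.meta) would raise AttributeError here.
meta = getattr(node, "meta", None)
if meta is None:
    meta = {}
```

This keeps the code working for objects produced by passes that never set the attribute, at the cost of silently accepting the default.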
This reverts commit 4c54bab.
@kirklandsign @digantdesai is it possible to add the tests into
Oh, are these tests triggered on GH right now? Seems like a missing-config issue?
Created #9476 to track this. |
cc @digantdesai @freddan80 @per @zingo @oscarandersson8218