Use Android llm benchmark runner #5094
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5094
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 0f45f39 with merge base ee752f0.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
- |
  adb -s $DEVICEFARM_DEVICE_UDID shell am start -n com.example.executorchllamademo/.Benchmarking \
    --es "model_dir" "/data/local/tmp/llama" \
    --es "tokenizer_path" "/data/local/tmp/llama/tokenizer.bin"
Thank you! Exactly what I want to do :D
Need to find a way to wait for the result file to appear. shell am start is async.
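Since shell am start only fires the intent and returns immediately, one way to sanity-check that the activity actually launched is to look for the app process. A sketch, not part of the PR:

# pidof prints the PID (and exits 0) if the demo app process is running
adb -s $DEVICEFARM_DEVICE_UDID shell pidof com.example.executorchllamademo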
Oh, got it. TIL. I'm still working on making the command work, so stay tuned :)
Does this kind of stuff work? 🤔
adb shell while [ ! -f /data/local/tmp/result.txt ]; do sleep 1; done
adb shell doesn't like the way I write the bash script, so I need to look for a workaround by cat-ing the results. It works nonetheless, so I guess we are good :)
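A likely culprit, though the thread doesn't confirm it, is quoting: the host shell consumes the unquoted while/do/done tokens before adb ever sees them, so the loop never reaches the device. Passing the loop as a single quoted string is one sketch of a fix:

# Quoted so the host shell hands the whole loop to the device shell intact
adb shell "while [ ! -f /data/local/tmp/result.txt ]; do sleep 1; done"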
Please format 🥲 https://github.com/google/google-java-format/releases, download the binary and run... sorry, no built-in tool right now.
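For reference, invoking the downloaded formatter could look like this sketch (the jar version and file path are hypothetical):

# --replace rewrites the listed files in place instead of printing to stdout
java -jar google-java-format-1.23.0-all-deps.jar --replace \
  app/src/main/java/com/example/executorchllamademo/Benchmarking.java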
# TODO (huydhn): Polling like this looks brittle, figure out if there is a better way to wait
# for the benchmark results
while ! adb -s $DEVICEFARM_DEVICE_UDID shell run-as com.example.executorchllamademo \
    test -f files/benchmark_results.json; do
  echo "Waiting for benchmark results..."
  sleep 30
done
Can we have a maximum timeout? Or just rely on GH to timeout?
Yup, we can have a maximum timeout, as the GH Actions job timeout is 1 hour, which is a bit too long.
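Bounding the polling loop above with a maximum number of attempts could look like this sketch (the 10-minute budget and variable names are assumptions, not the PR's final values):

# Give up after MAX_ATTEMPTS polls: 20 x 30s = 10 minutes (assumed budget)
MAX_ATTEMPTS=20
attempt=0
while ! adb -s $DEVICEFARM_DEVICE_UDID shell run-as com.example.executorchllamademo \
    test -f files/benchmark_results.json; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$MAX_ATTEMPTS" ]; then
    echo "Timed out waiting for benchmark results" >&2
    exit 1
  fi
  echo "Waiting for benchmark results..."
  sleep 30
done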
- adb -s $DEVICEFARM_DEVICE_UDID shell "ls -la /data/local/tmp/llama/"
- echo "Wait for the results"
- |
  # TODO (huydhn): Polling like this looks brittle, figure out if there is a better way to wait
  # for the benchmark results
Curious what could be brittle with this approach?
I just feel that there might be a better approach out there, so I put a TODO here to remind myself for now. I don't like polling for results in general; it feels like a waste of requests :)
@huydhn has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
The current solution is to:
1. Start the Benchmarking activity on the device with adb shell am start, passing model_dir and tokenizer_path as string extras.
2. Poll until files/benchmark_results.json appears, since am start returns asynchronously.
3. Read the results out of the app sandbox via run-as com.example.executorchllamademo and collect them with the job artifacts.
Testing
https://github.com/pytorch/executorch/actions/runs/10731052861/job/29761525913
Download the artifacts from AWS and confirm that the benchmark_results.json file is there together with the instrument.log.
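A quick local check after downloading could look like this sketch (the artifact zip name is hypothetical; AWS Device Farm artifact names vary):

# Confirm both files are present in the downloaded artifact
unzip -l customer_artifacts.zip | grep -E 'benchmark_results\.json|instrument\.log'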