Qualcomm AI Engine Direct - Mimi Enablement Stage 1 #9570


Merged: 2 commits merged into pytorch:main on Mar 26, 2025

Conversation

winskuo-quic (Collaborator)

Summary

This is stage 1 of Mimi Enablement; stage 2 will cover the actual model enablement.

  • Supported ops:
    • exp
    • expm1
    • elu
    • transpose conv1d
    • bitwise_and
    • scalar_tensor
    • stack
    • unbind

@winskuo-quic winskuo-quic requested a review from cccclai as a code owner March 25, 2025 09:02

pytorch-bot bot commented Mar 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/9570

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit f277d44 with merge base 90f0843:

NEW FAILURES - The following jobs have failed:

  • pull / android / build-llm-demo / linux-job (gh)
    RuntimeError: Command docker exec -t 017c2ffa40c46acfc35ab9ee6017c55b76f8093f9ed7ad1fecc3a9876658887a /exec failed with exit code 1
  • pull / test-models-linux (phi_4_mini, portable, linux.4xlarge.memory) / linux-job (gh)
    RuntimeError: Model phi_4_mini is not a valid name. Available models are ['mul', 'linear', 'add', 'add_mul', 'softmax', 'dl3', 'edsr', 'emformer_transcribe', 'emformer_predict', 'emformer_join', 'llama2', 'llama', 'llama3_2_vision_encoder', 'lstm', 'mobilebert', 'mv2', 'mv2_untrained', 'mv3', 'vit', 'w2l', 'ic3', 'ic4', 'resnet18', 'resnet50', 'llava', 'efficient_sam', 'qwen2_5', 'phi-4-mini'].

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Mar 25, 2025
@winskuo-quic force-pushed the dev1/winskuo/mimi_stage1 branch from 7cf55cd to 795f263 on March 25, 2025 12:13
@winskuo-quic force-pushed the dev1/winskuo/mimi_stage1 branch from 795f263 to f277d44 on March 25, 2025 12:35
winskuo-quic (Collaborator, Author)

Hi @cccclai, @iseeyuan, @billmguo,
This PR is stage 1 of Mimi Enablement, which mainly focuses on supporting operations and adding some passes.

Please note that the change under examples/models/llama/export_llama_lib.py preserves the original behavior: the AnnotateDecomposed pass is now disabled by default, but export_llama_lib.py needs this pass to be on, so I have enabled it manually there.

Please have a look.
Thanks
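The disabled-by-default, opt-in-at-the-call-site pattern described above can be sketched generically. All names below are hypothetical placeholders; the actual ExecuTorch/QNN pass-manager API differs.

```python
# Sketch of a pass that is off by default and must be explicitly enabled
# by callers that depend on it (as export_llama_lib.py does with
# AnnotateDecomposed). Names are illustrative, not the real API.
from dataclasses import dataclass


@dataclass
class PassConfig:
    # Disabled by default after this change.
    annotate_decomposed: bool = False


def build_pass_list(config: PassConfig) -> list:
    passes = ["ConvertToLinear", "FoldQDQ"]  # placeholder pass names
    if config.annotate_decomposed:
        passes.append("AnnotateDecomposed")
    return passes


print(build_pass_list(PassConfig()))                          # default: pass absent
print(build_pass_list(PassConfig(annotate_decomposed=True)))  # call site opts in
```

The design choice here is that most export flows no longer pay for the pass, while the one flow that relies on its annotations requests it explicitly, keeping the old behavior for that path.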

facebook-github-bot (Contributor)

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

cccclai (Contributor) commented Mar 25, 2025

> Hi @cccclai, @iseeyuan, @billmguo, This PR is the stage 1 of Mimi Enablement, which is mainly focusing on supporting operations and adding some passes.
>
> Please notice the change under examples/models/llama/export_llama_lib.py is to preserve the original behavior. The pass AnnotateDecomposed is now disabled by default. However, since the export_llama_lib.py actually needs this pass to be on, I have manually enabled the pass.
>
> Please have a look. Thanks

Mind adding this note as a code comment, so people know the context in the future? Thanks.

facebook-github-bot (Contributor)

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

winskuo-quic (Collaborator, Author)

> Mind adding this note as part of the code command, so people know the context in the future? Thanks.

Thanks for reviewing and for the reminder.
I will add this note during stage 2 of Mimi Enablement.

winskuo-quic (Collaborator, Author)

@pytorchbot label "release notes: qualcomm"

pytorch-bot added the release notes: qualcomm label on Mar 26, 2025
facebook-github-bot (Contributor)

@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

cccclai (Contributor) left a comment

Looks good to me, thank you!

@cccclai cccclai merged commit c18e5f6 into pytorch:main Mar 26, 2025
80 of 83 checks passed
kirklandsign pushed a commit that referenced this pull request Apr 11, 2025