[ET-VK][LlaMa] Split SDPA + KV cache operator into SDPA operator and KV cache update operator #8068


Closed
wants to merge 1 commit

Conversation

SS-JIA
Contributor

@SS-JIA SS-JIA commented Jan 30, 2025

Stack from ghstack (oldest at bottom):

Context

#7413 and #7412 split the `sdpa_with_kv_cache` operator into two separate operators, `update_cache` and `custom_sdpa`, to decouple the cache update step from the actual SDPA computation.

As a result, SDPA is no longer being delegated to Vulkan because of this interface change. To rectify this, Vulkan must also split `sdpa_with_kv_cache` into two operators.

Note that in this diff the new operators are not yet partitioned, because of complications caused by assertion ops in the graph. The next diff adds a pass to remove those assertion ops, which allows the new operators to be partitioned.
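
For readers less familiar with the fused op, the sketch below shows the shape of the decomposition in plain PyTorch. It is a minimal illustration only: the function names mirror the new operators, but the tensor layout (batch, heads, sequence, head dim) and the signatures are assumptions, not the actual ExecuTorch custom ops.

```python
# Illustrative stand-ins only; the real update_cache/custom_sdpa are ExecuTorch
# custom ops whose exact signatures may differ from this sketch.
import torch
import torch.nn.functional as F


def update_cache(new_kv: torch.Tensor, cache: torch.Tensor, input_pos: int) -> None:
    # Write the new key/value entries into the cache at the current position.
    seq_len = new_kv.shape[2]
    cache[:, :, input_pos : input_pos + seq_len] = new_kv


def custom_sdpa(q, k_cache, v_cache, input_pos, seq_len):
    # Attend over the portion of the cache that has been filled so far.
    end = input_pos + seq_len
    return F.scaled_dot_product_attention(q, k_cache[:, :, :end], v_cache[:, :, :end])


def sdpa_with_kv_cache(q, k, v, k_cache, v_cache, input_pos):
    # The old fused operator did both steps in one call; splitting them lets a
    # backend handle the cache update and the SDPA computation independently.
    update_cache(k, k_cache, input_pos)
    update_cache(v, v_cache, input_pos)
    return custom_sdpa(q, k_cache, v_cache, input_pos, q.shape[2])
```

With the split form, the Vulkan partitioner can claim `update_cache` and `custom_sdpa` as separate nodes rather than having to match the old fused interface.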

Differential Revision: [D68919676](https://our.internmc.facebook.com/intern/diff/D68919676/)


pytorch-bot bot commented Jan 30, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8068

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Pending

As of commit 2271103 with merge base afc5a50:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) Jan 30, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D68919676

SS-JIA added a commit that referenced this pull request Jan 30, 2025
…KV cache update operator


ghstack-source-id: 263930059
Pull Request resolved: #8068
@SS-JIA SS-JIA added the release notes: vulkan label (changes to the Vulkan backend delegate) Jan 30, 2025
SS-JIA added a commit that referenced this pull request Jan 30, 2025
…KV cache update operator + Add `RemoveAsserts` pass and apply it during LlaMa export

**Note**: This diff is a combination of D68919676 (#8068) and D68919678 (no pull request). I decided to combine the two because `ghexport` was having trouble exporting the second diff, and because both diffs are needed for `export_llama` to work, so it makes more sense to have a single diff.

## Context

Recently, some assertion ops were added to the Llama source code.

Unfortunately, this causes issues for the Vulkan delegate: runtime assertions are not yet supported in Vulkan, so the assertion ops cause graph breaks.

To prevent graph breaks when delegating to Vulkan, apply a pass that removes assertion ops during the Llama export.
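
As a rough illustration of what such a pass does, here is a hedged sketch written against plain `torch.fx` rather than the actual `RemoveAsserts` implementation or the ExecuTorch pass infrastructure; the target-matching heuristic is an assumption.

```python
from torch.fx import GraphModule


def remove_asserts(gm: GraphModule) -> GraphModule:
    """Erase assertion ops so they cannot cause graph breaks during partitioning."""
    for node in list(gm.graph.nodes):
        # Assertion ops produce no value that downstream compute consumes, so
        # they can be erased without rewiring any users.
        if node.op == "call_function" and "assert" in str(node.target):
            gm.graph.erase_node(node)
    gm.graph.eliminate_dead_code()
    gm.recompile()
    return gm
```

In the export flow described here, the removal happens before partitioning, so the Vulkan partitioner no longer encounters unsupported assertion ops.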

Differential Revision: [D68922404](https://our.internmc.facebook.com/intern/diff/D68922404/)

[ghstack-poisoned]
@SS-JIA SS-JIA closed this Jan 30, 2025
@SS-JIA SS-JIA deleted the gh/SS-JIA/183/head branch January 30, 2025 22:49