
[ExecuTorch][Llama] Split custom sdpa op and kv cache #7412


Merged: 8 commits into main on Jan 16, 2025

Conversation

@kimishpatel (Contributor) commented on Dec 20, 2024

Stack from ghstack (oldest at bottom):

Summary:
This makes it easier to swap modules when using model definitions
from torchtune.

Test Plan:
CI

Differential Revision: D67914056
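For context, a minimal sketch of what the split means at a call site. The op names `update_cache` and `custom_sdpa` come from this PR, but the signatures and the [B, S, H, D] layout below are simplified assumptions written as plain PyTorch reference code, not the registered custom-op schemas.

```python
import torch
import torch.nn.functional as F

# Reference semantics only -- the real ops are custom kernels and their
# exact schemas may differ from these assumed signatures.

def update_cache(proj: torch.Tensor, cache: torch.Tensor, start_pos: int) -> None:
    # The cache-write half of the old fused op: copy the new k or v
    # projection into the cache in place, starting at start_pos.
    seq_len = proj.shape[1]
    cache[:, start_pos : start_pos + seq_len] = proj

def custom_sdpa(q: torch.Tensor, k_cache: torch.Tensor,
                v_cache: torch.Tensor, start_pos: int) -> torch.Tensor:
    # The attention-only half: reads the caches, mutates nothing.
    # A single-token decode step is assumed, so no causal mask is needed.
    end = start_pos + q.shape[1]
    k, v = k_cache[:, :end], v_cache[:, :end]
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    )
    return out.transpose(1, 2)

def attention_step(q, k, v, k_cache, v_cache, start_pos):
    # Before this PR, a single fused sdpa_with_kv_cache op covered all
    # three calls below; after the split, either half can be replaced
    # on its own.
    update_cache(k, k_cache, start_pos)
    update_cache(v, v_cache, start_pos)
    return custom_sdpa(q, k_cache, v_cache, start_pos)
```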

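And a hedged sketch of the module-swap pattern the summary alludes to: walk a torchtune model definition and replace a target submodule (e.g. its SDPA or KV-cache module) with an export-friendly equivalent. The helper below is a generic illustration, not a function from this PR.

```python
import torch.nn as nn

def swap_modules(model: nn.Module, target_cls: type, make_replacement) -> nn.Module:
    # Recursively replace every instance of target_cls in the model.
    # With cache update and SDPA as separate ops, the KV-cache module
    # and the attention module can be swapped independently.
    for name, child in model.named_children():
        if isinstance(child, target_cls):
            setattr(model, name, make_replacement(child))
        else:
            swap_modules(child, target_cls, make_replacement)
    return model
```

Because the fused op entangled both concerns, swapping just the attention math or just the cache previously meant reimplementing both; the split removes that coupling.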
pytorch-bot commented on Dec 20, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7412

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d70b711 with merge base d1b33cb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Dec 20, 2024 (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed).
@kimishpatel (Contributor, Author) commented:

@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

…nd kv cache"

Summary:
This enables us to do more easier module swap with model definitions
from torchtune

Test Plan:
CI

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D67914056](https://our.internmc.facebook.com/intern/diff/D67914056)

[ghstack-poisoned]
Summary:
This enables us to do more easier module swap with model definitions
from torchtune

Test Plan:
CI

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D67914056](https://our.internmc.facebook.com/intern/diff/D67914056)

[ghstack-poisoned]
@kimishpatel (Contributor, Author) commented:

@kimishpatel has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

5 similar comments

…nd kv cache"

Summary:
This enables us to do more easier module swap with model definitions
from torchtune

Test Plan:
CI

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D67914056](https://our.internmc.facebook.com/intern/diff/D67914056)

[ghstack-poisoned]
Summary:
This enables us to do more easier module swap with model definitions
from torchtune

Test Plan:
CI

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D67914056](https://our.internmc.facebook.com/intern/diff/D67914056)

[ghstack-poisoned]

@kimishpatel changed the base branch from gh/kimishpatel/148/base to main on January 16, 2025.
@kimishpatel merged commit af7613c into main on January 16, 2025.
44 of 46 checks passed.
@kimishpatel deleted the gh/kimishpatel/148/head branch on January 16, 2025.
YIWENX14 pushed a commit that referenced this pull request Jan 28, 2025
* [ExecuTorch][Llama] Split custom sdpa op and kv cache
* Update on "[ExecuTorch][Llama] Split custom sdpa op and kv cache"
SS-JIA added a commit that referenced this pull request Jan 30, 2025
…KV cache update operator

## Context

#7413 and #7412 split the `sdpa_with_kv_cache` operator into two separate operators, `update_cache` and `custom_sdpa` to decouple the cache update step from the actual SDPA computation.

As a result of this interface change, SDPA is no longer delegated on Vulkan. To rectify this, Vulkan must also split `sdpa_with_kv_cache` into two operators.

Note that as of this diff the new operators are not yet partitioned, due to complications caused by assertion ops in the graph. The next diff adds a pass that removes those assertion ops, which allows the new operators to be partitioned.

Differential Revision: [D68916952](https://our.internmc.facebook.com/intern/diff/D68916952/)

[ghstack-poisoned]
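To make the delegation regression concrete, here is an illustrative sketch of why a backend keyed on the old fused op stops matching; the op-name strings and the helper are made up for illustration, not the actual Vulkan partitioner code.

```python
# Hypothetical op allow-list; names are illustrative only.
FUSED = {"llama.sdpa_with_kv_cache.default"}
SPLIT = {"llama.update_cache.default", "llama.custom_sdpa.default"}

def is_node_supported(op_name: str, supported_ops: set) -> bool:
    # A partitioner whose allow-list still contains only the fused op
    # rejects both new ops, so SDPA falls back to portable kernels
    # instead of being delegated; the fix is to support the split pair.
    return op_name in supported_ops

# Neither new op matches the stale allow-list.
assert not any(is_node_supported(op, FUSED) for op in SPLIT)
```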
SS-JIA added a commit that referenced this pull request Jan 30, 2025
…KV cache update operator

Differential Revision: [D68919676](https://our.internmc.facebook.com/intern/diff/D68919676/)

ghstack-source-id: 263930059
Pull Request resolved: #8068
SS-JIA added a commit that referenced this pull request Jan 30, 2025
…KV cache update operator + Add `RemoveAsserts` pass and apply it during LlaMa export

**Note**: This diff combines D68919676 (#8068) and D68919678 (no pull request). I combined the two because `ghexport` had trouble exporting the second diff, and because both diffs are needed for `export_llama` to work, so it makes more sense to have a single diff.

## Context

Recently, some assertion ops were added to the Llama source code.

Unfortunately, this causes issues for the Vulkan delegate: runtime assertions are not yet supported in Vulkan, so the unsupported assertion ops cause graph breaks.

To prevent graph breaks when delegating to Vulkan, apply a pass that removes assertion ops during Llama export.

Differential Revision: [D68922404](https://our.internmc.facebook.com/intern/diff/D68922404/)

[ghstack-poisoned]
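A minimal sketch of what a RemoveAsserts-style pass can look like, written against `torch.fx`; the set of assertion ops below is an assumption, and the actual pass in the repo may target a different list.

```python
import torch
from torch.fx import GraphModule
from torch.fx.passes.infra.pass_base import PassBase, PassResult

# Assumed set of assertion ops to strip; the real pass may differ.
ASSERT_OPS = {
    torch.ops.aten._assert_scalar.default,
    torch.ops.aten.sym_constrain_range_for_size.default,
}

class RemoveAssertsPass(PassBase):
    def call(self, graph_module: GraphModule) -> PassResult:
        modified = False
        for node in list(graph_module.graph.nodes):
            if node.op == "call_function" and node.target in ASSERT_OPS:
                # Assertion ops produce no value consumed downstream,
                # so they can be erased without rewiring the graph.
                graph_module.graph.erase_node(node)
                modified = True
        if modified:
            graph_module.graph.eliminate_dead_code()
            graph_module.recompile()
        return PassResult(graph_module, modified)
```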
kedarnath03 pushed a commit to kedarnath03/executorch that referenced this pull request Jun 25, 2025

ghstack-source-id: 9db180e
Pull Request resolved: pytorch/executorch#7412
Labels: CLA Signed, topic: not user facing