[XNNPACK][Partitioner] Migrate completely to new config based partitioner #4798

Merged: 22 commits merged into main on Aug 20, 2024

Conversation

kirklandsign (Contributor)

Stack from ghstack (oldest at bottom):

The new config-based partitioner maintains parity with the old partitioners. I believe we can now replace all the old partitioners with the new one.

I also added pre-configured partitioners like the ones most users use today, so that this change does not break anyone's current setup. Despite there being "multiple" partitioners, it is the same partitioner instantiated with different configs. The diff for xnnpack_partitioner.py looks large because the old file is essentially deleted and replaced wholesale, but if you just look at the new file itself it is pretty straightforward.

https://www.internalfb.com/code/fbsource/[D61250577-V2]/fbcode/executorch/backends/xnnpack/partition/xnnpack_partitioner.py

Differential Revision: D61250577

Pull Request resolved: #4765
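
For context, here is a minimal lowering sketch with the new config-based partitioner. The toy model and inputs are placeholders, and the assumption is that the default `XnnpackPartitioner()` instantiation matches the old partitioner's out-of-the-box coverage:

```python
# Hedged sketch: delegating a toy model to XNNPACK via the new partitioner.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
from executorch.exir import to_edge

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# Export and convert to the Edge dialect.
edge = to_edge(torch.export.export(model, example_inputs))

# One call tags and delegates every subgraph the partitioner's configs accept.
edge = edge.to_backend(XnnpackPartitioner())
exec_prog = edge.to_executorch()
```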

mcr229 and others added 19 commits August 16, 2024 15:47
In order to maintain parity with the current to_edge and to_backend lowering flow, we need to support source-based partitioning.

We apply source-based partitioning in the AddMMConfig to partition all the nodes surrounding addmm so that it can be recomposed internally. While this is fine for the to_edge and to_backend flow, the more robust to_edge_transform_and_lower flow will not have to use it.

Differential Revision: [D61250576](https://our.internmc.facebook.com/intern/diff/D61250576/)
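
As an illustration, source-based partitioning can be built on torch.fx's source matcher, which groups nodes by the module they were traced from. This is a simplified sketch, not the actual AddMMConfig code:

```python
# Sketch: cluster every node that came from the same nn.Linear call.
# After decomposition, nn.Linear lowers to addmm plus surrounding
# views/permutes; tagging the whole cluster as one unit lets addmm be
# recomposed inside the delegate.
import torch
from torch.fx.passes.utils.source_matcher_utils import get_source_partitions

# model/example_inputs as in the earlier sketch.
ep = torch.export.export(model, example_inputs).run_decompositions()
partitions = get_source_partitions(ep.graph_module.graph, [torch.nn.Linear])
for source, found in partitions.items():
    for partition in found:
        # partition.nodes is the full cluster to tag together.
        print(source, [n.name for n in partition.nodes])
```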

A little bug got past partitioning: 3d and transposed convolutions were now being partitioned.

We expand the scope of ConvConfig's check_constraint to also fail when the convolution is either transposed or 3d.

Differential Revision: [D61368158](https://our.internmc.facebook.com/intern/diff/D61368158/)
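
The added constraint boils down to inspecting aten.convolution's arguments. The following is a hedged sketch of that check; the standalone function is illustrative, not ConvConfig's exact method:

```python
import torch

def conv_is_partitionable(node: torch.fx.Node) -> bool:
    # aten.convolution args: (input, weight, bias, stride, padding,
    # dilation, transposed, output_padding, groups)
    transposed = node.args[6]
    if transposed:
        return False  # reject transposed convolutions
    # conv3d inputs are 5-dimensional (N, C, D, H, W); reject those too
    input_meta = node.args[0].meta["val"]
    if input_meta.dim() == 5:
        return False
    return True
```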

We add the SDPA config to the partitioner here.

Currently there is an issue with SDPA when it is used from the FairSeq multihead attention models, so I have it disabled for the base partitioner until we resolve that. Otherwise, our tests can use SDPA correctly from there. We have to track D60553559; I will follow up on this later.

Differential Revision: [D60323285](https://our.internmc.facebook.com/intern/diff/D60323285/)
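
Until then, the idea is that tests can still opt in by constructing the partitioner with the SDPA config explicitly. A hypothetical sketch: the `configs` keyword and the SDPAConfig import path are assumptions, not the exact API in this diff:

```python
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
# Hypothetical import path for the SDPA config.
from executorch.backends.xnnpack.partition.config import SDPAConfig

# Default partitioner: SDPA stays out of the base config set for now.
default_partitioner = XnnpackPartitioner()

# Test-only partitioner that opts into SDPA (hypothetical `configs` kwarg).
sdpa_partitioner = XnnpackPartitioner(configs=[SDPAConfig])
```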


pytorch-bot bot commented Aug 20, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4798

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c6b030e with merge base f93a5b5:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Aug 20, 2024
Base automatically changed from gh/mcr229/3/head to main August 20, 2024 18:13
@kirklandsign merged commit c1c8b00 into main on Aug 20, 2024
5 checks passed
@kirklandsign deleted the gh/mcr229/4/head branch on August 20, 2024 18:24