
[mlir][vector] Refine vector.transfer_read hoisting/forwarding #65770


Merged

Conversation

@banach-space (Contributor) commented Sep 8, 2023

Make sure that when analysing a vector.transfer_read that's a candidate for either hoisting or store-to-load forwarding, memref.collapse_shape Ops are correctly included in the alias analysis. This is done by either:

  • making sure that relevant users are taken into account, or
  • correctly identifying the source Ops (see the sketch below for the kind of pattern involved).
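
To make the hazard concrete, here is a minimal, hypothetical sketch (not IR taken from the patch or its tests; the function name, shapes and the `test.use` op are made up):

```mlir
// %collapsed is a view of %alloca, so the transfer_write through it may
// clobber the data that the transfer_read below loads from %alloca. The
// read is loop-variant and must be neither hoisted out of the loop nor
// replaced by the value of an earlier store.
func.func @hoisting_hazard(%vec: vector<4xf32>, %lb: index, %ub: index, %step: index) {
  %c0 = arith.constant 0 : index
  %pad = arith.constant 0.0 : f32
  %alloca = memref.alloca() : memref<2x2xf32>
  %collapsed = memref.collapse_shape %alloca [[0, 1]]
      : memref<2x2xf32> into memref<4xf32>
  scf.for %i = %lb to %ub step %step {
    vector.transfer_write %vec, %collapsed[%c0] {in_bounds = [true]}
        : vector<4xf32>, memref<4xf32>
    %read = vector.transfer_read %alloca[%c0, %c0], %pad
        {in_bounds = [true, true]} : memref<2x2xf32>, vector<2x2xf32>
    "test.use"(%read) : (vector<2x2xf32>) -> ()
  }
  return
}
```

The analysis needs to see that the write's destination (`%collapsed`) and the read's source (`%alloca`) refer to the same allocation; that is what the two bullet points above address.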

@banach-space requested a review from a team as a code owner September 8, 2023 15:30
@github-actions bot added the mlir:core, mlir:linalg and mlir labels Sep 8, 2023
@banach-space requested a review from a team as a code owner September 8, 2023 18:55
@banach-space force-pushed the andrzej/fix_hoisting_with_collapse branch from 301a920 to a2b9f12 on September 9, 2023 09:38
@banach-space changed the title from "[mlir][vector] Refine vector.transfer_read hoisting" to "[mlir][vector] Refine vector.transfer_read hoisting/forwarding" Sep 9, 2023
@matthias-springer (Member) left a comment

This fixes hoisting for collapse_shape, but it is still broken for other ops that introduce aliasing (e.g., expand_shape, arith.select, ...). We should query an alias analysis in noAliasingUseInLoop instead of hard-coding ops. Still, better than before so I'd say let's land it if it's blocking.
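
To illustrate the point above, here is a hypothetical example (made-up names and shapes) where the aliasing is introduced by `arith.select` rather than `memref.collapse_shape`, so handling collapse_shape alone would not catch it, whereas a generic alias-analysis query (as suggested above) could:

```mlir
// %view is either %a or %b, so a write through %view may alias %a and the
// transfer_read on %a inside the loop must not be hoisted or forwarded.
func.func @select_alias(%cond: i1, %vec: vector<4xf32>, %lb: index, %ub: index, %step: index) {
  %c0 = arith.constant 0 : index
  %pad = arith.constant 0.0 : f32
  %a = memref.alloca() : memref<4xf32>
  %b = memref.alloca() : memref<4xf32>
  %view = arith.select %cond, %a, %b : memref<4xf32>
  scf.for %i = %lb to %ub step %step {
    vector.transfer_write %vec, %view[%c0] {in_bounds = [true]}
        : vector<4xf32>, memref<4xf32>
    %read = vector.transfer_read %a[%c0], %pad {in_bounds = [true]}
        : memref<4xf32>, vector<4xf32>
    "test.use"(%read) : (vector<4xf32>) -> ()
  }
  return
}
```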

// vector.transfer_write %vec, %collapse_shape
// vector.transfer_write %1, %subview
// Indeed, %alloca and %collapse_shape alias and hence %2 != %1.
matthias-springer (Member):

There is no %2 in this example.

banach-space (Contributor Author):

Thanks, clarified in the latest fixup.

@banach-space (Contributor Author) commented:

> This fixes hoisting for collapse_shape, but it is still broken for other ops that introduce aliasing (e.g., expand_shape, arith.select, ...). We should query an alias analysis in noAliasingUseInLoop instead of hard-coding ops. Still, better than before so I'd say let's land it if it's blocking.

Thanks! Agreed that a more generic approach would be better, but right now this is breaking our convs - I really appreciate you being flexible here 🙏🏻.

@matthias-springer (Member) commented:

Btw, can you perform the hoisting on tensors instead of memrefs? There are no aliasing issues when hoisting before bufferization. I'm trying to see if it's worth generalizing the hoisting infrastructure, making it interface-based, and moving it to mlir/Transforms.

@banach-space (Contributor Author) commented:

> Btw, can you perform the hoisting on tensors instead of memrefs? There are no aliasing issues when hoisting before bufferization.

TL;DR: Possibly. It looks like in many cases hoisting happens before bufferization, but not always.

I'm experimenting with some optimisations for Linalg convs and I'm using IREE to drive this. Looking at IREE's pass pipelines, addBufferizePasses is usually run after e.g. createHoistRedundantVectorTransfersPass. However, in the case of convolutions, OptimizeVectorTransferPass is created after bufferization, and that pass also includes hoistRedundantVectorTransfers.

I've only tried a few pipelines so far, so I might be missing something. Also, I have a few experiments "downstream" on top of this that trigger these aliasing issues. So, on one hand, having this fix in-tree helps me focus on other issues, but I might be doing something daft myself.

I'll add a few TODOs before landing this. Mostly as a reminder that a more generic approach should be implemented instead.

Make sure that when analysing a `vector.transfer_read` that's a candidate
for either hoisting or store-to-load forwarding, memref.collapse_shape
Ops are correctly included in the alias analysis. This is done by
either:
  *  making sure that relevant users are taken into account, or
  *  correctly identifying the source Ops (i.e. `memref.collapse_shape`
     is not really the true source).

Note that this won't fix aliasing for other cases (e.g. with
`memref.expand_shape`). Ideally, we should query an alias analysis in
e.g. `noAliasingUseInLoop` instead of hard-coding ops.
@banach-space force-pushed the andrzej/fix_hoisting_with_collapse branch from 50629b9 to d91aee8 on September 12, 2023 09:32
@banach-space merged commit 22f96ab into llvm:main Sep 12, 2023
@banach-space deleted the andrzej/fix_hoisting_with_collapse branch September 12, 2023 09:34
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
[mlir][vector] Refine vector.transfer_read hoisting/forwarding (#65770)

3 participants