[mlir][linalg] Allow fusing reshapes with nonparallel operands #130148

Merged
merged 2 commits into from
Mar 7, 2025

Conversation


@IanWood1 IanWood1 commented Mar 6, 2025

Removes the condition that checks that the operand is not indexed by reduction iterators, which allows for more fine-grained control via the reshape fusion control function. For example, users could allow fusing reshapes that expand the M/N dims of a matmul but not the K dims (or preserve the current behavior by not fusing at all).
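With the check moved out of `isFusableWithReshapeByDimExpansion`, the decision falls to the caller's control function. A minimal sketch (assuming the upstream `linalg::ControlFusionFn` hook and standard `LinalgOp` query methods) of a control function that restores the previous behavior, fusing only when every dimension the reshaped operand indexes is parallel:

```cpp
// Sketch: a ControlFusionFn that mimics the removed check. It rejects
// fusion whenever any loop indexing the fused operand is a reduction
// (e.g. the K dim of a matmul). Assumes the MLIR Linalg transform API.
linalg::ControlFusionFn onlyParallelOperands =
    [](OpOperand *fusedOperand) -> bool {
  auto linalgOp = dyn_cast<linalg::LinalgOp>(fusedOperand->getOwner());
  if (!linalgOp)
    return false;
  AffineMap operandMap = linalgOp.getMatchingIndexingMap(fusedOperand);
  SmallVector<utils::IteratorType> iteratorTypes =
      linalgOp.getIteratorTypesArray();
  // Fuse only if every dimension used by this operand is parallel.
  return llvm::all_of(operandMap.getResults(), [&](AffineExpr expr) {
    return linalg::isParallelIterator(
        iteratorTypes[cast<AffineDimExpr>(expr).getPosition()]);
  });
};

// Registered the usual way, e.g.:
// linalg::populateFoldReshapeOpsByExpansionPatterns(patterns,
//                                                   onlyParallelOperands);
```

A control function could instead inspect which dims the reshape expands and permit M/N but not K, which is the finer-grained use case this PR enables.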

@IanWood1 IanWood1 marked this pull request as ready for review March 6, 2025 18:46

llvmbot commented Mar 6, 2025

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-linalg

Author: Ian Wood (IanWood1)

Changes

Removes the condition that checks that the operand is not indexed by reduction iterators, which allows for more fine-grained control via the reshape fusion control function. For example, users could allow fusing reshapes that expand the M/N dims of a matmul but not the K dims (or preserve the current behavior by not fusing at all).


Full diff: https://github.com/llvm/llvm-project/pull/130148.diff

1 file affected:

  • (modified) mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp (+1-6)
diff --git a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
index a45b5c43f5d33..337fd8f3a0ac1 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/ElementwiseOpFusion.cpp
@@ -566,7 +566,6 @@ static bool isFusableWithReshapeByDimExpansion(LinalgOp linalgOp,
   // - All the indexing maps for operands and results are projected
   //   permutations.
   // - The fused tensor is not a scalar.
-  // - All the loops for the reshaped operand are parallel loops.
   SmallVector<utils::IteratorType> iteratorTypes =
       linalgOp.getIteratorTypesArray();
   AffineMap operandMap = linalgOp.getMatchingIndexingMap(fusableOpOperand);
@@ -577,11 +576,7 @@ static bool isFusableWithReshapeByDimExpansion(LinalgOp linalgOp,
                             .getValue()
                             .isProjectedPermutation();
                       }) &&
-         operandMap.getNumResults() > 0 &&
-         llvm::all_of(operandMap.getResults(), [&](AffineExpr expr) {
-           return isParallelIterator(
-               iteratorTypes[cast<AffineDimExpr>(expr).getPosition()]);
-         });
+         operandMap.getNumResults() > 0;
 }
 
 namespace {

@MaheshRavishankar MaheshRavishankar left a comment

Maybe add a test for this?

@IanWood1 IanWood1 changed the title [mlir][linalg] Allow fusing reshapes with parallel operands [mlir][linalg] Allow fusing reshapes with nonparallel operands Mar 6, 2025

MaheshRavishankar commented Mar 7, 2025

Already approved. Feel free to land it when you are ready

Edit: geez. Sorry hadn't approved

@IanWood1 IanWood1 merged commit 813bbe0 into llvm:main Mar 7, 2025
11 checks passed
jph-13 pushed a commit to jph-13/llvm-project that referenced this pull request Mar 21, 2025
…#130148)

Removes the condition that checks that the operand is not indexed by
reduction iterators, which allows for more fine-grained control via the
reshape fusion control function. For example, users could allow fusing
reshapes that expand the M/N dims of a matmul but not the K dims (or
preserve the current behavior by not fusing at all).

---------

Signed-off-by: Ian Wood <[email protected]>