[mlir][linalg] Add a new helper hook: hasVectorizationImpl
#110708
Conversation
@llvm/pr-subscribers-mlir-linalg

Author: Andrzej Warzyński (banach-space)

Changes

The newly added hook simply returns `false` for Ops for which there's no "vectorization logic" in the Linalg Vectorizer (i.e. the `vectorize()` method). It's added so that `transform.structured.vectorize_children_and_apply_patterns` and `transform.structured.vectorize` expose an identical level of functionality (that's not the case ATM).
Full diff: https://github.com/llvm/llvm-project/pull/110708.diff
3 Files Affected:
diff --git a/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h b/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
index 48e657cca96e39..facda13eeaa7ed 100644
--- a/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
+++ b/mlir/include/mlir/Dialect/Linalg/Transforms/Transforms.h
@@ -762,6 +762,13 @@ LogicalResult copyToGPUPrivateMemory(OpBuilder &b, Value src, Value dst);
/// memory is freed when going outside of the scope.
LogicalResult deallocateGPUPrivateMemory(OpBuilder &, Value /*buffer*/);
+/// Check if this Op is vectorizable. All Linalg Op are vectorizable, as well
+/// as selected Tensor Ops. Note that this is merely a high level check and
+/// that the vectorizer also requires various additional pre-conditions to be
+/// met for it to work. These are only checked for Ops that are supported,
+/// other Ops should be rejected early.
+bool isVectorizable(Operation *);
+
/// Emit a suitable vector form for an operation. If provided,
/// `inputVectorSizes` are used to vectorize this operation. `inputVectorSizes`
/// must match the rank of the iteration space of the operation and the sizes
diff --git a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
index 46c8510f4ed514..8632e7db76353c 100644
--- a/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
+++ b/mlir/lib/Dialect/Linalg/TransformOps/LinalgTransformOps.cpp
@@ -3411,11 +3411,11 @@ struct VectorizationPattern : public RewritePattern {
flatten1DDepthwiseConv(flattenConv) {}
LogicalResult matchAndRewrite(Operation *op,
PatternRewriter &rewriter) const override {
- LinalgOp linalgOp = dyn_cast<LinalgOp>(op);
- if (!linalgOp)
- return rewriter.notifyMatchFailure(op, "expected Linalg Op");
- return vectorize(rewriter, linalgOp, /*inputVectorSizes=*/{},
- /*scalableVecDims=*/{}, vectorizeNDExtract,
+ if (!linalg::isVectorizable(op))
+ return rewriter.notifyMatchFailure(op,
+ "Unsupported Op, cannot vectorize");
+ return vectorize(rewriter, op, /*inputVectorSizes=*/{},
+ /*inputScalableVecDims=*/{}, vectorizeNDExtract,
flatten1DDepthwiseConv);
}
@@ -3496,8 +3496,7 @@ DiagnosedSilenceableFailure transform::VectorizeOp::apply(
// TODO: Check that the correct number of vectorSizes was provided.
for (Operation *target : targets) {
- if (!isa<linalg::LinalgOp, tensor::PadOp, tensor::PackOp, tensor::UnPackOp>(
- target)) {
+ if (!linalg::isVectorizable(target)) {
return mlir::emitSilenceableFailure(target->getLoc())
<< "Unsupported Op, cannot vectorize";
}
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
index ca85f4b9b9c156..d873ca41ee86f8 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
@@ -2129,6 +2129,11 @@ static void convertAffineApply(RewriterBase &rewriter, LinalgOp linalgOp) {
}
}
+bool mlir::linalg::isVectorizable(Operation *op) {
+ return isa<linalg::LinalgOp, tensor::PadOp, tensor::PackOp, tensor::UnPackOp>(
+ op);
+}
+
/// Emit a suitable vector form for an operation. If provided,
/// `inputVectorSizes` are used to vectorize this operation.
/// `inputVectorSizes` must match the rank of the iteration space of the
I was expecting this would actually check if the op has the right method, perhaps driven by implementing an interface or something. The code here is mostly moving things around, NFC almost.
Yeah, I realise that the method name suggests something grand, but in reality it is just a thin wrapper around an `isa<>` check. I hope that this is OK. Not strictly needed today, but I'm working on a small refactor that this is blocking (as one of the TD ops bails out too early).
I don't mind the little snippet. @ftynse ?
It is confusing because we have llvm-project/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp, lines 2091 to 2115 in ce72c76.
@@ -762,6 +762,13 @@ LogicalResult copyToGPUPrivateMemory(OpBuilder &b, Value src, Value dst);
/// memory is freed when going outside of the scope.
LogicalResult deallocateGPUPrivateMemory(OpBuilder &, Value /*buffer*/);
+/// Check if this Op is vectorizable. All Linalg Op are vectorizable, as well
I think this statement is not correct. Not all Linalg ops are vectorizable. Linalg ops with non-trivial indexing maps are not vectorizable atm. See llvm-project/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp, lines 1916 to 1926 in ce72c76:
  // TODO: the common vector shape is equal to the static loop sizes only when
  // all indexing maps are projected permutations. For convs and stencils the
  // logic will need to evolve.
  if (!allIndexingsAreProjectedPermutation(linalgOp)) {
    LDBG("precondition failed: not projected permutations\n");
    return failure();
  }
  if (failed(reductionPreconditions(linalgOp))) {
    LDBG("precondition failed: reduction preconditions\n");
    return failure();
  }
Sure, but then we have to repeat the same snippet twice. Also, in case it wasn't clear from the diff, this is what's happening today (pseudo-code to capture the high-level idea):

// CURRENT logic for transform.structured.vectorize
LogicalResult process_vectorize_td(Operation *target) {
  if (!isa<linalg::LinalgOp, tensor::PadOp, tensor::PackOp, tensor::UnPackOp>(target))
    return rewriter.notifyMatchFailure(op, "Unsupported Op, cannot vectorize");
  return vectorize(target);
}

// CURRENT logic for transform.structured.vectorize_children_and_apply_patterns
LogicalResult process_vectorize_children_and_apply_patterns_td(Operation *target) {
  // NOTE - this condition is unnecessarily restrictive and is being relaxed in this PR
  if (!isa<linalg::LinalgOp>(target))
    return rewriter.notifyMatchFailure(op, "expected Linalg Op");
  return vectorize(target);
}

After this change, we will have this:

bool isVectorizable(Operation *op) {
  return isa<linalg::LinalgOp, tensor::PadOp, tensor::PackOp, tensor::UnPackOp>(
      op);
}

// NEW logic for transform.structured.vectorize
LogicalResult process_vectorize_td(Operation *target) {
  if (!isVectorizable(target))
    return mlir::emitSilenceableFailure(target->getLoc())
           << "Unsupported Op, cannot vectorize";
  return vectorize(target);
}

// NEW logic for transform.structured.vectorize_children_and_apply_patterns
LogicalResult
process_vectorize_children_and_apply_patterns_td(Operation *target) {
  if (!isVectorizable(target))
    return mlir::emitSilenceableFailure(target->getLoc())
           << "Unsupported Op, cannot vectorize";
  return vectorize(target);
}

Put differently, this PR aims to achieve two things:
* avoid duplicating the same isa<...> check in the two TD ops, and
* make transform.structured.vectorize_children_and_apply_patterns accept the same set of Ops as transform.structured.vectorize.
Yes, there are effectively two levels of verification (note that this PR doesn't change that). Here's pseudo-code to explain this:

// Verification 1: At the level of transform.structured.vectorize (very crude, initial filtering)
LogicalResult process_vectorize_td(Operation *target) {
  if (!isVectorizable(target))
    return rewriter.notifyMatchFailure(op, "Unsupported Op, cannot vectorize");
  return vectorize(target);
}

// Verification 2: At the level of linalg::vectorize (very detailed)
LogicalResult vectorize(Operation *op) {
  if (failed(vectorizeOpPrecondition(op))) {
    LDBG("Vectorization pre-conditions failed\n");
    return failure();
  }
  // ...
}

Basically, the new hook is only the crude first-level filter; the detailed checks stay in vectorizeOpPrecondition.
I can re-wire things, but would rather start with this small change :) I have a few other changes in flight that I'd like to prioritise (related to vectorising).

(*) You are right that even some Linalg Ops don't qualify for vectorization (at least today). The idea behind the new hook is only to reject, early on, the Ops for which the vectorizer has no implementation at all; the detailed pre-conditions are still checked separately.

Hope I didn't miss anything 😅
Note - I've added two tests for `transform.structured.vectorize_children_and_apply_patterns`.
The newly added hook simply returns `false` for Ops for which there's no "vectorization logic" in the Linalg Vectorizer (i.e. the `vectorize` method). It's added so that the following two TD ops expose identical level of functionality (that's not the case ATM): * `transform.structured.vectorize_children_and_apply_patterns` * `transform.structured.vectorize`
Add tests to demonstrate the new functionality added to `transform.structured.vectorize_children_and_apply_patterns`.
Force-pushed from 5b0db64 to 778fd1c.
I see your point, thanks for the details. It's very helpful! I'm generally +1 on removing the dup code between transform ops and pattern rewriting. It reduces the probability that one is supported in transform ops but not pattern rewriting. I think `hasVectorizationImpl` would be a clearer name.
Rename according to @hanhanW's suggestion
Agreed, thank you for the suggestion, updated :) 🙏🏻
LGTM, thanks! Please wait for an additional approval before landing it. Others could have different comments.
LG. Just one comment: shouldn't we use that check within the vectorizer preconditions?
Rename `isVectorizable` to `hasVectorizationImpl`
Use hasVectorizationImpl in vectorizeOpPrecondition
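For context, that follow-up change amounts to an early bail-out at the top of the precondition check. A rough sketch only, with the signature abbreviated; the exact upstream wiring may differ:

// Sketch: reject Ops without a vectorization implementation before running
// any detailed, op-specific analysis (indexing maps, reductions, masking, ...).
LogicalResult vectorizeOpPrecondition(Operation *op /*, other params elided */) {
  if (!linalg::hasVectorizationImpl(op))
    return failure();
  // ... detailed per-op pre-condition checks follow ...
  return success();
}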
@rengolin I will assume that you are OK with this and land tomorrow. Do let me know if my assumption is incorrect :)
The newly added hook simply returns `false` for Ops for which there's no "vectorization logic" in the Linalg Vectorizer (i.e. the `vectorize()` method). It's added so that the following two TD ops expose identical level of functionality (that's not the case ATM):
* `transform.structured.vectorize_children_and_apply_patterns`
* `transform.structured.vectorize`

Specifically, ATM, the former works only for Linalg Ops, while the latter works for all Ops that the vectorizer supports (*). With this change, I am making sure that both TD ops will behave consistently.

Note, this shouldn't affect any of the current uses of the vectorizer.

(*) This is implemented via the `vectorize()` method in Vectorization.cpp.
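For downstream users, a minimal sketch of the intended usage pattern is shown below: query the new hook before attempting vectorization. Only `hasVectorizationImpl` and `vectorize` come from the Linalg API in the diff above; the helper name `tryVectorize` and the surrounding boilerplate are illustrative assumptions:

// Illustrative sketch (not from the PR): guard a vectorization attempt with
// the new hook. hasVectorizationImpl() is only the coarse filter; vectorize()
// still performs its own detailed pre-condition checks and may fail.
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"

static mlir::LogicalResult tryVectorize(mlir::RewriterBase &rewriter,
                                        mlir::Operation *op) {
  if (!mlir::linalg::hasVectorizationImpl(op))
    return mlir::failure();
  // Mirrors the call shape used in the updated VectorizationPattern above.
  return mlir::linalg::vectorize(rewriter, op, /*inputVectorSizes=*/{},
                                 /*inputScalableVecDims=*/{},
                                 /*vectorizeNDExtract=*/false,
                                 /*flatten1DDepthwiseConv=*/false);
}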