[mlir][bufferization] Add BufferizableOpInterface::hasTensorSemantics
#75273
Conversation
@llvm/pr-subscribers-mlir @llvm/pr-subscribers-mlir-bufferization

Author: Matthias Springer (matthias-springer)

Changes

Add a new interface method to `BufferizableOpInterface`: `hasTensorSemanticsForBufferization`. This method returns "true" if the op has tensor semantics and should be bufferized.

Until now, we assumed that an op has tensor semantics if it has tensor operands and/or tensor op results. However, there are ops like `ml_program.global` that do not have any results/operands but must still be bufferized (#75103). The new interface method can return "true" for such ops.

This change also decouples `bufferization::bufferizeOp` a bit from the func dialect.

Full diff: https://github.com/llvm/llvm-project/pull/75273.diff

6 Files Affected:
diff --git a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
index 7c09a43f96397b..d3dc5683772e27 100644
--- a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
+++ b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h
@@ -601,6 +601,12 @@ FailureOr<BaseMemRefType> getBufferType(Value value,
const BufferizationOptions &options,
SmallVector<Value> &invocationStack);
+/// Return "true" if the given op has tensor semantics and should be bufferized.
+/// If the op is bufferizable, the BufferizableOpInterface is queried.
+/// Otherwise, an op has tensor semantics if it has tensor operands, tensor
+/// op results and/or tensor block arguments.
+bool hasTensorSemanticsForBufferization(Operation *op);
+
/// Replace an op with replacement values. The op is deleted. Tensor OpResults
/// must be replaced with memref values.
void replaceOpWithBufferizedValues(RewriterBase &rewriter, Operation *op,
@@ -694,6 +700,10 @@ AliasingOpOperandList unknownGetAliasingOpOperands(Value value);
/// This is the default implementation of getAliasingValues in case the owner
/// op does not implement the BufferizableOpInterface.
AliasingValueList unknownGetAliasingValues(OpOperand &opOperand);
+
+/// This is the default implementation of
+/// BufferizableOpInterface::hasTensorSemanticsForBufferization.
+bool defaultHasTensorSemanticsForBufferization(Operation *op);
} // namespace detail
} // namespace bufferization
diff --git a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.td b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.td
index fd1ceb68af5dd9..e7445cb4e63da2 100644
--- a/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.td
+++ b/mlir/include/mlir/Dialect/Bufferization/IR/BufferizableOpInterface.td
@@ -575,6 +575,28 @@ def BufferizableOpInterface : OpInterface<"BufferizableOpInterface"> {
return false;
}]
>,
+ InterfaceMethod<
+ /*desc=*/[{
+      Return "true" if this op has tensor semantics and should be
+      bufferized. By default, ops with tensor operands, tensor op results
+      and/or tensor block arguments have tensor semantics.
+
+      This interface method can be implemented by ops that should be
+      bufferized but do not have tensor semantics according to the above
+      definition. E.g., this function can return "true" for symbols.
+
+ TODO: This interface method should be called `hasTensorSemantics`, but
+ that name is already in use in `DestinationStyleOpInterface`.
+ }],
+ /*retType=*/"bool",
+ /*methodName=*/"hasTensorSemanticsForBufferization",
+ /*args=*/(ins),
+ /*methodBody=*/"",
+ /*defaultImplementation=*/[{
+ return ::mlir::bufferization::detail
+ ::defaultHasTensorSemanticsForBufferization($_op.getOperation());
+ }]
+ >,
StaticInterfaceMethod<
/*desc=*/[{
Return `true` if the op and this interface implementation supports
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
index 1e8dc4387ed4f0..8f577947f8ddbf 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
@@ -689,6 +689,12 @@ bufferization::getBufferType(Value value, const BufferizationOptions &options,
*options.defaultMemorySpace);
}
+bool bufferization::hasTensorSemanticsForBufferization(Operation *op) {
+ if (auto bufferizableOp = dyn_cast<BufferizableOpInterface>(op))
+ return bufferizableOp.hasTensorSemanticsForBufferization();
+ return detail::defaultHasTensorSemanticsForBufferization(op);
+}
+
void bufferization::replaceOpWithBufferizedValues(RewriterBase &rewriter,
Operation *op,
ValueRange values) {
@@ -989,3 +995,22 @@ bufferization::detail::unknownGetAliasingValues(OpOperand &opOperand) {
r.addAlias({bbArg, BufferRelation::Unknown, /*isDefinite=*/false});
return r;
}
+
+static bool isaTensor(Type t) { return isa<TensorType>(t); }
+
+bool bufferization::detail::defaultHasTensorSemanticsForBufferization(
+ Operation *op) {
+ bool hasTensorBlockArgument = any_of(op->getRegions(), [](Region &r) {
+ return any_of(r.getBlocks(), [](Block &b) {
+ return any_of(b.getArguments(), [](BlockArgument bbArg) {
+ return isaTensor(bbArg.getType());
+ });
+ });
+ });
+ if (hasTensorBlockArgument)
+ return true;
+
+ bool hasTensorResult = any_of(op->getResultTypes(), isaTensor);
+ bool hasTensorOperand = any_of(op->getOperandTypes(), isaTensor);
+ return hasTensorResult || hasTensorOperand;
+}
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/Bufferize.cpp b/mlir/lib/Dialect/Bufferization/Transforms/Bufferize.cpp
index f2125feeda5415..79be7bed79db38 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/Bufferize.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/Bufferize.cpp
@@ -350,31 +350,6 @@ mlir::bufferization::createFinalizingBufferizePass() {
// BufferizableOpInterface-based Bufferization
//===----------------------------------------------------------------------===//
-static bool isaTensor(Type t) { return isa<TensorType>(t); }
-
-/// Return true if the given op has a tensor result or a tensor operand.
-static bool hasTensorSemantics(Operation *op) {
- bool hasTensorBlockArgument = any_of(op->getRegions(), [](Region &r) {
- return any_of(r.getBlocks(), [](Block &b) {
- return any_of(b.getArguments(), [](BlockArgument bbArg) {
- return isaTensor(bbArg.getType());
- });
- });
- });
- if (hasTensorBlockArgument)
- return true;
-
- if (auto funcOp = dyn_cast<FunctionOpInterface>(op)) {
- bool hasTensorArg = any_of(funcOp.getArgumentTypes(), isaTensor);
- bool hasTensorResult = any_of(funcOp.getResultTypes(), isaTensor);
- return hasTensorArg || hasTensorResult;
- }
-
- bool hasTensorResult = any_of(op->getResultTypes(), isaTensor);
- bool hasTensorOperand = any_of(op->getOperandTypes(), isaTensor);
- return hasTensorResult || hasTensorOperand;
-}
-
namespace {
/// A rewriter that keeps track of extra information during bufferization.
class BufferizationRewriter : public IRRewriter, public RewriterBase::Listener {
@@ -417,7 +392,7 @@ class BufferizationRewriter : public IRRewriter, public RewriterBase::Listener {
return;
// Skip non-tensor ops.
- if (!hasTensorSemantics(op))
+ if (!hasTensorSemanticsForBufferization(op))
return;
// Skip ops that are not allowed to be bufferized.
@@ -470,7 +445,7 @@ LogicalResult bufferization::bufferizeOp(Operation *op,
// canonicalize away (or canonicalize to more precise layouts).
SmallVector<Operation *> worklist;
op->walk<WalkOrder::PostOrder>([&](Operation *op) {
- if (hasTensorSemantics(op))
+ if (hasTensorSemanticsForBufferization(op))
worklist.push_back(op);
});
@@ -492,7 +467,7 @@ LogicalResult bufferization::bufferizeOp(Operation *op,
if (!options.isOpAllowed(nextOp))
continue;
// Skip ops that no longer have tensor semantics.
- if (!hasTensorSemantics(nextOp))
+ if (!hasTensorSemanticsForBufferization(nextOp))
continue;
// Check for unsupported unstructured control flow.
if (!bufferizableOp.supportsUnstructuredControlFlow())
@@ -546,7 +521,7 @@ LogicalResult bufferization::bufferizeOp(Operation *op,
continue;
// Ops that no longer have tensor semantics (because they were updated
// in-place) are allowed.
- if (!hasTensorSemantics(op))
+ if (!hasTensorSemanticsForBufferization(op))
continue;
// Continue ops that are not allowed.
if (!options.isOpAllowed(op))
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/FuncBufferizableOpInterfaceImpl.cpp b/mlir/lib/Dialect/Bufferization/Transforms/FuncBufferizableOpInterfaceImpl.cpp
index 3a8c397c02a809..5aab035f9f30fd 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/FuncBufferizableOpInterfaceImpl.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/FuncBufferizableOpInterfaceImpl.cpp
@@ -325,6 +325,28 @@ struct FuncOpInterface
static bool supportsUnstructuredControlFlow() { return true; }
+ bool hasTensorSemanticsForBufferization(Operation *op) const {
+ auto isaTensor = [](Type type) { return isa<TensorType>(type); };
+
+ // A function has tensor semantics if it has tensor arguments/results.
+ auto funcOp = cast<FuncOp>(op);
+ bool hasTensorArg = any_of(funcOp.getArgumentTypes(), isaTensor);
+ bool hasTensorResult = any_of(funcOp.getResultTypes(), isaTensor);
+ if (hasTensorArg || hasTensorResult)
+ return true;
+
+ // It also has tensor semantics if it has tensor block arguments.
+ // TODO: Decouple bufferization of unstructured control flow from
+ // BufferizableOpInterface implementations. We should only care about
+ // region entry block arguments here (which are already covered by the
+ // argument types of the function).
+ for (Block &block : funcOp.getBody())
+ if (any_of(block.getArgumentTypes(), isaTensor))
+ return true;
+
+ return false;
+ }
+
AliasingOpOperandList
getAliasingOpOperands(Operation *op, Value value,
const AnalysisState &state) const {
diff --git a/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp b/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp
index 1404ed8f43f964..aeda995fd585aa 100644
--- a/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp
+++ b/mlir/lib/Dialect/Bufferization/Transforms/OneShotModuleBufferize.cpp
@@ -458,6 +458,15 @@ LogicalResult mlir::bufferization::bufferizeModuleOp(
foldMemRefCasts(funcOp);
}
+ // Bufferize all other ops.
+ for (Operation &op : moduleOp.getOps()) {
+ // Functions were already bufferized.
+ if (isa<func::FuncOp>(&op))
+ continue;
+ if (failed(bufferizeOp(&op, options, statistics)))
+ return failure();
+ }
+
// Post-pass cleanup of function argument attributes.
removeBufferizationAttributesInModule(moduleOp);
    return true;

  bool hasTensorResult = any_of(op->getResultTypes(), isaTensor);
  bool hasTensorOperand = any_of(op->getOperandTypes(), isaTensor);
I was wondering: is outputs only sufficient?
In practice, results are usually sufficient. But we could have a hypothetical op such as `test.dump_tensor`, which takes a tensor operand and has no result.
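In MLIR assembly, such a hypothetical op might look like the following sketch (written in generic op form; `test.dump_tensor` is not a real upstream op):

```mlir
func.func @print_it(%t: tensor<4xf32>) {
  // Tensor operand, no results: a results-only check would miss this op,
  // yet it still needs to be bufferized.
  "test.dump_tensor"(%t) : (tensor<4xf32>) -> ()
  return
}
```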
  // Bufferize all other ops.
  for (Operation &op : moduleOp.getOps()) {
    // Functions were already bufferized.
    if (isa<func::FuncOp>(&op))
Should this be on the function-like interface?
For consistency reasons, `func::FuncOp` would be better here. Other function-like ops are not supported at the moment (they will fail to bufferize) and we exclusively use `func::FuncOp`, `func::CallOp`, `func::ReturnOp` in this file at the moment.
It should really be called `hasTensorSemantics`, but that name is already in use in `DestinationStyleOpInterface`. Side note: This is not the first time that I'm running into such naming issues. There is an inconsistency in MLIR when (a) implementing an interface directly on an op and (b) implementing an interface with an external model. In the latter case, the interface methods are properly encapsulated: you have to explicitly cast to the interface to call them.
This PR now depends on #77574, which aligns the naming in `DestinationStyleOpInterface` with the bufferization framework.

For all ops that implement both interfaces, one implementation must be selected explicitly. See comment in `MaterializeInDestinationOp`:

```c++
// Both `DestinationStyleOpInterface` and `BufferizableOpInterface` define
// `hasTensorSemantics`. Both return the same result, but we have to choose
// one to disambiguate the method lookup.
using DestinationStyleOpInterface::Trait<MaterializeInDestinationOp>
    ::hasTensorSemantics;
```

It is possible to have different implementations as long as at least one of the two interfaces is implemented as an external model. In such a case, the functions of the external model can only be accessed by explicitly casting to the interface, and at that point it is clear which implementation is meant. In practice, I don't think that we will see cases where an op implements the two interfaces (external model or not) but has different implementations of `hasTensorSemantics`.
…fferSemantics`

Rename interface functions as follows:
* `hasTensorSemantics` -> `hasPureTensorSemantics`
* `hasBufferSemantics` -> `hasPureBufferSemantics`

These two functions return "true" if the op has tensor/buffer operands but not buffer/tensor operands.

Add two new interface functions:
* `hasTensorSemantics`: Return "true" if the op has tensor operands. Whether the op has buffer operands or not does not matter.
* `hasBufferSemantics`: Return "true" if the op has buffer operands. Whether the op has tensor operands or not does not matter.

Also drop the "ranked" part from the interface, i.e., do not distinguish between ranked/unranked types.

This change aligns the meaning of "tensor semantics" with the bufferization framework. (An op is supposed to be bufferized if it has tensor operands, and we don't care if it also has memref operands.)

This change is in preparation of llvm#75273, which adds `BufferizableOpInterface::hasTensorSemantics`.
…ufferSemantics` (#77574)

Rename interface functions as follows:
* `hasTensorSemantics` -> `hasPureTensorSemantics`
* `hasBufferSemantics` -> `hasPureBufferSemantics`

These two functions return "true" if the op has tensor/buffer operands but not buffer/tensor operands.

Also drop the "ranked" part from the interface, i.e., do not distinguish between ranked/unranked types.

The new function names describe the functions more accurately. They also align their semantics with the notion of "tensor semantics" in the bufferization framework. (An op is supposed to be bufferized if it has tensor operands, and we don't care if it also has memref operands.)

This change is in preparation of #75273, which adds `BufferizableOpInterface::hasTensorSemantics`. By renaming the functions in the `DestinationStyleOpInterface`, we can avoid name clashes between the two interfaces.
Looks good, thanks!
Add a new interface method to `BufferizableOpInterface`: `hasTensorSemanticsForBufferization`. This method returns "true" if the op has tensor semantics and should be bufferized.

Until now, we assumed that an op has tensor semantics if it has tensor operands and/or tensor op results. However, there are ops like `ml_program.global` that do not have any results/operands but must still be bufferized (llvm#75103). The new interface method can return "true" for such ops.

This change also decouples `bufferization::bufferizeOp` a bit from the func dialect.
…s` (llvm#75273)

Add a new interface method to `BufferizableOpInterface`: `hasTensorSemantics`. This method returns "true" if the op has tensor semantics and should be bufferized.

Until now, we assumed that an op has tensor semantics if it has tensor operands and/or tensor op results. However, there are ops like `ml_program.global` that do not have any results/operands but must still be bufferized (llvm#75103). The new interface method can return "true" for such ops.

This change also decouples `bufferization::bufferizeOp` a bit from the func dialect.
Add a new interface method to `BufferizableOpInterface`: `hasTensorSemantics`. This method returns "true" if the op has tensor semantics and should be bufferized.

Until now, we assumed that an op has tensor semantics if it has tensor operands and/or tensor op results. However, there are ops like `ml_program.global` that do not have any results/operands but must still be bufferized (#75103). The new interface method can return "true" for such ops.

This change also decouples `bufferization::bufferizeOp` a bit from the func dialect.

Depends on #77574.