[mlir][memref]: Fix Bug in GlobalOp Verifier #144900
@llvm/pr-subscribers-mlir-memref

Author: Jack Frankland (FranklandJack)

Changes: When reconstructing the corresponding tensor type of a memref, ensure we include the memory space of the tensor if it exists.

Full diff: https://github.com/llvm/llvm-project/pull/144900.diff (1 file affected)
diff --git a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
index 11597505e7888..5db4ea30c25f7 100644
--- a/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
+++ b/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp
@@ -59,7 +59,8 @@ LogicalResult mlir::memref::foldMemRefCast(Operation *op, Value inner) {
/// type.
Type mlir::memref::getTensorTypeFromMemRefType(Type type) {
if (auto memref = llvm::dyn_cast<MemRefType>(type))
- return RankedTensorType::get(memref.getShape(), memref.getElementType());
+ return RankedTensorType::get(memref.getShape(), memref.getElementType(),
+ memref.getMemorySpace());
if (auto memref = llvm::dyn_cast<UnrankedMemRefType>(type))
return UnrankedTensorType::get(memref.getElementType());
return NoneType::get(type.getContext());
@llvm/pr-subscribers-mlir
Thanks! Please add a test :)
Yes, in fact we can add two tests: one to check we can round-trip a global memref with a tensor initializer that includes an encoding value, and one to check that we produce the correct error message when the types mismatch. Thanks for the review.
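As an illustrative sketch only (these are not the PR's actual test files, and the exact syntax, names, and diagnostic text are assumptions), the two cases could look roughly like:

```mlir
// Round trip: a global whose initializer matches on shape and element type
// (names and values are made up for illustration).
memref.global "private" constant @round_trip : memref<4xf32> =
    dense<[0.0, 1.0, 2.0, 3.0]>

// Type mismatch: the verifier should emit a diagnostic when the
// initializer's element type differs from the global's result type.
// expected-error: initial value type must match the global's type
memref.global "private" constant @mismatch : memref<4xi32> =
    dense<[0.0, 1.0, 2.0, 3.0]> : tensor<4xf32>
```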
@@ -59,7 +59,8 @@ LogicalResult mlir::memref::foldMemRefCast(Operation *op, Value inner) {
 /// type.
 Type mlir::memref::getTensorTypeFromMemRefType(Type type) {
   if (auto memref = llvm::dyn_cast<MemRefType>(type))
-    return RankedTensorType::get(memref.getShape(), memref.getElementType());
+    return RankedTensorType::get(memref.getShape(), memref.getElementType(),
+                                 memref.getMemorySpace());
OK, so this is setting the tensor encoding based on the memory space 🤔 This looks good and odd at the same time 😅

@matthias-springer, could you take a look with your expert eye? Based on `-use-encoding-for-memory-space` in https://mlir.llvm.org/docs/Passes/#-one-shot-bufferize, this would make sense. But it still feels like a very strong assumption, so I'm not sure.
Yes, agreed, this feels a bit odd, so input on the best way forward is very much appreciated here :D

Ultimately there is a question here about whether the encoding/address space of a tensor/memref needs to match its initializer. In the case of TOSA the answer is no (at least that isn't something the `tosa.const` verifier checks); in the case of `arith.constant` the answer seems to be yes: the operation inherits from an interface that requires they be the same.

For bufferization it seems like we should allow differences between the address space of the memref and the encoding of the tensor that initializes a global memref op, since you may want to convert the memref's address space (as we do for SPIR-V) and we shouldn't need to go back and update the initializer just so that the verifier's type check passes.
This PR assumes that tensor encoding and memref memory space are the same thing. That's not generally the case. Some compilers use that convention during bufferization, but not all do. E.g., the sparse compiler uses the tensor encoding for different purposes.
I don't quite follow what bug this PR is fixing. Why is it desirable that the tensor attribute encoding and the memref memory space match?
Yeah, sorry, the issue here is a little bit subtle. (The concrete example, the IR snippet, and the pass pipeline in this comment were code blocks that did not survive; only the surrounding prose remains.) Running the example will cause a verification error due to the situation described above. I feel like there might be other solutions here.
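Since the original snippets are gone, here is a hypothetical reproducer for the failure mode described (syntax, names, and values are assumptions, not the PR's actual example):

```mlir
// Hypothetical post-bufferization IR: the global's memref type carries
// memory space 1, while the tensor initializer carries no encoding.
// A verifier that reconstructs the tensor type using the memory space as
// the encoding and then compares full types rejects this, even though the
// shape and element type agree.
memref.global "private" constant @gv : memref<4xf32, 1> =
    dense<[0.0, 1.0, 2.0, 3.0]>
```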
I would change the verifier to compare only the element type and the shape.
Sounds good to me. I've updated the verifier to do exactly that and added specific test cases for when the element types mismatch and when the shapes mismatch. I've also added a round-trip test to check we can now construct a memref in a given address space from an initializer with no matching encoding. Let me know if you would like any further changes.
When comparing the type of the initializer in a `memref::GlobalOp` against its result only consider the element type and the shape. Other attributes such as memory space should be ignored since comparing these between tensors and memrefs doesn't make sense and constructing a memref in a specific memory space with a tensor that has no such attribute should be valid. Signed-off-by: Jack Frankland <[email protected]>
Thanks, some suggestions inline.

@FranklandJack, I noticed that you force-pushed and erased the history. Please try to avoid that: in particular, some earlier comments refer to the original revision, but that context is now gone, making it hard for folks referring back to this discussion in the future to follow it. Not a big deal; it is hard to keep track of all the LLVM policies/guidelines. And, TBH, many people do force-push (e.g. to rebase on top of main), but even then we should preserve the original commits. Most importantly, thanks for fixing this!
When comparing the type of the initializer in a `memref::GlobalOp` against its result only consider the element type and the shape. Other attributes such as memory space should be ignored since comparing these between tensors and memrefs doesn't make sense and constructing a memref in a specific memory space with a tensor that has no such attribute should be valid. Signed-off-by: Jack Frankland <[email protected]>
When reconstructing the corresponding tensor type of a memref ensure we include the memory space of the tensor if it exists.