
[MLIR][Bufferization] Choose default memory space in tensor copy insertion #88500

Merged
Groverkss merged 4 commits into llvm:main on Apr 12, 2024

Conversation

Groverkss
Member

Tensor copy insertion currently uses memory_space = 0 when creating a tensor copy with alloc_tensor. It should instead use the default memory space provided in the bufferization options.
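
For context, here is a minimal sketch (not part of this PR) of how a pipeline might configure that default memory space. The use of OneShotBufferizationOptions, the helper name, and the choice of memory space 1 are illustrative assumptions; defaultMemorySpaceFn itself is the hook this change consults.

#include "mlir/Dialect/Bufferization/Transforms/OneShotAnalysis.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"

#include <optional>

using namespace mlir;

// Hypothetical helper (not from this PR): ask bufferization to place new
// allocations in memory space 1 when no memory space is otherwise known.
static bufferization::OneShotBufferizationOptions makeCopyInsertionOptions() {
  bufferization::OneShotBufferizationOptions options;
  options.defaultMemorySpaceFn =
      [](TensorType type) -> std::optional<Attribute> {
    // Illustrative choice of default memory space: the i64 attribute 1.
    return IntegerAttr::get(IntegerType::get(type.getContext(), 64), 1);
  };
  return options;
}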

@llvmbot added the mlir and mlir:bufferization (Bufferization infrastructure) labels on Apr 12, 2024
@llvmbot
Member

llvmbot commented on Apr 12, 2024

@llvm/pr-subscribers-mlir-sparse
@llvm/pr-subscribers-mlir-bufferization

@llvm/pr-subscribers-mlir

Author: Kunwar Grover (Groverkss)

Changes

Tensor copy insertion currently uses memory_space = 0 when creating a tensor copy with alloc_tensor. It should instead use the default memory space provided in the bufferization options.


Full diff: https://github.com/llvm/llvm-project/pull/88500.diff

2 Files Affected:

  • (modified) mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp (+7-4)
  • (added) mlir/test/Dialect/Bufferization/Transforms/tensor-copy-insertion-memory-space-default.mlir (+14)
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
index 55c9299c58effd..46f2639c21cad2 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizableOpInterface.cpp
@@ -193,10 +193,13 @@ FailureOr<Value> bufferization::allocateTensorForShapedValue(
   FailureOr<BaseMemRefType> copyBufferType = getBufferType(tensor, options);
   if (failed(copyBufferType))
     return failure();
-  Attribute memorySpace = copyBufferType->getMemorySpace();
-  if (!memorySpace)
-    memorySpace = b.getI64IntegerAttr(0);
-  allocTensorOp.setMemorySpaceAttr(memorySpace);
+  std::optional<Attribute> memorySpace = copyBufferType->getMemorySpace();
+  if (!memorySpace) {
+    memorySpace = options.defaultMemorySpaceFn(tensorType);
+  }
+  if (memorySpace.has_value()) {
+    allocTensorOp.setMemorySpaceAttr(memorySpace.value());
+  }
   return allocTensorOp.getResult();
 }
 
diff --git a/mlir/test/Dialect/Bufferization/Transforms/tensor-copy-insertion-memory-space-default.mlir b/mlir/test/Dialect/Bufferization/Transforms/tensor-copy-insertion-memory-space-default.mlir
new file mode 100644
index 00000000000000..e33c95c3710f85
--- /dev/null
+++ b/mlir/test/Dialect/Bufferization/Transforms/tensor-copy-insertion-memory-space-default.mlir
@@ -0,0 +1,14 @@
+// RUN: mlir-opt %s -test-tensor-copy-insertion -split-input-file | FileCheck %s
+
+// -----
+
+// CHECK-LABEL: func @alloc_tensor_default_memory_space
+func.func @alloc_tensor_default_memory_space() -> (tensor<10xf32>, tensor<10xf32>) {
+  %c0 = arith.constant 0 : index
+  %cst = arith.constant 0.0 : f32
+  // CHECK: bufferization.alloc_tensor() : tensor<10xf32>
+  %t = bufferization.alloc_tensor() : tensor<10xf32>
+  // CHECK: bufferization.alloc_tensor() : tensor<10xf32>
+  %s = tensor.insert %cst into %t[%c0] : tensor<10xf32>
+  return %s, %t : tensor<10xf32>, tensor<10xf32>
+}
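
For completeness, a hedged sketch of driving copy insertion programmatically with such options; the insertTensorCopies entry point, its header location, and the driver name are assumptions about the bufferization transforms, not something introduced by this PR. With this patch, the copies it creates pick up the configured default memory space instead of a hard-coded memory_space = 0.

#include "mlir/Dialect/Bufferization/Transforms/Transforms.h" // assumed header for insertTensorCopies

using namespace mlir;

// Hypothetical driver, reusing the makeCopyInsertionOptions() sketch above.
static LogicalResult runTensorCopyInsertion(Operation *op) {
  bufferization::OneShotBufferizationOptions options = makeCopyInsertionOptions();
  // After this patch, alloc_tensor copies created during insertion are tagged
  // with the memory space returned by defaultMemorySpaceFn rather than a
  // hard-coded memory_space = 0.
  return bufferization::insertTensorCopies(op, options);
}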

@llvmbot added the mlir:sparse (Sparse compiler in MLIR) label on Apr 12, 2024
@Groverkss merged commit 6f1e23b into llvm:main on Apr 12, 2024
bazuzi pushed a commit to bazuzi/llvm-project that referenced this pull request on Apr 15, 2024
…rtion (llvm#88500)

Tensor copy insertion currently uses memory_space = 0 when creating a
tensor copy using alloc_tensor. This memory space should instead be the
default memory space provided in bufferization options.