
[MLIR][Vector] Allow non-default memory spaces in gather/scatter lowerings #67500


Merged — 1 commit merged into llvm:main on Sep 28, 2023

Conversation

qedawkins
Contributor

GPU targets can gather on non-default address spaces (e.g. global), so this removes the check for the default memory space.

@llvmbot llvmbot added the mlir label Sep 26, 2023
@qedawkins qedawkins changed the title [Vector] Allow non-default memory spaces in gather/scatter lowerings [MLIR][Vector] Allow non-default memory spaces in gather/scatter lowerings Sep 26, 2023
@llvmbot
Member

llvmbot commented Sep 26, 2023

@llvm/pr-subscribers-mlir

Changes

GPU targets can gather on non-default address spaces (e.g. global), so this removes the check for the default memory space.


Full diff: https://github.com/llvm/llvm-project/pull/67500.diff

2 Files Affected:

  • (modified) mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp (+2-4)
  • (modified) mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir (+14)
diff --git a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
index 3f77c5b5f24e9b5..8427d60f14c0bcc 100644
--- a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
+++ b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
@@ -87,14 +87,12 @@ LogicalResult getMemRefAlignment(const LLVMTypeConverter &typeConverter,
   return success();
 }
 
-// Check if the last stride is non-unit or the memory space is not zero.
+// Check if the last stride is non-unit and has a valid memory space.
 static LogicalResult isMemRefTypeSupported(MemRefType memRefType,
                                            const LLVMTypeConverter &converter) {
   if (!isLastMemrefDimUnitStride(memRefType))
     return failure();
-  FailureOr<unsigned> addressSpace =
-      converter.getMemRefAddressSpace(memRefType);
-  if (failed(addressSpace) || *addressSpace != 0)
+  if (failed(converter.getMemRefAddressSpace(memRefType)))
     return failure();
   return success();
 }
diff --git a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
index 41ab06f2e23b501..0d075bca2e3c45c 100644
--- a/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
+++ b/mlir/test/Conversion/VectorToLLVM/vector-to-llvm.mlir
@@ -2108,6 +2108,20 @@ func.func @gather_op(%arg0: memref<?xf32>, %arg1: vector<3xi32>, %arg2: vector<3
 
 // -----
 
+func.func @gather_op_global_memory(%arg0: memref<?xf32, 1>, %arg1: vector<3xi32>, %arg2: vector<3xi1>, %arg3: vector<3xf32>) -> vector<3xf32> {
+  %0 = arith.constant 0: index
+  %1 = vector.gather %arg0[%0][%arg1], %arg2, %arg3 : memref<?xf32, 1>, vector<3xi32>, vector<3xi1>, vector<3xf32> into vector<3xf32>
+  return %1 : vector<3xf32>
+}
+
+// CHECK-LABEL: func @gather_op
+// CHECK: %[[P:.*]] = llvm.getelementptr %{{.*}}[%{{.*}}] : (!llvm.ptr<1>, vector<3xi32>) -> !llvm.vec<3 x ptr<1>>, f32
+// CHECK: %[[G:.*]] = llvm.intr.masked.gather %[[P]], %{{.*}}, %{{.*}} {alignment = 4 : i32} : (!llvm.vec<3 x ptr<1>>, vector<3xi1>, vector<3xf32>) -> vector<3xf32>
+// CHECK: return %[[G]] : vector<3xf32>
+
+// -----
+
+
 func.func @gather_op_index(%arg0: memref<?xindex>, %arg1: vector<3xindex>, %arg2: vector<3xi1>, %arg3: vector<3xindex>) -> vector<3xindex> {
   %0 = arith.constant 0: index
   %1 = vector.gather %arg0[%0][%arg1], %arg2, %arg3 : memref<?xindex>, vector<3xindex>, vector<3xi1>, vector<3xindex> into vector<3xindex>

Contributor

@dcaballe dcaballe left a comment


It looks OK to me, but I'm not sure why that check was added in the first place... I would wait just in case anybody else has some context...

@qedawkins
Contributor Author

That was my concern as well. I'll wait another day and then merge this if there are no comments.

@banach-space
Contributor

I'm not familiar with this code-path, but doesn't GPU lowering target SPIR-V instead?

@qedawkins
Contributor Author

I'm not familiar with this code-path, but doesn't GPU lowering target SPIR-V instead?

We can also target NVVM and ROCDL for GPU, both of which are based on LLVM-IR.
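To make the motivation concrete, here is a hedged sketch of the kind of IR this change unblocks, mirroring the test added in the diff. The function name `@gather_global` and argument names are illustrative; the point is the `memref<?xf32, 1>` operand, whose address-space attribute `1` corresponds to GPU global memory on the NVVM and ROCDL paths:

```mlir
// Before this patch, the non-zero memory space caused isMemRefTypeSupported
// to fail, so this op could not be lowered by the VectorToLLVM conversion.
// Afterwards it lowers to llvm.intr.masked.gather over !llvm.ptr<1> pointers.
func.func @gather_global(%base: memref<?xf32, 1>, %idx: vector<3xi32>,
                         %mask: vector<3xi1>, %pass: vector<3xf32>) -> vector<3xf32> {
  %c0 = arith.constant 0 : index
  %0 = vector.gather %base[%c0][%idx], %mask, %pass
      : memref<?xf32, 1>, vector<3xi32>, vector<3xi1>, vector<3xf32> into vector<3xf32>
  return %0 : vector<3xf32>
}
```

Running this through `mlir-opt -convert-vector-to-llvm` (the pass exercised by the modified test file) should now succeed rather than leave the op unconverted.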

GPU targets can gather on non-default address spaces (e.g. global),
so this removes the check for the default memory space.
@qedawkins qedawkins force-pushed the non_default_gather_mem_space branch from d811955 to d9264e6 on September 28, 2023 20:30
@qedawkins qedawkins merged commit 78c4974 into llvm:main Sep 28, 2023
legrosbuffle pushed a commit to legrosbuffle/llvm-project that referenced this pull request Sep 29, 2023
[MLIR][Vector] Allow non-default memory spaces in gather/scatter lowerings (llvm#67500)

GPU targets can gather on non-default address spaces (e.g. global), so
this removes the check for the default memory space.
Guzhu-AMD pushed a commit to GPUOpen-Drivers/llvm-project that referenced this pull request Oct 12, 2023
Local branch amd-gfx 9e2030a Merged main:de85739ded2e into amd-gfx:9fbf8eb57eae
Remote branch main 78c4974 [MLIR][Vector] Allow non-default memory spaces in gather/scatter lowerings (llvm#67500)
4 participants