[mlir][vector] Update tests for xfer-permute-lowering (nfc) #101468

Merged

Conversation

Contributor

@banach-space banach-space commented Aug 1, 2024

Updates formatting and variable names in:

  • vector-transfer-permutation-lowering.mlir

This is primarily to improve consistency, both within this particular
test file and across tests. In particular, with this PR I'm adopting a
naming convention similar to the one already used in
vector-transfer-flatten.mlir.

Overview of changes:

  • All memref input arguments are renamed to %mem.
  • All vector input arguments are renamed to %vec.
  • All tensor input arguments are renamed to %dest.
  • LIT variables are updated to match the input argument names.
  • All output arguments are renamed to %res.
  • Indentation is updated to be more C-like (see the sketch below).
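
A minimal sketch of the resulting convention (the function name `@hypothetical_example` is made up; the shapes, attributes, and CHECK lines mirror one of the updated tests): vector and memref arguments become `%vec` and `%mem`, the LIT captures use matching names, and the attribute dictionary is indented C-style.

```mlir
// CHECK-LABEL:   func.func @hypothetical_example
// CHECK-SAME:      %[[VEC:.*]]: vector<4x8xi16>,
// CHECK-SAME:      %[[MEM:.*]]: memref<2x2x8x4xi16>) {
func.func @hypothetical_example(
    %vec: vector<4x8xi16>,
    %mem: memref<2x2x8x4xi16>) {

  // Input arguments follow the %vec/%mem naming; the LIT captures above match.
  %c0 = arith.constant 0 : index
  vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0] {
    in_bounds = [true, true],
    permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, d2)>
  } : vector<4x8xi16>, memref<2x2x8x4xi16>

  return
}
```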

Member

llvmbot commented Aug 1, 2024

@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

Changes

Updates formatting and variable names in:

  • vector-transfer-permutation-lowering.mlir

This is primarily to improve consistency, both within this particular
test file and across tests. In particular, with this PR I'm adopting a
naming convention similar to the one already used in
vector-transfer-flatten.mlir.

Overview of changes:

  • All memref input arguments are renamed to %mem.
  • All vector input arguments are renamed to %vec.
  • All tensor input arguments are renamed to %dest.
  • LIT variables are updated to match the input argument names.
  • All output arguments are renamed to %res.
  • Indentation is updated to be more C-like.

Patch is 25.99 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/101468.diff

2 Files Affected:

  • (modified) mlir/test/Dialect/Vector/vector-transfer-flatten.mlir (+2)
  • (modified) mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir (+169-100)
diff --git a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
index 621baef82319f..31c4775696b31 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
@@ -1,6 +1,8 @@
 // RUN: mlir-opt %s -test-vector-transfer-flatten-patterns -split-input-file | FileCheck %s
 // RUN: mlir-opt %s -test-vector-transfer-flatten-patterns=target-vector-bitwidth=128 -split-input-file | FileCheck %s --check-prefix=CHECK-128B
 
+// TODO: Align naming and format with e.g. vector-transfer-permutation-lowering.mlir
+
 ///----------------------------------------------------------------------------------------
 /// vector.transfer_read
 /// [Pattern: FlattenContiguousRowMajorTransferReadPattern]
diff --git a/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir b/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
index 3ca430a92cf97..4d7e957f154bd 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-permutation-lowering.mlir
@@ -1,9 +1,6 @@
 // RUN: mlir-opt %s --transform-interpreter --split-input-file | FileCheck %s
 
 // TODO: Align naming with e.g. vector-transfer-flatten.mlir
-// TODO: Replace %arg0 with %vec
-// TODO: Replace index args as %idx
-// TODO: Align argument definition in CHECKS with function body.
 
 ///----------------------------------------------------------------------------------------
 /// vector.transfer_write -> vector.transpose + vector.transfer_write
@@ -17,18 +14,18 @@
 ///     _is_ a minor identity
 
 // CHECK-LABEL:   func.func @xfer_write_transposing_permutation_map
-// CHECK-SAME:      %[[ARG_0:.*]]: vector<4x8xi16>,
+// CHECK-SAME:      %[[VEC:.*]]: vector<4x8xi16>,
 // CHECK-SAME:      %[[MEM:.*]]: memref<2x2x8x4xi16>) {
-// CHECK:           %[[TR:.*]] = vector.transpose %[[ARG_0]], [1, 0] : vector<4x8xi16> to vector<8x4xi16>
+// CHECK:           %[[TR:.*]] = vector.transpose %[[VEC]], [1, 0] : vector<4x8xi16> to vector<8x4xi16>
 // CHECK:           vector.transfer_write
 // CHECK-NOT:       permutation_map
 // CHECK-SAME:      %[[TR]], %[[MEM]]{{.*}} {in_bounds = [true, true]} : vector<8x4xi16>, memref<2x2x8x4xi16>
 func.func @xfer_write_transposing_permutation_map(
-    %arg0: vector<4x8xi16>,
+    %vec: vector<4x8xi16>,
     %mem: memref<2x2x8x4xi16>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %arg0, %mem[%c0, %c0, %c0, %c0] {
+  vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0] {
     in_bounds = [true, true],
     permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, d2)>
   } : vector<4x8xi16>, memref<2x2x8x4xi16>
@@ -38,10 +35,10 @@ func.func @xfer_write_transposing_permutation_map(
 
 // Even with out-of-bounds, it is safe to apply this pattern
 // CHECK-LABEL:   func.func @xfer_write_transposing_permutation_map_out_of_bounds
-// CHECK-SAME:      %[[ARG_0:.*]]: vector<4x8xi16>,
+// CHECK-SAME:      %[[VEC:.*]]: vector<4x8xi16>,
 // CHECK-SAME:      %[[MEM:.*]]: memref<2x2x?x?xi16>) {
 // CHECK:           %[[C0:.*]] = arith.constant 0 : index
-// CHECK:           %[[TR:.*]] = vector.transpose %[[ARG_0]], [1, 0] : vector<4x8xi16> to vector<8x4xi16>
+// CHECK:           %[[TR:.*]] = vector.transpose %[[VEC]], [1, 0] : vector<4x8xi16> to vector<8x4xi16>
 // Expect the in_bounds attribute to be preserved. Since we don't print it when
 // all flags are "false", it should not appear in the output.
 // CHECK-NOT:       in_bounds
@@ -49,11 +46,11 @@ func.func @xfer_write_transposing_permutation_map(
 // CHECK-NOT:       permutation_map
 // CHECK-SAME:      %[[TR]], %[[MEM]][%[[C0]], %[[C0]], %[[C0]], %[[C0]]] : vector<8x4xi16>, memref<2x2x?x?xi16>
 func.func @xfer_write_transposing_permutation_map_out_of_bounds(
-    %arg0: vector<4x8xi16>,
+    %vec: vector<4x8xi16>,
     %mem: memref<2x2x?x?xi16>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %arg0, %mem[%c0, %c0, %c0, %c0] {
+  vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0] {
     in_bounds = [false, false],
     permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, d2)>
   } : vector<4x8xi16>, memref<2x2x?x?xi16>
@@ -62,20 +59,20 @@ func.func @xfer_write_transposing_permutation_map_out_of_bounds(
 }
 
 // CHECK-LABEL:   func.func @xfer_write_transposing_permutation_map_with_mask_scalable
-// CHECK-SAME:      %[[ARG_0:.*]]: vector<4x[8]xi16>,
+// CHECK-SAME:      %[[VEC:.*]]: vector<4x[8]xi16>,
 // CHECK-SAME:      %[[MEM:.*]]: memref<2x2x?x4xi16>,
 // CHECK-SAME:      %[[MASK:.*]]: vector<[8]x4xi1>) {
-// CHECK:           %[[TR:.*]] = vector.transpose %[[ARG_0]], [1, 0] : vector<4x[8]xi16> to vector<[8]x4xi16>
+// CHECK:           %[[TR:.*]] = vector.transpose %[[VEC]], [1, 0] : vector<4x[8]xi16> to vector<[8]x4xi16>
 // CHECK:           vector.transfer_write
 // CHECK-NOT:       permutation_map
 // CHECK-SAME:      %[[TR]], %[[MEM]]{{.*}}, %[[MASK]] {in_bounds = [true, true]} : vector<[8]x4xi16>, memref<2x2x?x4xi16>
 func.func @xfer_write_transposing_permutation_map_with_mask_scalable(
-    %arg0: vector<4x[8]xi16>,
+    %vec: vector<4x[8]xi16>,
     %mem: memref<2x2x?x4xi16>,
     %mask: vector<[8]x4xi1>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %arg0, %mem[%c0, %c0, %c0, %c0], %mask {
+  vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0], %mask {
     in_bounds = [true, true],
     permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, d2)>
   } : vector<4x[8]xi16>, memref<2x2x?x4xi16>
@@ -87,13 +84,13 @@ func.func @xfer_write_transposing_permutation_map_with_mask_scalable(
 // CHECK-LABEL:   func.func @xfer_write_transposing_permutation_map_masked
 // CHECK-NOT: vector.transpose
 func.func @xfer_write_transposing_permutation_map_masked(
-    %arg0: vector<4x8xi16>,
+    %vec: vector<4x8xi16>,
     %mem: memref<2x2x8x4xi16>,
     %mask: vector<8x4xi1>) {
 
   %c0 = arith.constant 0 : index
   vector.mask %mask {
-    vector.transfer_write %arg0, %mem[%c0, %c0, %c0, %c0] {
+    vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0] {
       in_bounds = [true, true],
       permutation_map = affine_map<(d0, d1, d2, d3) -> (d3, d2)>
     } : vector<4x8xi16>, memref<2x2x8x4xi16>
@@ -122,13 +119,14 @@ func.func @xfer_write_transposing_permutation_map_masked(
 // CHECK:           vector.transfer_write %[[TR]], %[[MEM]]{{\[}}%[[IDX_1]], %[[IDX_2]]] {in_bounds = [false, true]} : vector<7x1xf32>, memref<?x?xf32>
 func.func @xfer_write_non_transposing_permutation_map(
     %mem : memref<?x?xf32>,
-    %arg0 : vector<7xf32>,
+    %vec : vector<7xf32>,
     %idx_1 : index,
     %idx_2 : index) {
 
-  vector.transfer_write %arg0, %mem[%idx_1, %idx_2]
-    {permutation_map = affine_map<(d0, d1) -> (d0)>}
-    : vector<7xf32>, memref<?x?xf32>
+  vector.transfer_write %vec, %mem[%idx_1, %idx_2] {
+    permutation_map = affine_map<(d0, d1) -> (d0)>
+  } : vector<7xf32>, memref<?x?xf32>
+
   return
 }
 
@@ -145,76 +143,109 @@ func.func @xfer_write_non_transposing_permutation_map(
 // CHECK:           vector.transfer_write %[[TR_VEC]], %[[MEM]]{{\[}}%[[IDX_1]], %[[IDX_2]]], %[[TR_MASK]] {in_bounds = [false, true]} : vector<7x1xf32>, memref<?x?xf32>
 func.func @xfer_write_non_transposing_permutation_map_with_mask_out_of_bounds(
     %mem : memref<?x?xf32>,
-    %arg0 : vector<7xf32>,
+    %vec : vector<7xf32>,
     %idx_1 : index,
     %idx_2 : index,
     %mask : vector<7xi1>) {
 
-  vector.transfer_write %arg0, %mem[%idx_1, %idx_2], %mask
-    {permutation_map = affine_map<(d0, d1) -> (d0)>, in_bounds = [false]}
-    : vector<7xf32>, memref<?x?xf32>
+  vector.transfer_write %vec, %mem[%idx_1, %idx_2], %mask {
+    permutation_map = affine_map<(d0, d1) -> (d0)>,
+    in_bounds = [false]
+  } : vector<7xf32>, memref<?x?xf32>
+
   return
 }
 
 // CHECK:           func.func @permutation_with_mask_xfer_write_scalable(
-// CHECK-SAME:        %[[ARG_0:.*]]: vector<4x[8]xi16>,
-// CHECK-SAME:        %[[ARG_1:.*]]: memref<1x4x?x1xi16>,
+// CHECK-SAME:        %[[VEC:.*]]: vector<4x[8]xi16>,
+// CHECK-SAME:        %[[MEM:.*]]: memref<1x4x?x1xi16>,
 // CHECK-SAME:        %[[MASK:.*]]: vector<4x[8]xi1>) {
 // CHECK:             %[[C0:.*]] = arith.constant 0 : index
-// CHECK:             %[[BCAST_1:.*]] = vector.broadcast %[[ARG_0]] : vector<4x[8]xi16> to vector<1x4x[8]xi16>
+// CHECK:             %[[BCAST_1:.*]] = vector.broadcast %[[VEC]] : vector<4x[8]xi16> to vector<1x4x[8]xi16>
 // CHECK:             %[[BCAST_2:.*]] = vector.broadcast %[[MASK]] : vector<4x[8]xi1> to vector<1x4x[8]xi1>
 // CHECK:             %[[TRANSPOSE_1:.*]] =  vector.transpose %[[BCAST_2]], [1, 2, 0] : vector<1x4x[8]xi1> to vector<4x[8]x1xi1>
 // CHECK:             %[[TRANSPOSE_2:.*]] =  vector.transpose %[[BCAST_1]], [1, 2, 0] : vector<1x4x[8]xi16> to vector<4x[8]x1xi16>
-// CHECK:             vector.transfer_write %[[TRANSPOSE_2]], %[[ARG_1]]{{.*}}, %[[TRANSPOSE_1]] {in_bounds = [true, true, true]} : vector<4x[8]x1xi16>, memref<1x4x?x1xi16>
-func.func @permutation_with_mask_xfer_write_scalable(%arg0: vector<4x[8]xi16>, %mem: memref<1x4x?x1xi16>, %mask:  vector<4x[8]xi1>){
-     %c0 = arith.constant 0 : index
-      vector.transfer_write %arg0, %mem[%c0, %c0, %c0, %c0], %mask {in_bounds = [true, true], permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, d2)>
-} : vector<4x[8]xi16>, memref<1x4x?x1xi16>
+// CHECK:             vector.transfer_write %[[TRANSPOSE_2]], %[[MEM]]{{.*}}, %[[TRANSPOSE_1]] {in_bounds = [true, true, true]} : vector<4x[8]x1xi16>, memref<1x4x?x1xi16>
+func.func @permutation_with_mask_xfer_write_scalable(
+    %vec: vector<4x[8]xi16>,
+    %mem: memref<1x4x?x1xi16>,
+    %mask: vector<4x[8]xi1>){
 
-    return
+  %c0 = arith.constant 0 : index
+  vector.transfer_write %vec, %mem[%c0, %c0, %c0, %c0], %mask {
+    in_bounds = [true, true],
+    permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, d2)>
+  } : vector<4x[8]xi16>, memref<1x4x?x1xi16>
+
+  return
 }
 
 // transfer_write in MaskOp case not supported.
 // CHECK-LABEL: func @masked_permutation_xfer_write_fixed_width
-//  CHECK-SAME:        %[[ARG_0:.*]]: tensor<?x?xf32>,
-//  CHECK-SAME:        %[[ARG_1:.*]]: vector<16xf32>,
-//  CHECK-SAME:        %[[IDX:.*]]: index,
-//  CHECK-SAME:        %[[MASK:.*]]: vector<16xi1>
+//  CHECK-SAME:   %[[DEST:.*]]: tensor<?x?xf32>,
+//  CHECK-SAME:   %[[VEC:.*]]: vector<16xf32>,
+//  CHECK-SAME:   %[[IDX:.*]]: index,
+//  CHECK-SAME:   %[[MASK:.*]]: vector<16xi1>
 //   CHECK-NOT:   vector.transpose
-//       CHECK:   %[[RES:.*]] = vector.mask %[[MASK]] { vector.transfer_write %[[ARG_1]], %[[ARG_0]]{{.*}} vector<16xf32>, tensor<?x?xf32> } : vector<16xi1> -> tensor<?x?xf32>
-func.func @masked_permutation_xfer_write_fixed_width(%t: tensor<?x?xf32>, %val: vector<16xf32>, %idx: index, %mask: vector<16xi1>) -> tensor<?x?xf32> {
-  %r = vector.mask %mask { vector.transfer_write %val, %t[%idx, %idx] {permutation_map = affine_map<(d0, d1) -> (d0)>} : vector<16xf32>, tensor<?x?xf32> } : vector<16xi1> -> tensor<?x?xf32>
+//       CHECK:   %[[RES:.*]] = vector.mask %[[MASK]] { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} vector<16xf32>, tensor<?x?xf32> } : vector<16xi1> -> tensor<?x?xf32>
+func.func @masked_permutation_xfer_write_fixed_width(
+    %dest: tensor<?x?xf32>,
+    %val: vector<16xf32>,
+    %idx: index,
+    %mask: vector<16xi1>) -> tensor<?x?xf32> {
+
+  %r = vector.mask %mask {
+    vector.transfer_write %val, %dest[%idx, %idx] {
+      permutation_map = affine_map<(d0, d1) -> (d0)>
+    } : vector<16xf32>, tensor<?x?xf32>
+  } : vector<16xi1> -> tensor<?x?xf32>
+
   return %r : tensor<?x?xf32>
 }
 
 // CHECK-LABEL: func.func @masked_permutation_xfer_write_scalable(
-//  CHECK-SAME:        %[[ARG_0:.*]]: vector<4x[8]xi16>,
-//  CHECK-SAME:        %[[ARG_1:.*]]: tensor<?x?x?x?xf32>,
-//  CHECK-SAME:        %[[MASK:.*]]: vector<4x[8]xi1>)
-//  CHECK-SAME:        -> tensor<?x?x?x?xf32> {
+//  CHECK-SAME:   %[[VEC:.*]]: vector<4x[8]xi16>,
+//  CHECK-SAME:   %[[DEST:.*]]: tensor<?x?x?x?xf32>,
+//  CHECK-SAME:   %[[MASK:.*]]: vector<4x[8]xi1>)
+//  CHECK-SAME:   -> tensor<?x?x?x?xf32> {
 //   CHECK-NOT:   vector.transpose
-//       CHECK:   %[[R:.*]] = vector.mask %[[MASK]] { vector.transfer_write %[[ARG_0]], %[[ARG_1]]{{.*}} : vector<4x[8]xi16>, tensor<?x?x?x?xf32> } : vector<4x[8]xi1> -> tensor<?x?x?x?xf32>
-func.func @masked_permutation_xfer_write_scalable(%arg0: vector<4x[8]xi16>, %t: tensor<?x?x?x?xf32>, %mask:  vector<4x[8]xi1>) -> tensor<?x?x?x?xf32> {
-     %c0 = arith.constant 0 : index
-     %r = vector.mask %mask { vector.transfer_write %arg0, %t[%c0, %c0, %c0, %c0] {in_bounds = [true, true], permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, d2)>
-} : vector<4x[8]xi16>, tensor<?x?x?x?xf32> } : vector<4x[8]xi1> -> tensor<?x?x?x?xf32>
+//       CHECK:   %[[R:.*]] = vector.mask %[[MASK]] { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<4x[8]xi16>, tensor<?x?x?x?xf32> } : vector<4x[8]xi1> -> tensor<?x?x?x?xf32>
+func.func @masked_permutation_xfer_write_scalable(
+    %vec: vector<4x[8]xi16>,
+    %dest: tensor<?x?x?x?xf32>,
+    %mask:  vector<4x[8]xi1>) -> tensor<?x?x?x?xf32> {
 
-    return %r : tensor<?x?x?x?xf32>
+  %c0 = arith.constant 0 : index
+  %r = vector.mask %mask {
+    vector.transfer_write %vec, %dest[%c0, %c0, %c0, %c0] {
+      in_bounds = [true, true],
+      permutation_map = affine_map<(d0, d1, d2, d3) -> (d1, d2)>
+    } : vector<4x[8]xi16>, tensor<?x?x?x?xf32>
+  } : vector<4x[8]xi1> -> tensor<?x?x?x?xf32>
+
+  return %r : tensor<?x?x?x?xf32>
 }
 
 // transfer_write in MaskOp case not supported.
 // CHECK-LABEL: func @masked_non_permutation_xfer_write_fixed_width
-//  CHECK-SAME:      %[[ARG0:.*]]: tensor<?x?x?x?xf32>
-//  CHECK-SAME:      %[[ARG1:.*]]: vector<14x8x16xf32>
-//  CHECK-SAME:      %[[IDX:.*]]: index) -> tensor<?x?x?x?xf32>
+//  CHECK-SAME:   %[[DEST:.*]]: tensor<?x?x?x?xf32>
+//  CHECK-SAME:   %[[VEC:.*]]: vector<14x8x16xf32>
+//  CHECK-SAME:   %[[IDX:.*]]: index) -> tensor<?x?x?x?xf32>
 //   CHECK-NOT:   vector.broadcast
-//       CHECK:   %[[masked1:.*]] = vector.mask %0 { vector.transfer_write %[[ARG1]], %[[ARG0]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
+//       CHECK:   %[[masked1:.*]] = vector.mask %0 { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
 func.func @masked_non_permutation_xfer_write_fixed_width(
-    %arg0 : tensor<?x?x?x?xf32>,
-    %v1 : vector<14x8x16xf32>, %dim : index) -> tensor<?x?x?x?xf32> {
+    %dest : tensor<?x?x?x?xf32>,
+    %vec : vector<14x8x16xf32>,
+    %dim : index) -> tensor<?x?x?x?xf32> {
+
   %c0 = arith.constant 0 : index
   %mask = vector.create_mask %dim, %dim, %dim : vector<14x8x16xi1>
-  %0 = vector.mask %mask { vector.transfer_write %v1, %arg0[%c0, %c0, %c0, %c0] {in_bounds = [false, false, true], permutation_map = affine_map<(d0, d1, d2, d3) -> (d0, d1, d3)>} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
+  %0 = vector.mask %mask {
+    vector.transfer_write %vec, %dest[%c0, %c0, %c0, %c0] {
+      in_bounds = [false, false, true],
+      permutation_map = affine_map<(d0, d1, d2, d3) -> (d0, d1, d3)>
+    } : vector<14x8x16xf32>, tensor<?x?x?x?xf32>
+  } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
 
   return %0 : tensor<?x?x?x?xf32>
 }
@@ -229,80 +260,105 @@ func.func @masked_non_permutation_xfer_write_fixed_width(
 ///     vector.transpose op
 
 // CHECK-LABEL:   func.func @permutation_with_mask_xfer_read_fixed_width(
-// CHECK-SAME:      %[[ARG_0:.*]]: memref<?x?xf32>,
+// CHECK-SAME:      %[[MEM:.*]]: memref<?x?xf32>,
 // CHECK-SAME:      %[[IDX_1:.*]]: index,
 // CHECK-SAME:      %[[IDX_2:.*]]: index) -> vector<8x4x2xf32> {
 // CHECK:           %[[C0:.*]] = arith.constant 0 : index
 // CHECK:           %[[PASS_THROUGH:.*]] = arith.constant 0.000000e+00 : f32
 // CHECK:           %[[MASK:.*]] = vector.create_mask %[[IDX_2]], %[[IDX_1]] : vector<2x4xi1>
-// CHECK:           %[[T_READ:.*]] = vector.transfer_read %[[ARG_0]]{{\[}}%[[C0]], %[[C0]]], %[[PASS_THROUGH]], %[[MASK]] {in_bounds = [true, true]} : memref<?x?xf32>, vector<2x4xf32>
+// CHECK:           %[[T_READ:.*]] = vector.transfer_read %[[MEM]]{{\[}}%[[C0]], %[[C0]]], %[[PASS_THROUGH]], %[[MASK]] {in_bounds = [true, true]} : memref<?x?xf32>, vector<2x4xf32>
 // CHECK:           %[[BCAST:.*]] = vector.broadcast %[[T_READ]] : vector<2x4xf32> to vector<8x2x4xf32>
 // CHECK:           %[[TRANSPOSE:.*]] = vector.transpose %[[BCAST]], [0, 2, 1] : vector<8x2x4xf32> to vector<8x4x2xf32>
 // CHECK:           return %[[TRANSPOSE]] : vector<8x4x2xf32>
-func.func @permutation_with_mask_xfer_read_fixed_width(%mem: memref<?x?xf32>, %dim_1: index, %dim_2: index) -> (vector<8x4x2xf32>) {
+func.func @permutation_with_mask_xfer_read_fixed_width(
+    %mem: memref<?x?xf32>,
+    %dim_1: index,
+    %dim_2: index) -> (vector<8x4x2xf32>) {
 
   %c0 = arith.constant 0 : index
   %cst_0 = arith.constant 0.000000e+00 : f32
 
   %mask = vector.create_mask %dim_2, %dim_1 : vector<2x4xi1>
-  %1 = vector.transfer_read %mem[%c0, %c0], %cst_0, %mask
-    {in_bounds = [true, true, true], permutation_map = affine_map<(d0, d1) -> (0, d1, d0)>}
-    : memref<?x?xf32>, vector<8x4x2xf32>
-  return %1 : vector<8x4x2xf32>
+  %res = vector.transfer_read %mem[%c0, %c0], %cst_0, %mask {
+    in_bounds = [true, true, true],
+    permutation_map = affine_map<(d0, d1) -> (0, d1, d0)>
+  } : memref<?x?xf32>, vector<8x4x2xf32>
+
+  return %res : vector<8x4x2xf32>
 }
 
 // CHECK-LABEL:   func.func @permutation_with_mask_xfer_read_scalable(
-// CHECK-SAME:      %[[ARG_0:.*]]: memref<?x?xf32>,
+// CHECK-SAME:      %[[MEM:.*]]: memref<?x?xf32>,
 // CHECK-SAME:      %[[IDX_1:.*]]: index,
 // CHECK-SAME:      %[[IDX_2:.*]]: index) -> vector<8x[4]x2xf32> {
 // CHECK:           %[[C0:.*]] = arith.constant 0 : index
 // CHECK:           %[[PASS_THROUGH:.*]] = arith.constant 0.000000e+00 : f32
 // CHECK:           %[[MASK:.*]] = vector.create_mask %[[IDX_2]], %[[IDX_1]] : vector<2x[4]xi1>
-// CHECK:           %[[T_READ:.*]] = vector.transfer_read %[[ARG_0]]{{\[}}%[[C0]], %[[C0]]], %[[PASS_THROUGH]], %[[MASK]] {in_bounds = [true, true]} : memref<?x?xf32>, vector<2x[4]xf32>
+// CHECK:           %[[T_READ:.*]] = vector.transfer_read %[[MEM]]{{\[}}%[[C0]], %[[C0]]], %[[PASS_THROUGH]], %[[MASK]] {in_bounds = [true, true]} : memref<?x?xf32>, vector<2x[4]xf32>
 // CHECK:           %[[BCAST:.*]] = vector.broadcast %[[T_READ]] : vector<2x[4]xf32> to vector<8x2x[4]xf32>
 // CHECK:           %[[TRANSPOSE:.*]] = vector.transpose %[[BCAST]], [0, 2, 1] : vector<8x2x[4]xf32> to vector<8x[4]x2xf32>
 // CHECK:           return %[[TRANSPOSE]] : vector<8x[4]x2xf32>
-func.func @permutation_with_mask_xfer_read_scalable(%mem: memref<?x?xf32>, %dim_1: index, %dim_2: index) -> (vector<8x[4]x2xf32>) {
+func.func @permutation_with_mask_xfer_read_scalable(
+    %mem: memref<?x?xf32>,
+    %dim_1: index,
+    %dim_2: index) -> (vector<8x[4]x2xf32>) {
 
   %c0 = arith.constant 0 : index
   %cst_0 = arith.constant 0.000000e+00 : f32
 
   %mask = vector.create_mask %dim_2, %dim_1 : vector<2x[4]xi1>
-  %1 = vector.transfer_read %mem[%c0, %c0], %cst_0, %mask
-    {in_bounds = [true, true, true], permutation_map = affine_map<(d0, d1) -> (0, d1, d0)>}
-    : memref<?x?xf32>, vector<8x[4]x2xf32>
-  return %1 : vector<8x[4]x2xf32>
+  %res = vector.transfer_read %mem[%c0, %c0], %cst_0, %mask {
+    in_bounds = [true, true, true],
+    permutation_map = affine_map<(d0, d1) -> (0, d1, d0)>
+  } : memref<?x?xf32>, vector<8x[4]x2xf32>
+
+  return %res : vector<8x[4]x2xf32>
 }
 
 // transfer_read in MaskOp case not supported.
 // CHECK-LABEL: func @masked_permutation_xfer_read_fixed_width
-//  CHECK-SAME:        %[[ARG_0:.*]]: tensor<?x1xf32>,
-//  CHECK-SAME:        %[[ARG_1:.*]]: vector<4x1xi1>
+//  CHECK-SAME:   %[[DEST:.*]]: tensor<?x1xf32>,
+//  CHECK-SAME:   %[[MASK:.*]]: vector<4x1xi1>
 //   CHECK-NOT:   vector.transpose
-//       CHECK:   vector.mask %[[ARG_1]] { vector.transfer_read %[[ARG_0]]{{.*}}: tensor<?x1xf32>, vector<1x4x4xf32> } : vector<4x1xi1> -> vector<1x4x4xf32>
-func.func @masked_permutation_xfer_read_fixed_width(%arg0: tensor<?x1xf32>, %mask : vector<4x1xi1>) {
+//       CHECK:   vector.mask %[[MASK]] { vector.transfer_read %[[DEST]]{{.*}}: tensor<?x1xf32>, vector<1x4x4xf32> } : vector<4x1xi1> -...
[truncated]

banach-space added a commit to banach-space/llvm-project that referenced this pull request Aug 1, 2024
Updates formatting and variable names in:
  * vector-transfer-flatten.mlir

This is primarily to improve consistency, both within this particular
test file as well as across tests.

Overview of changes:
  * All memref input arguments are re-named as `%mem`.
  * All vector input arguments are re-named as `%vec`.
  * All tensor input arguments are re-named as `%dest`.
  * LIT variables are updated to be consistent with input arguments.
  * Renamed all output arguments as `%res`.
  * Updated indentation to be more C-like.

Note, similar changes for vector-transfer-permutation-lowering.mlir were
implemented in llvm#101468.
Contributor

@nujaa nujaa left a comment

Hello again, what reactivity! A few missed ones; also, there's a typo in the description: Overivew => Overview.

@@ -1,9 +1,6 @@
// RUN: mlir-opt %s --transform-interpreter --split-input-file | FileCheck %s

// TODO: Align naming with e.g. vector-transfer-flatten.mlir
Contributor

What is remaining, in your opinion?

Contributor Author

I think that this can go. Unless you see something outstanding.

// CHECK: %[[C0:.*]] = arith.constant 0 : index
// CHECK: %[[TFR:.*]] = vector.transfer_read %arg0[%[[C0]], %[[C0]], %[[C0]], %[[C0]]]{{.*}} permutation_map = #[[MAP]]} : memref<?x?x?x?xf32>, vector<[4]x2x3xf32>
// CHECK: %[[TFR:.*]] = vector.transfer_read %[[MEM]][%[[C0]], %[[C0]], %[[C0]], %[[C0]]]{{.*}} permutation_map = #[[MAP]]} : memref<?x?x?x?xf32>, vector<[4]x2x3xf32>
Contributor

transfer_read is called TR above and T_READ below. 🤷

Contributor Author

TR tends to be used for transposes :)

On a related note, I think that some level of inconsistency is fine - it's bound to happen. It's just that what we have today is quite chaotic 😅 In my refactoring, I tend to focus on the inputs and the outputs, as those are the one thing common across all tests. But otherwise ... 🤷🏻

But I won't mind if you prefer more rigour :)

Contributor

OK, let's not bother with CHECKs variable naming.

// CHECK-NOT: vector.broadcast
// CHECK: %[[masked1:.*]] = vector.mask %0 { vector.transfer_write %[[ARG1]], %[[ARG0]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
// CHECK: %[[masked1:.*]] = vector.mask %0 { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
Contributor

Suggested change
// CHECK: %[[masked1:.*]] = vector.mask %0 { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
// CHECK: %[[RES:.*]] = vector.mask %0 { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>

Contributor Author

I will just remove the LIT var definition.
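
For reference, dropping the LIT variable definition from that CHECK line would look roughly like this (a sketch based on the suggestion above, not the exact committed line):

```mlir
// CHECK: vector.mask %0 { vector.transfer_write %[[VEC]], %[[DEST]]{{.*}} : vector<14x8x16xf32>, tensor<?x?x?x?xf32> } : vector<14x8x16xi1> -> tensor<?x?x?x?xf32>
```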

// CHECK-NOT: vector.broadcast
// CHECK: %[[MASK:.*]] = vector.mask %0 { vector.transfer_read %arg0{{.*}} : memref<?x?x?x?xf32>, vector<8x[4]x2x3xf32> } : vector<[4]x3xi1> -> vector<8x[4]x2x3xf32>
func.func @masked_transfer_read_reduce_rank(%mem: memref<?x?x?x?xf32>, %dim: index) -> vector<8x[4]x2x3xf32> {
// CHECK: %[[MASK:.*]] = vector.mask %0 { vector.transfer_read %[[MEM]]{{.*}} : memref<?x?x?x?xf32>, vector<8x[4]x2x3xf32> } : vector<[4]x3xi1> -> vector<8x[4]x2x3xf32>
Contributor

Suggested change
// CHECK: %[[MASK:.*]] = vector.mask %0 { vector.transfer_read %[[MEM]]{{.*}} : memref<?x?x?x?xf32>, vector<8x[4]x2x3xf32> } : vector<[4]x3xi1> -> vector<8x[4]x2x3xf32>
// CHECK: %[[RES:.*]] = vector.mask %0 { vector.transfer_read %[[MEM]]{{.*}} : memref<?x?x?x?xf32>, vector<8x[4]x2x3xf32> } : vector<[4]x3xi1> -> vector<8x[4]x2x3xf32>

Contributor Author

Will remove altogether

@banach-space
Contributor Author

Hello again, what reactivity! A few missed ones; also, there's a typo in the description: Overivew => Overview.

Two of my meetings got cancelled today 😂 Also, I've been meaning to finish this for ages 😅

Contributor

@nujaa nujaa left a comment

LGTM. 🚀

@banach-space banach-space merged commit 98e4413 into llvm:main Aug 1, 2024
5 of 7 checks passed
@banach-space banach-space deleted the andrzej/refactor_xfer_permute_low_test_3 branch August 1, 2024 16:22
banach-space added a commit to banach-space/llvm-project that referenced this pull request Aug 1, 2024
Updates formatting and variable names in:
  * vector-transfer-flatten.mlir

This is primarily to improve consistency, both within this particular
test file as well as across tests.

Overview of changes:
  * All memref input arguments are re-named as `%mem`.
  * All vector input arguments are re-named as `%vec`.
  * All tensor input arguments are re-named as `%dest`.
  * LIT variables are updated to be consistent with input arguments.
  * Renamed all output arguments as `%res`.
  * Updated indentation to be more C-like.

Note, similar changes for vector-transfer-permutation-lowering.mlir were
implemented in llvm#101468.
banach-space added a commit that referenced this pull request Aug 2, 2024
Updates formatting and variable names in:
  * vector-transfer-flatten.mlir

This is primarily to improve consistency, both within this particular
test file as well as across tests.

Overview of changes:
  * All memref input arguments are re-named as `%mem`.
  * All vector input arguments are re-named as `%vec`.
  * All tensor input arguments are re-named as `%dest`.
  * LIT variables are updated to be consistent with input arguments.
  * Renamed all output arguments as `%res`.
  * Updated indentation to be more C-like.

Note, similar changes for vector-transfer-permutation-lowering.mlir were
implemented in #101468.