[mlir][vector] Update tests for xfer-flatten (nfc) #101471


Merged — 2 commits, Aug 2, 2024

Conversation

banach-space
Contributor

Updates formatting and variable names in:

  • vector-transfer-flatten.mlir

This is primarily to improve consistency, both within this particular
test file as well as across tests.

Overview of changes:

  • All memref input arguments are renamed as %mem.
  • All vector input arguments are renamed as %vec.
  • All tensor input arguments are renamed as %dest.
  • All output arguments are renamed as %res.
  • LIT variables are updated to be consistent with the input arguments.
  • Indentation is updated to be more C-like.

Note: similar changes for vector-transfer-permutation-lowering.mlir were
implemented in #101468.
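For illustration, the renaming convention looks like this when applied to the first test in the file (a sketch taken from the patch hunks below; only the SSA value names change, the test semantics are untouched):

```mlir
// Before: generic names (%arg for the input, %v for the result).
func.func @transfer_read_dims_match_contiguous(
    %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
  %c0 = arith.constant 0 : index
  %cst = arith.constant 0 : i8
  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
    memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, vector<5x4x3x2xi8>
  return %v : vector<5x4x3x2xi8>
}

// After: the memref argument becomes %mem and the output becomes %res,
// with the LIT capture renamed to match (%[[ARG]] -> %[[MEM]]).
func.func @transfer_read_dims_match_contiguous(
    %mem : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
  %c0 = arith.constant 0 : index
  %cst = arith.constant 0 : i8
  %res = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
    memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, vector<5x4x3x2xi8>
  return %res : vector<5x4x3x2xi8>
}
```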

@llvmbot
Member

llvmbot commented Aug 1, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-vector

Author: Andrzej Warzyński (banach-space)


Patch is 29.18 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/101471.diff

1 file affected:

  • (modified) mlir/test/Dialect/Vector/vector-transfer-flatten.mlir (+115-111)
diff --git a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
index 621baef82319f..46155b0d044fc 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
@@ -7,18 +7,18 @@
 ///----------------------------------------------------------------------------------------
 
 func.func @transfer_read_dims_match_contiguous(
-    %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
+    %mem : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
 
   %c0 = arith.constant 0 : index
   %cst = arith.constant 0 : i8
-  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
+  %res = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
     memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, vector<5x4x3x2xi8>
-  return %v : vector<5x4x3x2xi8>
+  return %res : vector<5x4x3x2xi8>
 }
 
 // CHECK-LABEL: func @transfer_read_dims_match_contiguous
-// CHECK-SAME:    %[[ARG:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
-// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG]] {{.}}[0, 1, 2, 3]
+// CHECK-SAME:    %[[MEM:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
+// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{.}}[0, 1, 2, 3]
 // CHECK:         %[[READ1D:.+]] = vector.transfer_read %[[COLLAPSED]]
 // CHECK:         %[[VEC2D:.+]] = vector.shape_cast %[[READ1D]] : vector<120xi8> to vector<5x4x3x2xi8>
 // CHECK:         return %[[VEC2D]]
@@ -29,18 +29,18 @@ func.func @transfer_read_dims_match_contiguous(
 // -----
 
 func.func @transfer_read_dims_match_contiguous_empty_stride(
-    %arg : memref<5x4x3x2xi8>) -> vector<5x4x3x2xi8> {
+    %mem : memref<5x4x3x2xi8>) -> vector<5x4x3x2xi8> {
 
   %c0 = arith.constant 0 : index
   %cst = arith.constant 0 : i8
-  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
+  %res = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
     memref<5x4x3x2xi8>, vector<5x4x3x2xi8>
-  return %v : vector<5x4x3x2xi8>
+  return %res : vector<5x4x3x2xi8>
 }
 
 // CHECK-LABEL: func @transfer_read_dims_match_contiguous_empty_stride(
-// CHECK-SAME:    %[[ARG:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
-// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG]] {{.}}[0, 1, 2, 3]
+// CHECK-SAME:    %[[MEM:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
+// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{.}}[0, 1, 2, 3]
 // CHECK:         %[[READ1D:.+]] = vector.transfer_read %[[COLLAPSED]]
 // CHECK:         %[[VEC2D:.+]] = vector.shape_cast %[[READ1D]] : vector<120xi8> to vector<5x4x3x2xi8>
 // CHECK:         return %[[VEC2D]]
@@ -54,20 +54,20 @@ func.func @transfer_read_dims_match_contiguous_empty_stride(
 // contiguous subset of the memref, so "flattenable".
 
 func.func @transfer_read_dims_mismatch_contiguous(
-    %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<1x1x2x2xi8> {
+    %mem : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<1x1x2x2xi8> {
 
   %c0 = arith.constant 0 : index
   %cst = arith.constant 0 : i8
-  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
+  %res = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
     memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, vector<1x1x2x2xi8>
-  return %v : vector<1x1x2x2xi8>
+  return %res : vector<1x1x2x2xi8>
 }
 
 // CHECK-LABEL:   func.func @transfer_read_dims_mismatch_contiguous(
-// CHECK-SAME:      %[[VAL_0:.*]]: memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<1x1x2x2xi8> {
+// CHECK-SAME:      %[[MEM:.*]]: memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>) -> vector<1x1x2x2xi8> {
 // CHECK:           %[[VAL_1:.*]] = arith.constant 0 : i8
 // CHECK:           %[[VAL_2:.*]] = arith.constant 0 : index
-// CHECK:           %[[VAL_3:.*]] = memref.collapse_shape %[[VAL_0]] {{\[\[}}0, 1, 2, 3]] : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>> into memref<120xi8, strided<[1], offset: ?>>
+// CHECK:           %[[VAL_3:.*]] = memref.collapse_shape %[[MEM]] {{\[\[}}0, 1, 2, 3]] : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>> into memref<120xi8, strided<[1], offset: ?>>
 // CHECK:           %[[VAL_4:.*]] = vector.transfer_read %[[VAL_3]]{{\[}}%[[VAL_2]]], %[[VAL_1]] {in_bounds = [true]} : memref<120xi8, strided<[1], offset: ?>>, vector<4xi8>
 // CHECK:           %[[VAL_5:.*]] = vector.shape_cast %[[VAL_4]] : vector<4xi8> to vector<1x1x2x2xi8>
 // CHECK:           return %[[VAL_5]] : vector<1x1x2x2xi8>
@@ -80,23 +80,24 @@ func.func @transfer_read_dims_mismatch_contiguous(
 func.func @transfer_read_dims_mismatch_non_zero_indices(
     %idx_1: index,
     %idx_2: index,
-    %arg: memref<1x43x4x6xi32>) -> vector<1x2x6xi32>{
+    %mem: memref<1x43x4x6xi32>) -> vector<1x2x6xi32>{
 
   %c0 = arith.constant 0 : index
   %c0_i32 = arith.constant 0 : i32
-  %v = vector.transfer_read %arg[%c0, %idx_1, %idx_2, %c0], %c0_i32 {in_bounds = [true, true, true]} :
-    memref<1x43x4x6xi32>, vector<1x2x6xi32>
-  return %v : vector<1x2x6xi32>
+  %res = vector.transfer_read %mem[%c0, %idx_1, %idx_2, %c0], %c0_i32 {
+    in_bounds = [true, true, true]
+  } : memref<1x43x4x6xi32>, vector<1x2x6xi32>
+  return %res : vector<1x2x6xi32>
 }
 
 // CHECK: #[[$ATTR_0:.+]] = affine_map<()[s0, s1] -> (s0 * 24 + s1 * 6)>
 
 // CHECK-LABEL:   func.func @transfer_read_dims_mismatch_non_zero_indices(
 // CHECK-SAME:      %[[IDX_1:.*]]: index, %[[IDX_2:.*]]: index,
-// CHECK-SAME:      %[[M_IN:.*]]: memref<1x43x4x6xi32>
+// CHECK-SAME:      %[[MEM:.*]]: memref<1x43x4x6xi32>
 // CHECK:           %[[C_0:.*]] = arith.constant 0 : i32
 // CHECK:           %[[C_0_IDX:.*]] = arith.constant 0 : index
-// CHECK:           %[[COLLAPSED_IN:.*]] = memref.collapse_shape %[[M_IN]] {{\[}}[0], [1, 2, 3]] : memref<1x43x4x6xi32> into memref<1x1032xi32>
+// CHECK:           %[[COLLAPSED_IN:.*]] = memref.collapse_shape %[[MEM]] {{\[}}[0], [1, 2, 3]] : memref<1x43x4x6xi32> into memref<1x1032xi32>
 // CHECK:           %[[COLLAPSED_IDX:.*]] = affine.apply #[[$ATTR_0]]()[%[[IDX_1]], %[[IDX_2]]]
 // CHECK:           %[[READ:.*]] = vector.transfer_read %[[COLLAPSED_IN]][%[[C_0_IDX]], %[[COLLAPSED_IDX]]], %[[C_0]] {in_bounds = [true]} : memref<1x1032xi32>, vector<12xi32>
 
@@ -109,15 +110,16 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(
 // the output vector is to be read _is_ contiguous. Hence the flattening works fine.
 
 func.func @transfer_read_dims_mismatch_non_contiguous_non_zero_indices(
-    %arg : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+    %mem : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
     %idx_1 : index,
     %idx_2 : index) -> vector<2x2xf32> {
 
   %c0 = arith.constant 0 : index
   %cst_1 = arith.constant 0.000000e+00 : f32
-  %8 = vector.transfer_read %arg[%c0, %idx_1, %idx_2, %c0], %cst_1 {in_bounds = [true, true]} :
-    memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
-  return %8 : vector<2x2xf32>
+  %res = vector.transfer_read %mem[%c0, %idx_1, %idx_2, %c0], %cst_1 {
+    in_bounds = [true, true]
+  } : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
+  return %res : vector<2x2xf32>
 }
 
 // CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
@@ -136,22 +138,23 @@ func.func @transfer_read_dims_mismatch_non_contiguous_non_zero_indices(
 // or not. Indeed, those dynamic shapes are not candidates for flattening anyway.
 
 func.func @transfer_read_leading_dynamic_dims(
-    %arg : memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>,
+    %mem : memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>,
     %idx_1 : index,
     %idx_2 : index) -> vector<8x4xi8> {
 
   %c0_i8 = arith.constant 0 : i8
   %c0 = arith.constant 0 : index
-  %result = vector.transfer_read %arg[%idx_1, %idx_2, %c0, %c0], %c0_i8 {in_bounds = [true, true]} :
-    memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>, vector<8x4xi8>
-  return %result : vector<8x4xi8>
+  %res = vector.transfer_read %mem[%idx_1, %idx_2, %c0, %c0], %c0_i8 {
+    in_bounds = [true, true]
+  } : memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>, vector<8x4xi8>
+  return %res : vector<8x4xi8>
 }
 
 // CHECK-LABEL: func @transfer_read_leading_dynamic_dims
-// CHECK-SAME:    %[[ARG0:.+]]: memref<?x?x8x4xi8, {{.+}}>, %[[ARG1:.+]]: index, %[[ARG2:.+]]: index
+// CHECK-SAME:    %[[MEM:.+]]: memref<?x?x8x4xi8, {{.+}}>, %[[ARG1:.+]]: index, %[[ARG2:.+]]: index
 // CHECK:         %[[C0_I8:.+]] = arith.constant 0 : i8
 // CHECK:         %[[C0:.+]] = arith.constant 0 : index
-// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG0]] {{\[}}[0], [1], [2, 3]{{\]}}
+// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{\[}}[0], [1], [2, 3]{{\]}}
 // CHECK-SAME:      : memref<?x?x8x4xi8, {{.+}}> into memref<?x?x32xi8, {{.+}}>
 // CHECK:         %[[VEC1D:.+]] = vector.transfer_read %[[COLLAPSED]]
 // CHECK-SAME:    [%[[ARG1]], %[[ARG2]], %[[C0]]], %[[C0_I8]]
@@ -170,12 +173,13 @@ func.func @transfer_read_leading_dynamic_dims(
 func.func @negative_transfer_read_dynamic_dim_to_flatten(
     %idx_1: index,
     %idx_2: index,
-    %m_in: memref<1x?x4x6xi32>) -> vector<1x2x6xi32> {
+    %mem: memref<1x?x4x6xi32>) -> vector<1x2x6xi32> {
 
   %c0 = arith.constant 0 : index
   %c0_i32 = arith.constant 0 : i32
-  %v = vector.transfer_read %m_in[%c0, %idx_1, %idx_2, %c0], %c0_i32 {in_bounds = [true, true, true]} :
-    memref<1x?x4x6xi32>, vector<1x2x6xi32>
+  %v = vector.transfer_read %mem[%c0, %idx_1, %idx_2, %c0], %c0_i32 {
+    in_bounds = [true, true, true]
+  } : memref<1x?x4x6xi32>, vector<1x2x6xi32>
   return %v : vector<1x2x6xi32>
 }
 
@@ -192,11 +196,11 @@ func.func @negative_transfer_read_dynamic_dim_to_flatten(
 // memref.
 
 func.func @transfer_read_dims_mismatch_non_contiguous_slice(
-    %arg : memref<5x4x3x2xi8>) -> vector<2x1x2x2xi8> {
+    %mem : memref<5x4x3x2xi8>) -> vector<2x1x2x2xi8> {
 
   %c0 = arith.constant 0 : index
   %cst = arith.constant 0 : i8
-  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
+  %v = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
     memref<5x4x3x2xi8>, vector<2x1x2x2xi8>
   return %v : vector<2x1x2x2xi8>
 }
@@ -211,10 +215,10 @@ func.func @transfer_read_dims_mismatch_non_contiguous_slice(
 // -----
 
 func.func @transfer_read_0d(
-    %arg : memref<i8>) -> vector<i8> {
+    %mem : memref<i8>) -> vector<i8> {
 
   %cst = arith.constant 0 : i8
-  %0 = vector.transfer_read %arg[], %cst : memref<i8>, vector<i8>
+  %0 = vector.transfer_read %mem[], %cst : memref<i8>, vector<i8>
   return %0 : vector<i8>
 }
 
@@ -231,11 +235,11 @@ func.func @transfer_read_0d(
 // Strides make the input memref non-contiguous, hence non-flattenable.
 
 func.func @transfer_read_non_contiguous_src(
-    %arg : memref<5x4x3x2xi8, strided<[24, 8, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
+    %mem : memref<5x4x3x2xi8, strided<[24, 8, 2, 1], offset: ?>>) -> vector<5x4x3x2xi8> {
 
   %c0 = arith.constant 0 : index
   %cst = arith.constant 0 : i8
-  %v = vector.transfer_read %arg[%c0, %c0, %c0, %c0], %cst :
+  %v = vector.transfer_read %mem[%c0, %c0, %c0, %c0], %cst :
     memref<5x4x3x2xi8, strided<[24, 8, 2, 1], offset: ?>>, vector<5x4x3x2xi8>
   return %v : vector<5x4x3x2xi8>
 }
@@ -256,19 +260,19 @@ func.func @transfer_read_non_contiguous_src(
 ///----------------------------------------------------------------------------------------
 
 func.func @transfer_write_dims_match_contiguous(
-    %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
+    %mem : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
     %vec : vector<5x4x3x2xi8>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %vec, %arg [%c0, %c0, %c0, %c0] :
+  vector.transfer_write %vec, %mem [%c0, %c0, %c0, %c0] :
     vector<5x4x3x2xi8>, memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>
   return
 }
 
 // CHECK-LABEL: func @transfer_write_dims_match_contiguous(
-// CHECK-SAME:    %[[ARG:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
+// CHECK-SAME:    %[[MEM:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
 // CHECK-SAME:    %[[VEC:[0-9a-zA-Z]+]]: vector<5x4x3x2xi8>
-// CHECK-DAG:     %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG]] {{.}}[0, 1, 2, 3]{{.}} : memref<5x4x3x2xi8, {{.+}}> into memref<120xi8, {{.+}}>
+// CHECK-DAG:     %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{.}}[0, 1, 2, 3]{{.}} : memref<5x4x3x2xi8, {{.+}}> into memref<120xi8, {{.+}}>
 // CHECK-DAG:     %[[VEC1D:.+]] = vector.shape_cast %[[VEC]] : vector<5x4x3x2xi8> to vector<120xi8>
 // CHECK:         vector.transfer_write %[[VEC1D]], %[[COLLAPSED]]
 
@@ -278,19 +282,19 @@ func.func @transfer_write_dims_match_contiguous(
 // -----
 
 func.func @transfer_write_dims_match_contiguous_empty_stride(
-    %arg : memref<5x4x3x2xi8>,
+    %mem : memref<5x4x3x2xi8>,
     %vec : vector<5x4x3x2xi8>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %vec, %arg [%c0, %c0, %c0, %c0] :
+  vector.transfer_write %vec, %mem [%c0, %c0, %c0, %c0] :
     vector<5x4x3x2xi8>, memref<5x4x3x2xi8>
   return
 }
 
 // CHECK-LABEL: func @transfer_write_dims_match_contiguous_empty_stride(
-// CHECK-SAME:    %[[ARG:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
+// CHECK-SAME:    %[[MEM:[0-9a-zA-Z]+]]: memref<5x4x3x2xi8
 // CHECK-SAME:    %[[VEC:[0-9a-zA-Z]+]]: vector<5x4x3x2xi8>
-// CHECK-DAG:     %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG]] {{.}}[0, 1, 2, 3]{{.}} : memref<5x4x3x2xi8> into memref<120xi8>
+// CHECK-DAG:     %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{.}}[0, 1, 2, 3]{{.}} : memref<5x4x3x2xi8> into memref<120xi8>
 // CHECK-DAG:     %[[VEC1D:.+]] = vector.shape_cast %[[VEC]] : vector<5x4x3x2xi8> to vector<120xi8>
 // CHECK:         vector.transfer_write %[[VEC1D]], %[[COLLAPSED]]
 
@@ -300,21 +304,21 @@ func.func @transfer_write_dims_match_contiguous_empty_stride(
 // -----
 
 func.func @transfer_write_dims_mismatch_contiguous(
-    %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
+    %mem : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
     %vec : vector<1x1x2x2xi8>) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %vec, %arg [%c0, %c0, %c0, %c0] :
+  vector.transfer_write %vec, %mem [%c0, %c0, %c0, %c0] :
     vector<1x1x2x2xi8>, memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>
   return
 }
 
 // CHECK-LABEL:   func.func @transfer_write_dims_mismatch_contiguous
-// CHECK-SAME:      %[[VAL_0:.*]]: memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
-// CHECK-SAME:      %[[VAL_1:.*]]: vector<1x1x2x2xi8>) {
+// CHECK-SAME:      %[[MEM:.*]]: memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>,
+// CHECK-SAME:      %[[VEC:.*]]: vector<1x1x2x2xi8>) {
 // CHECK:           %[[VAL_2:.*]] = arith.constant 0 : index
-// CHECK:           %[[VAL_3:.*]] = memref.collapse_shape %[[VAL_0]] {{\[\[}}0, 1, 2, 3]] : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>> into memref<120xi8, strided<[1], offset: ?>>
-// CHECK:           %[[VAL_4:.*]] = vector.shape_cast %[[VAL_1]] : vector<1x1x2x2xi8> to vector<4xi8>
+// CHECK:           %[[VAL_3:.*]] = memref.collapse_shape %[[MEM]] {{\[\[}}0, 1, 2, 3]] : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>> into memref<120xi8, strided<[1], offset: ?>>
+// CHECK:           %[[VAL_4:.*]] = vector.shape_cast %[[VEC]] : vector<1x1x2x2xi8> to vector<4xi8>
 // CHECK:           vector.transfer_write %[[VAL_4]], %[[VAL_3]]{{\[}}%[[VAL_2]]] {in_bounds = [true]} : vector<4xi8>, memref<120xi8, strided<[1], offset: ?>>
 
 // CHECK-128B-LABEL: func @transfer_write_dims_mismatch_contiguous(
@@ -325,12 +329,12 @@ func.func @transfer_write_dims_mismatch_contiguous(
 func.func @transfer_write_dims_mismatch_non_zero_indices(
     %idx_1: index,
     %idx_2: index,
-    %arg: memref<1x43x4x6xi32>,
+    %mem: memref<1x43x4x6xi32>,
     %vec: vector<1x2x6xi32>) {
 
   %c0 = arith.constant 0 : index
   %c0_i32 = arith.constant 0 : i32
-  vector.transfer_write %vec, %arg[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true, true]} :
+  vector.transfer_write %vec, %mem[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true, true]} :
     vector<1x2x6xi32>, memref<1x43x4x6xi32>
   return
 }
@@ -339,11 +343,11 @@ func.func @transfer_write_dims_mismatch_non_zero_indices(
 
 // CHECK-LABEL:   func.func @transfer_write_dims_mismatch_non_zero_indices(
 // CHECK-SAME:      %[[IDX_1:.*]]: index, %[[IDX_2:.*]]: index,
-// CHECK-SAME:      %[[ARG:.*]]: memref<1x43x4x6xi32>,
+// CHECK-SAME:      %[[MEM:.*]]: memref<1x43x4x6xi32>,
 // CHECK-SAME:      %[[VEC:.*]]: vector<1x2x6xi32>) {
 // CHECK-DAG:       %[[C0:.*]] = arith.constant 0 : index
 // CHECK-DAG:       %[[IDX:.*]] = affine.apply #[[$ATTR_0]](){{\[}}%[[IDX_1]], %[[IDX_2]]]
-// CHECK-DAG:       %[[CS:.*]] = memref.collapse_shape %[[ARG]] {{\[\[}}0], [1, 2, 3]] : memref<1x43x4x6xi32> into memref<1x1032xi32>
+// CHECK-DAG:       %[[CS:.*]] = memref.collapse_shape %[[MEM]] {{\[\[}}0], [1, 2, 3]] : memref<1x43x4x6xi32> into memref<1x1032xi32>
 // CHECK:           %[[SC:.*]] = vector.shape_cast %[[VEC]] : vector<1x2x6xi32> to vector<12xi32>
 // CHECK:           vector.transfer_write %[[SC]], %[[CS]]{{\[}}%[[C0]], %[[IDX]]] {in_bounds = [true]} : vector<12xi32>, memref<1x1032xi32>
 
@@ -357,13 +361,13 @@ func.func @transfer_write_dims_mismatch_non_zero_indices(
 // flattening works fine.
 
 func.func @transfer_write_dims_mismatch_non_contiguous_non_zero_indices(
-    %value : vector<2x2xf32>,
-    %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+    %vec : vector<2x2xf32>,
+    %mem : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
     %idx_1 : index,
     %idx_2 : index) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %value, %subview[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
+  vector.transfer_write %vec, %mem[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
   return
 }
 
@@ -383,22 +387,22 @@ func.func @transfer_write_dims_mismatch_non_contiguous_non_zero_indices(
 
 func.func @transfer_write_leading_dynamic_dims(
     %vec : vector<8x4xi8>,
-    %arg : memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>,
+    %mem : memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>,
     %idx_1 : index,
     %idx_2 : index) {
 
   %c0 = arith.constant 0 : index
-  vector.transfer_write %vec, %arg[%idx_1, %idx_2, %c0, %c0] {in_bounds = [true, true]} :
+  vector.transfer_write %vec, %mem[%idx_1, %idx_2, %c0, %c0] {in_bounds = [true, true]} :
     vector<8x4xi8>, memref<?x?x8x4xi8, strided<[?, 32, 4, 1], offset: ?>>
   return
 }
 
 // CHECK-LABEL: func @transfer_write_leading_dynamic_dims
-// CHECK-SAME:    %[[ARG0:.+]]: vector<8x4xi8>, %[[ARG1:.+]]: memref<?x?x8x4xi8, {{.+}}>, %[[ARG2:.+]]: index, %[[ARG3:.+]]: index
+// CHECK-SAME:    %[[VEC:.+]]: vector<8x4xi8>, %[[MEM:.+]]: memref<?x?x8x4xi8, {{.+}}>, %[[ARG2:.+]]: index, %[[ARG3:.+]]: index
 // CHECK:         %[[C0:.+]] = arith.constant 0 : index
-// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[ARG1]] {{\[}}[0], [1], [2, 3]{{\]}}
+// CHECK:         %[[COLLAPSED:.+]] = memref.collapse_shape %[[MEM]] {{\[}}[0], [1], [2, 3]{{\]}}
 // CHECK-SAME:      : memref<?x?x8x4xi8, {{.+}}> into memref<?x?x32xi8, {{.+}}>
-// CHECK:         %[[VEC1D:.+]] = vector.shape_cast %[[ARG0]] : vector<8x4xi8> to vector<32xi8>
+// CHECK:         %[[VEC1D:.+]] = vector.shape_cast %[[VEC]] : vector<8x4xi8> to vector<32xi8>
 // CHECK:         vector.transfer_write %[[VEC1D]], %[[COLLAPSED]]
 // CHECK-SAME:      [%[[ARG2]], %[[ARG3]], %[[C0]]]
 // CHECK-SAME:      {in_bounds = [true]}
@@ -415,11 +419,11 @@ func.func @negative_transfer_write_dynamic_to_flatten(
     %idx_1: index,
     %idx_2: index,
     %vec : vector<1x2x6xi32>,
-    %arg: memref<1x?x4x6xi32>) {
+    %mem: memref<1x?x4x6xi32>) {
 
   %c0 = arith.constant 0 : index
   %c0_i32 = arith.constant 0 : i32
-  vector.transfer_write %vec, %arg[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true, true]} :
+  vector.transfer_write %vec, %mem[%c0, %idx_1, %idx_2, %c0] {in_bounds = [true, true, true]} :
     vector<1x2x6xi32>, memref<1x?x4x6xi32>
   return
 }
@@ -437,12 +441,12 @@ func.func @negative_transfer_write_dynamic_to_flatten(
 // memref.
 
 func.func @transfer_write_dims_mismatch_non_contiguous_slice(
-    %arg : memref<5x4x3x2xi8>,
+    %...
[truncated]

@nujaa (Contributor) left a comment

Some went through the net hehe. Thanks for improving the codebase.

-func.func @fold_unit_dim_add_basic(%arg0 : vector<1x8xi32>) -> vector<1x8xi32> {
-  %add = arith.addi %arg0, %arg0 : vector<1x8xi32>
+func.func @fold_unit_dim_add_basic(%vec_0 : vector<1x8xi32>) -> vector<1x8xi32> {
+  %add = arith.addi %vec_0, %vec_0 : vector<1x8xi32>
   return %add : vector<1x8xi32>
 }
 // CHECK-LABEL: func.func @fold_unit_dim_add_basic(
Contributor:

I suppose getting rid of the %[[VAL_X:.*]] naming will be handled as part of:
// TODO: Align naming and format with e.g. vector-transfer-permutation-lowering.mlir ?

Contributor Author:

We may need to accept a tiny bit of inconsistency here ... 😅

IMHO, for small tests, there's not much harm when using VAL_X - it's still possible to follow what's going on. However, for larger tests, these variables make analysing tests super hard.

@banach-space
Contributor Author

Some went through the net hehe.

That's putting it mildly 😂 I really appreciate your thorough reviews, thank you!

@banach-space banach-space force-pushed the andrzej/refactor_xfer_flatten branch from 1392654 to 9fd0e33 Compare August 1, 2024 16:34
@nujaa (Contributor) left a comment

We may need to accept a tiny bit of inconsistency here ... 😅

Am I over doing it ?
LGTM. 🎆

@banach-space
Contributor Author

We may need to accept a tiny bit of inconsistency here ... 😅

Am I over doing it ?

Not at all - you are being very thorough and that's greatly appreciated 🙏🏻 And in general, I suggest that you always trust your own judgement when reviewing.

If there's something that I have limited bandwidth for I just try to be honest about it. It's then for the reviewer to decide how "critical" a particular request/suggestion is. Sometimes it's OK to leave things for later (e.g. a follow-up patch) or even ignore.

Thanks again for your help with this!

@banach-space banach-space merged commit 8bfa089 into llvm:main Aug 2, 2024
7 checks passed
@banach-space banach-space deleted the andrzej/refactor_xfer_flatten branch August 9, 2024 07:17