[mlir][sparse] Migrate more tests to use new syntax #66443


Merged 1 commit into llvm:main from migrate3 on Sep 14, 2023

Conversation

@yinying-lisa-li (Contributor) commented Sep 14, 2023

**Dense**
`lvlTypes = [ "dense", "dense" ]` to `map = (d0, d1) -> (d0 : dense, d1 : dense)`
`lvlTypes = [ "dense", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : dense, d0 : dense)`

**DCSR**
`lvlTypes = [ "compressed", "compressed" ]` to `map = (d0, d1) -> (d0 : compressed, d1 : compressed)`

**DCSC**
`lvlTypes = [ "compressed", "compressed" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : compressed)`

**Block Row**
`lvlTypes = [ "compressed", "dense" ]` to `map = (d0, d1) -> (d0 : compressed, d1 : dense)`

**Block Column**
`lvlTypes = [ "compressed", "dense" ], dimToLvl = affine_map<(i,j) -> (j,i)>` to `map = (d0, d1) -> (d1 : compressed, d0 : dense)`

This is an ongoing effort: #66146, #66309
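For the formats listed above the rewrite is mechanical: each level keeps its type, and a permutation `dimToLvl` only changes which dimension feeds each level. A minimal Python sketch of that conversion (a hypothetical helper for illustration, not the migration tooling actually used for this PR):

```python
# Hypothetical helper (not the actual migration tool): renders the legacy
# lvlTypes (+ optional permutation dimToLvl) fields in the new unified
# `map` syntax, for the permutation-only cases listed above.

def to_new_map(lvl_types, perm=None):
    rank = len(lvl_types)
    # An absent dimToLvl is the identity permutation.
    perm = perm if perm is not None else list(range(rank))
    dims = ", ".join(f"d{i}" for i in range(rank))
    # Level l stores dimension perm[l] with level type lvl_types[l].
    lvls = ", ".join(f"d{perm[l]} : {t}" for l, t in enumerate(lvl_types))
    return f"map = ({dims}) -> ({lvls})"

# Dense (identity) and DCSC (transposed) cases from the list above:
print(to_new_map(["dense", "dense"]))
print(to_new_map(["compressed", "compressed"], perm=[1, 0]))
```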

@llvmbot llvmbot added mlir:core MLIR Core Infrastructure mlir:sparse Sparse compiler in MLIR mlir mlir:bufferization Bufferization infrastructure labels Sep 14, 2023
@llvmbot (Member) commented Sep 14, 2023

@llvm/pr-subscribers-mlir-bufferization
@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-sparse

@llvm/pr-subscribers-mlir-core

Patch is 63.59 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66443.diff

69 Files Affected:

  • (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td (+1-2)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp (+1-2)
  • (modified) mlir/test/Dialect/Bufferization/invalid.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/codegen.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/dense.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (+8-8)
  • (modified) mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/one_trip.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/post_rewriting.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/pre_rewriting.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/roundtrip.mlir (+13-13)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_2d.mlir (+3-3)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_affine.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_broadcast.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_concat.mlir (+3-9)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_concat_codegen.mlir (+3-6)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_conv_2d_slice_based.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_expand.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_fill_zero.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_index.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_kernels.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_out.mlir (+1-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_parallel.mlir (+2-2)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_reshape.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_scalars.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_sddmm.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_sddmm_org.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_tensor_reshape.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_transpose.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir (+2-4)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir (+5-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir (+5-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir (+5-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir (+5-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_collapse_shape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_constant_to_sparse_tensor.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_dyn.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_ptr.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand_shape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index_dense.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir (+3-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul_slice.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matrix_ops.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_mult_elt.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_simple.mlir (+1-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_quantized_matmul.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reshape.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_matmul.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_semiring_select.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir (+5-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_bf16.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_c32.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_f16.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose.mlir (+2-3)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir (+1-1)
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
index 85fd07e3883ea3d..8b79fbf726495ad 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorAttrDefs.td
@@ -206,8 +206,7 @@ def SparseTensorEncodingAttr : SparseTensor_Attr<"SparseTensorEncoding",
 
     // Doubly compressed sparse column storage with specific bitwidths.
     #DCSC = #sparse_tensor.encoding<{
-      lvlTypes = [ "compressed", "compressed" ],
-      dimToLvl = affine_map<(i, j) -> (j, i)>,
+      map = (d0, d1) -> (d1 : compressed, d0 : compressed),
       posWidth = 32,
       crdWidth = 8
     }>
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
index 770349d6d1db0f1..fee32a5717f62ae 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp
@@ -351,8 +351,7 @@ static bool findDepIdxSet(Merger &merger, TensorId tensor, Level lvl,
 /// Get the total number of compound affine expressions in the
 /// `getMatchingIndexingMap` for the given tensor.  For the following inputs:
 ///
-/// map = (d0, d1, d2) => (d0 + d1, d2)
-/// lvlTypes = ["compressed", "compressed"]
+/// map = (d0, d1, d2) => (d0 + d1 : compressed, d2 : compressed)
 ///
 /// Returns 1 (because the first level is compressed and its corresponding
 /// indexing-expression is `d0 + d1`)
diff --git a/mlir/test/Dialect/Bufferization/invalid.mlir b/mlir/test/Dialect/Bufferization/invalid.mlir
index 7c92193ab068dba..ad3e657cd37e38e 100644
--- a/mlir/test/Dialect/Bufferization/invalid.mlir
+++ b/mlir/test/Dialect/Bufferization/invalid.mlir
@@ -58,7 +58,7 @@ func.func @escape_attr_non_bufferizable(%m0: memref<?xf32>) {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>
+#DCSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
 
 func.func @sparse_alloc_direct_return() -> tensor<20x40xf32, #DCSR> {
   // expected-error @+1{{sparse tensor allocation should not escape function}}
@@ -68,7 +68,7 @@ func.func @sparse_alloc_direct_return() -> tensor<20x40xf32, #DCSR> {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>
+#DCSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
 
 func.func private @foo(tensor<20x40xf32, #DCSR>) -> ()
 
diff --git a/mlir/test/Dialect/SparseTensor/codegen.mlir b/mlir/test/Dialect/SparseTensor/codegen.mlir
index 43d86a9f158f03c..c5061c40eb0b16b 100644
--- a/mlir/test/Dialect/SparseTensor/codegen.mlir
+++ b/mlir/test/Dialect/SparseTensor/codegen.mlir
@@ -9,13 +9,13 @@
 }>
 
 #Dense2D = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense", "dense" ],
+  map = (d0, d1) -> (d0 : dense, d1 : dense),
   crdWidth = 64,
   posWidth = 32
 }>
 
 #Row = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "dense" ],
+  map = (d0, d1) -> (d0 : compressed, d1 : dense),
   crdWidth = 64,
   posWidth = 32
 }>
@@ -35,7 +35,7 @@
 }>
 
 #DCSR = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "compressed" ],
+  map = (d0, d1) -> (d0 : compressed, d1 : compressed),
   crdWidth = 64,
   posWidth = 32
 }>
diff --git a/mlir/test/Dialect/SparseTensor/dense.mlir b/mlir/test/Dialect/SparseTensor/dense.mlir
index 8d37a8d7b662588..485a5cbb178af94 100644
--- a/mlir/test/Dialect/SparseTensor/dense.mlir
+++ b/mlir/test/Dialect/SparseTensor/dense.mlir
@@ -7,7 +7,7 @@
 // latter class is linearized into one-dimensional buffers that are backed
 // by the runtime support library.
 
-#DenseMatrix = #sparse_tensor.encoding<{ lvlTypes = [ "dense", "dense"  ] }>
+#DenseMatrix = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : dense) }>
 
 #trait_2d = {
   indexing_maps = [
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 3091b0b8505d220..8e25cf06bcb6212 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -371,7 +371,7 @@ func.func @sparse_convert_unranked(%arg0: tensor<*xf32>) -> tensor<10xf32> {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 
 func.func @sparse_convert_rank_mismatch(%arg0: tensor<10x10xf64, #DCSR>) -> tensor<?xf64> {
   // expected-error@+1 {{unexpected conversion mismatch in rank}}
@@ -714,7 +714,7 @@ func.func @invalid_concat_size_mismatch(%arg0: tensor<2x4xf64, #DC>,
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
   // expected-error@+1 {{Unmatched number of arguments in the block}}
   sparse_tensor.foreach in %arg0 : tensor<2x4xf64, #DCSR> do {
@@ -725,7 +725,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
   // expected-error@+1 {{Expecting Index type for argument at index 1}}
   sparse_tensor.foreach in %arg0 : tensor<2x4xf64, #DCSR> do {
@@ -736,7 +736,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
   // expected-error@+1 {{Unmatched element type between input tensor and block argument}}
   sparse_tensor.foreach in %arg0 : tensor<2x4xf64, #DCSR> do {
@@ -747,7 +747,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
   // expected-error@+1 {{Unmatched element type between input tensor and block argument}}
   sparse_tensor.foreach in %arg0 : tensor<2x4xf64, #DCSR> do {
@@ -758,7 +758,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>) -> () {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>, %arg1: f32) -> () {
   // expected-error@+1 {{Mismatch in number of init arguments and results}}
   sparse_tensor.foreach in %arg0 init(%arg1) : tensor<2x4xf64, #DCSR>, f32 do {
@@ -769,7 +769,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>, %arg1: f32) -> (
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>, %arg1: f32) -> () {
   // expected-error@+1 {{Mismatch in types of init arguments and results}}
   %1 = sparse_tensor.foreach in %arg0 init(%arg1) : tensor<2x4xf64, #DCSR>, f32 -> i32 do {
@@ -780,7 +780,7 @@ func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>, %arg1: f32) -> (
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#DCSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 func.func @sparse_tensor_foreach(%arg0: tensor<2x4xf64, #DCSR>, %arg1: f32) -> () {
   // expected-error@+1 {{Mismatch in types of yield values and results}}
   %1 = sparse_tensor.foreach in %arg0 init(%arg1) : tensor<2x4xf64, #DCSR>, f32 -> f32 do {
diff --git a/mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir b/mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir
index 0ccce5121ce1ae2..fc9695f8c3c9870 100644
--- a/mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir
+++ b/mlir/test/Dialect/SparseTensor/one_shot_bufferize_tensor_copy_insertion.mlir
@@ -2,8 +2,7 @@
 // RUN: mlir-opt %s -test-tensor-copy-insertion="bufferize-function-boundaries allow-return-allocs" | FileCheck %s --check-prefix=CHECK-FUNC
 
 #DCSR = #sparse_tensor.encoding<{
-  lvlTypes = [ "compressed", "compressed" ],
-  dimToLvl = affine_map<(i,j) -> (i,j)>
+  map = (d0, d1) -> (d0 : compressed, d1 : compressed)
 }>
 
 // CHECK-LABEL: func @bufferization_alloc_tensor
diff --git a/mlir/test/Dialect/SparseTensor/one_trip.mlir b/mlir/test/Dialect/SparseTensor/one_trip.mlir
index ad6816616c8bc4e..5a15be651c89268 100644
--- a/mlir/test/Dialect/SparseTensor/one_trip.mlir
+++ b/mlir/test/Dialect/SparseTensor/one_trip.mlir
@@ -1,7 +1,7 @@
 // RUN: mlir-opt %s -sparsification -cse | FileCheck %s
 
 #Dense = #sparse_tensor.encoding<{
-  lvlTypes = [ "dense" , "dense" ]
+  map = (d0, d1) -> (d0 : dense, d1 : dense)
 }>
 
 #trait_scale = {
diff --git a/mlir/test/Dialect/SparseTensor/post_rewriting.mlir b/mlir/test/Dialect/SparseTensor/post_rewriting.mlir
index ab334496aaad5af..93fc610b64b3359 100644
--- a/mlir/test/Dialect/SparseTensor/post_rewriting.mlir
+++ b/mlir/test/Dialect/SparseTensor/post_rewriting.mlir
@@ -5,7 +5,7 @@
 }>
 
 #SparseMatrix = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed", "compressed"]
+  map = (d0, d1) -> (d0 : compressed, d1 : compressed)
 }>
 
 // CHECK-LABEL: func.func @expand_dense(
diff --git a/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir b/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
index 0c5f32b0b55102b..1245cb0eeed3c55 100644
--- a/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
+++ b/mlir/test/Dialect/SparseTensor/pre_rewriting.mlir
@@ -9,7 +9,7 @@
 }>
 
 #DCSR = #sparse_tensor.encoding<{
-  lvlTypes = ["compressed", "compressed"]
+  map = (d0, d1) -> (d0 : compressed, d1 : compressed)
 }>
 
 #Slice = #sparse_tensor.encoding<{
diff --git a/mlir/test/Dialect/SparseTensor/roundtrip.mlir b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
index cb178e4257b1cea..d3f07fd298d72e9 100644
--- a/mlir/test/Dialect/SparseTensor/roundtrip.mlir
+++ b/mlir/test/Dialect/SparseTensor/roundtrip.mlir
@@ -283,7 +283,7 @@ func.func @sparse_noe(%arg0: tensor<128xf64, #SparseVector>) -> index {
 
 // -----
 
-#DenseMatrix = #sparse_tensor.encoding<{lvlTypes = ["dense","dense"]}>
+#DenseMatrix = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : dense)}>
 
 // CHECK-LABEL: func @sparse_load(
 //  CHECK-SAME: %[[A:.*]]: tensor<16x32xf64, #{{.*}}>)
@@ -296,7 +296,7 @@ func.func @sparse_load(%arg0: tensor<16x32xf64, #DenseMatrix>) -> tensor<16x32xf
 
 // -----
 
-#DenseMatrix = #sparse_tensor.encoding<{lvlTypes = ["dense","dense"]}>
+#DenseMatrix = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : dense)}>
 
 // CHECK-LABEL: func @sparse_load_ins(
 //  CHECK-SAME: %[[A:.*]]: tensor<16x32xf64, #{{.*}}>)
@@ -364,7 +364,7 @@ func.func @sparse_push_back_n(%arg0: index, %arg1: memref<?xf64>, %arg2: f64, %a
 
 // -----
 
-#SparseMatrix = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#SparseMatrix = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 
 // CHECK-LABEL: func @sparse_expansion(
 //  CHECK-SAME: %[[A:.*]]: tensor<8x8xf64, #sparse_tensor.encoding<{{.*}}>>)
@@ -378,7 +378,7 @@ func.func @sparse_expansion(%tensor: tensor<8x8xf64, #SparseMatrix>) -> index {
 
 // -----
 
-#SparseMatrix = #sparse_tensor.encoding<{lvlTypes = ["compressed", "compressed"]}>
+#SparseMatrix = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
 
 // CHECK-LABEL: func @sparse_compression(
 //  CHECK-SAME: %[[A0:.*0]]: memref<?xf64>,
@@ -402,7 +402,7 @@ func.func @sparse_compression(%values: memref<?xf64>,
 
 // -----
 
-#SparseMatrix = #sp...

@yinying-lisa-li yinying-lisa-li merged commit 2a07f0f into llvm:main Sep 14, 2023
@yinying-lisa-li yinying-lisa-li deleted the migrate3 branch September 14, 2023 23:20
yinying-lisa-li added a commit that referenced this pull request Sep 15, 2023
**COO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)`
`lvlTypes = [ "compressed_nu_no", "singleton_no" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))`

**SortedCOO**
`lvlTypes = [ "compressed_nu", "singleton" ]` to `map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)`

**BCOO**
`lvlTypes = [ "dense", "compressed_hi_nu", "singleton" ]` to `map = (d0, d1, d2) -> (d0 : dense, d1 : compressed(nonunique, high), d2 : singleton)`

**BCSR**
`lvlTypes = [ "compressed", "compressed", "dense", "dense" ], dimToLvl = affine_map<(d0, d1) -> (d0 floordiv 2, d1 floordiv 3, d0 mod 2, d1 mod 3)>` to
`map = ( i, j ) ->
      ( i floordiv 2 : compressed,
        j floordiv 3 : compressed,
        i mod 2 : dense,
        j mod 3 : dense
      )`

**Tensor and other supported formats (e.g. CCC, CDC, CCCC)**

Currently, ELL and slice are not yet supported in the new syntax; the CHECK tests will be updated once printing is set to output the new syntax.

Previous PRs: #66146, #66309, #66443
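The BCSR entry is the one non-permutation case in the list: the affine map tiles the matrix into 2x3 blocks, storing block coordinates in the outer compressed levels and in-block offsets in the inner dense levels. A small illustrative sketch (not part of this PR) of that dim-to-lvl coordinate mapping and its inverse:

```python
# Illustrative sketch of the BCSR dimToLvl shown above: a matrix
# coordinate (i, j) splits into 2x3 block coordinates plus offsets.

BLOCK_I, BLOCK_J = 2, 3  # block sizes from the affine map

def dim_to_lvl(i, j):
    # (i floordiv 2, j floordiv 3, i mod 2, j mod 3)
    return (i // BLOCK_I, j // BLOCK_J, i % BLOCK_I, j % BLOCK_J)

def lvl_to_dim(bi, bj, oi, oj):
    # Inverse mapping: recombine each block coordinate with its offset.
    return (bi * BLOCK_I + oi, bj * BLOCK_J + oj)

# Element (5, 7) lands in block (2, 2) at in-block offset (1, 1):
print(dim_to_lvl(5, 7))
```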
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023
zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023