[mlir][sparse] test four row/col major versions of BSR #72898


Merged 2 commits into llvm:main on Nov 20, 2023

Conversation

@aartbik (Contributor) commented Nov 20, 2023

Note: this is a redo of #72712, which was reverted due to timeouts on the bot. I have timed the tests under various settings, and this test does not even reach the top 20 of integration tests by runtime. To be safe, I removed the SIMD version of the tests, keeping just the libgen/direct-IR paths (which are the most important for us to test).

I will also keep an eye on https://lab.llvm.org/buildbot/#/builders/264/builds after submitting to make sure there is no repeat.

@llvmbot added the labels mlir:sparse (Sparse compiler in MLIR) and mlir on Nov 20, 2023
@llvmbot (Member) commented Nov 20, 2023

@llvm/pr-subscribers-mlir

Author: Aart Bik (aartbik)

Changes

Note: this is a redo of #72712, which was reverted due to timeouts on the bot. I have timed the tests under various settings, and this test does not even reach the top 20 of integration tests by runtime. To be safe, I removed the SIMD version of the tests, keeping just the libgen/direct-IR paths (which are the most important for us to test).

I will also keep an eye on https://lab.llvm.org/buildbot/#/builders/264/builds after submitting to make sure there is no repeat.


Full diff: https://github.com/llvm/llvm-project/pull/72898.diff

1 File Affected:

  • (added) mlir/test/Integration/Dialect/SparseTensor/CPU/block_majors.mlir (+174)
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/block_majors.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/block_majors.mlir
new file mode 100755
index 000000000000000..780c1e8b3f64a7a
--- /dev/null
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/block_majors.mlir
@@ -0,0 +1,174 @@
+//--------------------------------------------------------------------------------------------------
+// WHEN CREATING A NEW TEST, PLEASE JUST COPY & PASTE WITHOUT EDITS.
+//
+// Set-up that's shared across all tests in this directory. In principle, this
+// config could be moved to lit.local.cfg. However, there are downstream users that
+// do not use these LIT config files. Hence why this is kept inline.
+//
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
+// DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
+// DEFINE: %{run_opts} = -e main -entry-point-result=void
+// DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
+// DEFINE: %{run_sve} = %mcr_aarch64_cmd --march=aarch64 --mattr="+sve" %{run_opts} %{run_libs}
+//
+// DEFINE: %{env} =
+//--------------------------------------------------------------------------------------------------
+
+// RUN: %{compile} | %{run} | FileCheck %s
+//
+// Do the same run, but now with direct IR generation.
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false
+// RUN: %{compile} | %{run} | FileCheck %s
+
+#BSR_row_rowmajor = #sparse_tensor.encoding<{
+  map = (i, j) ->
+    ( i floordiv 3 : dense
+    , j floordiv 4 : compressed
+    , i mod 3 : dense
+    , j mod 4 : dense
+    )
+}>
+
+#BSR_row_colmajor = #sparse_tensor.encoding<{
+  map = (i, j) ->
+    ( i floordiv 3 : dense
+    , j floordiv 4 : compressed
+    , j mod 4 : dense
+    , i mod 3 : dense
+    )
+}>
+
+#BSR_col_rowmajor = #sparse_tensor.encoding<{
+  map = (i, j) ->
+    ( j floordiv 4 : dense
+    , i floordiv 3 : compressed
+    , i mod 3 : dense
+    , j mod 4 : dense
+    )
+}>
+
+#BSR_col_colmajor = #sparse_tensor.encoding<{
+  map = (i, j) ->
+    ( j floordiv 4 : dense
+    , i floordiv 3 : compressed
+    , j mod 4 : dense
+    , i mod 3 : dense
+    )
+}>
+
+//
+// Example 3x4 block storage of a 6x16 matrix:
+//
+//  +---------+---------+---------+---------+
+//  | 1 2 . . | . . . . | . . . . | . . . . |
+//  | . . . . | . . . . | . . . . | . . . . |
+//  | . . . 3 | . . . . | . . . . | . . . . |
+//  +---------+---------+---------+---------+
+//  | . . . . | . . . . | 4 5 . . | . . . . |
+//  | . . . . | . . . . | . . . . | . . . . |
+//  | . . . . | . . . . | . . 6 7 | . . . . |
+//  +---------+---------+---------+---------+
+//
+// Storage for CSR block storage. Note that this essentially
+// provides CSR storage of 2x4 blocks with either row-major
+// or column-major storage within each 3x4 block of elements.
+//
+//    positions[1]   : 0 1 2
+//    coordinates[1] : 0 2
+//    values         : 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,
+//                     4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 6, 7 [row-major]
+//
+//                     1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3,
+//                     4, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 7 [col-major]
+//
+// Storage for CSC block storage. Note that this essentially
+// provides CSC storage of 4x2 blocks with either row-major
+// or column-major storage within each 3x4 block of elements.
+//
+//    positions[1]   : 0 1 1 2 2
+//    coordinates[1] : 0 1
+//    values         : 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3,
+//                     4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 6, 7 [row-major]
+//
+//                     1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3,
+//                     4, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 7 [col-major]
+//
+module {
+
+  func.func @main() {
+    %c0 = arith.constant 0   : index
+    %f0 = arith.constant 0.0 : f64
+
+    %m = arith.constant sparse<
+        [ [0, 0], [0, 1], [2, 3], [3, 8], [3, 9], [5, 10], [5, 11] ],
+        [ 1., 2., 3., 4., 5., 6., 7.]
+    > : tensor<6x16xf64>
+    %s1 = sparse_tensor.convert %m : tensor<6x16xf64> to tensor<?x?xf64, #BSR_row_rowmajor>
+    %s2 = sparse_tensor.convert %m : tensor<6x16xf64> to tensor<?x?xf64, #BSR_row_colmajor>
+    %s3 = sparse_tensor.convert %m : tensor<6x16xf64> to tensor<?x?xf64, #BSR_col_rowmajor>
+    %s4 = sparse_tensor.convert %m : tensor<6x16xf64> to tensor<?x?xf64, #BSR_col_colmajor>
+
+    // CHECK:      ( 0, 1, 2 )
+    // CHECK-NEXT: ( 0, 2 )
+    // CHECK-NEXT: ( 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 6, 7 )
+    %pos1 = sparse_tensor.positions %s1 {level = 1 : index } : tensor<?x?xf64, #BSR_row_rowmajor> to memref<?xindex>
+    %vecp1 = vector.transfer_read %pos1[%c0], %c0 : memref<?xindex>, vector<3xindex>
+    vector.print %vecp1 : vector<3xindex>
+    %crd1 = sparse_tensor.coordinates %s1 {level = 1 : index } : tensor<?x?xf64, #BSR_row_rowmajor> to memref<?xindex>
+    %vecc1 = vector.transfer_read %crd1[%c0], %c0 : memref<?xindex>, vector<2xindex>
+    vector.print %vecc1 : vector<2xindex>
+    %val1 = sparse_tensor.values %s1 : tensor<?x?xf64, #BSR_row_rowmajor> to memref<?xf64>
+    %vecv1 = vector.transfer_read %val1[%c0], %f0 : memref<?xf64>, vector<24xf64>
+    vector.print %vecv1 : vector<24xf64>
+
+    // CHECK-NEXT: ( 0, 1, 2 )
+    // CHECK-NEXT: ( 0, 2 )
+    // CHECK-NEXT: ( 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 4, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 7 )
+    %pos2 = sparse_tensor.positions %s2 {level = 1 : index } : tensor<?x?xf64, #BSR_row_colmajor> to memref<?xindex>
+    %vecp2 = vector.transfer_read %pos2[%c0], %c0 : memref<?xindex>, vector<3xindex>
+    vector.print %vecp2 : vector<3xindex>
+    %crd2 = sparse_tensor.coordinates %s2 {level = 1 : index } : tensor<?x?xf64, #BSR_row_colmajor> to memref<?xindex>
+    %vecc2 = vector.transfer_read %crd2[%c0], %c0 : memref<?xindex>, vector<2xindex>
+    vector.print %vecc2 : vector<2xindex>
+    %val2 = sparse_tensor.values %s2 : tensor<?x?xf64, #BSR_row_colmajor> to memref<?xf64>
+    %vecv2 = vector.transfer_read %val2[%c0], %f0 : memref<?xf64>, vector<24xf64>
+    vector.print %vecv2 : vector<24xf64>
+
+    // CHECK-NEXT: ( 0, 1, 1, 2, 2 )
+    // CHECK-NEXT: ( 0, 1 )
+    // CHECK-NEXT: ( 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 6, 7 )
+    %pos3 = sparse_tensor.positions %s3 {level = 1 : index } : tensor<?x?xf64, #BSR_col_rowmajor> to memref<?xindex>
+    %vecp3 = vector.transfer_read %pos3[%c0], %c0 : memref<?xindex>, vector<5xindex>
+    vector.print %vecp3 : vector<5xindex>
+    %crd3 = sparse_tensor.coordinates %s3 {level = 1 : index } : tensor<?x?xf64, #BSR_col_rowmajor> to memref<?xindex>
+    %vecc3 = vector.transfer_read %crd3[%c0], %c0 : memref<?xindex>, vector<2xindex>
+    vector.print %vecc3 : vector<2xindex>
+    %val3 = sparse_tensor.values %s3 : tensor<?x?xf64, #BSR_col_rowmajor> to memref<?xf64>
+    %vecv3 = vector.transfer_read %val3[%c0], %f0 : memref<?xf64>, vector<24xf64>
+    vector.print %vecv3 : vector<24xf64>
+
+    // CHECK-NEXT: ( 0, 1, 1, 2, 2 )
+    // CHECK-NEXT: ( 0, 1 )
+    // CHECK-NEXT: ( 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 4, 0, 0, 5, 0, 0, 0, 0, 6, 0, 0, 7 )
+    %pos4 = sparse_tensor.positions %s4 {level = 1 : index } : tensor<?x?xf64, #BSR_col_colmajor> to memref<?xindex>
+    %vecp4 = vector.transfer_read %pos4[%c0], %c0 : memref<?xindex>, vector<5xindex>
+    vector.print %vecp4 : vector<5xindex>
+    %crd4 = sparse_tensor.coordinates %s4 {level = 1 : index } : tensor<?x?xf64, #BSR_col_colmajor> to memref<?xindex>
+    %vecc4 = vector.transfer_read %crd4[%c0], %c0 : memref<?xindex>, vector<2xindex>
+    vector.print %vecc4 : vector<2xindex>
+    %val4 = sparse_tensor.values %s4 : tensor<?x?xf64, #BSR_col_colmajor> to memref<?xf64>
+    %vecv4 = vector.transfer_read %val4[%c0], %f0 : memref<?xf64>, vector<24xf64>
+    vector.print %vecv4 : vector<24xf64>
+
+    // Release the resources.
+    bufferization.dealloc_tensor %s1: tensor<?x?xf64, #BSR_row_rowmajor>
+    bufferization.dealloc_tensor %s2: tensor<?x?xf64, #BSR_row_colmajor>
+    bufferization.dealloc_tensor %s3: tensor<?x?xf64, #BSR_col_rowmajor>
+    bufferization.dealloc_tensor %s4: tensor<?x?xf64, #BSR_col_colmajor>
+
+    return
+  }
+}
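As a cross-check of the CHECK lines in the test above, the block storage can be reconstructed by hand: group the nonzeros by block coordinate `(i floordiv 3, j floordiv 4)`, then lay out each 3x4 block either row-major (`i mod 3`, then `j mod 4`) or column-major. The following Python sketch is illustrative only (it is not part of the PR; `block_csr` and all names are hypothetical) and reproduces the `#BSR_row_rowmajor` / `#BSR_row_colmajor` positions, coordinates, and values arrays for the 6x16 example matrix:

```python
# Illustrative sketch (not from the PR): rebuild the level-1 positions and
# coordinates plus the values array of the block-CSR encodings for the 6x16
# example matrix, using 3x4 blocks as in the encodings above.

BM, BN = 3, 4                      # block shape from the encodings
entries = {                        # nonzeros from the arith.constant sparse<...>
    (0, 0): 1.0, (0, 1): 2.0, (2, 3): 3.0,
    (3, 8): 4.0, (3, 9): 5.0, (5, 10): 6.0, (5, 11): 7.0,
}

# Bucket nonzeros by block coordinate (i floordiv 3, j floordiv 4),
# keyed inside each block by the in-block coordinate (i mod 3, j mod 4).
blocks = {}
for (i, j), v in entries.items():
    blocks.setdefault((i // BM, j // BN), {})[(i % BM, j % BN)] = v

def block_csr(num_block_rows, row_major=True):
    """Block-CSR storage: dense block rows, compressed block columns."""
    positions, coordinates, values = [0], [], []
    for bi in range(num_block_rows):
        for bj in sorted(c for (r, c) in blocks if r == bi):
            coordinates.append(bj)
            blk = blocks[(bi, bj)]
            # Row-major is (i mod 3, j mod 4) order; col-major swaps the loops.
            order = ([(r, c) for r in range(BM) for c in range(BN)] if row_major
                     else [(r, c) for c in range(BN) for r in range(BM)])
            values.extend(blk.get(rc, 0.0) for rc in order)
        positions.append(len(coordinates))
    return positions, coordinates, values

pos, crd, val = block_csr(6 // BM)
print(pos)        # [0, 1, 2]
print(crd)        # [0, 2]
```

Running this matches the first three CHECK lines of the test; passing `row_major=False` reproduces the col-major values array of `#BSR_row_colmajor`. The CSC-of-blocks variants follow the same idea with the roles of `i` and `j` swapped at the block level.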

@llvmbot (Member) commented Nov 20, 2023

@llvm/pr-subscribers-mlir-sparse

@aartbik (Contributor, Author) commented Nov 20, 2023

Just to play it super safe, I broke the test up into foo1, foo2, foo3, and foo4 to make sure we are not hitting any IR-size issues.

@aartbik aartbik merged commit 6352a07 into llvm:main Nov 20, 2023
@aartbik aartbik deleted the bik branch November 20, 2023 20:28
@aartbik (Contributor, Author) commented Nov 20, 2023

Submitted... keeping an eye on https://lab.llvm.org/buildbot/#/builders/264

@aartbik (Contributor, Author) commented Nov 20, 2023

Looks like we are in good shape \o/

https://lab.llvm.org/buildbot/#/builders/264/builds/4503

@aartbik (Contributor, Author) commented Nov 20, 2023

And all bots are happy with this change:

https://lab.llvm.org/buildbot/#/changes/115712
