
[mlir][sparse] best effort finalization of escaping empty sparse tensors #85482


Merged

aartbik merged 1 commit from the bik branch into llvm:main on Mar 15, 2024

Conversation

aartbik
Contributor

@aartbik commented Mar 15, 2024

This change lifts the restriction that purely allocated empty sparse tensors cannot escape the method. Instead, it makes a best effort to add a finalizing operation before the escape.

This assumes that:
(1) we never build sparse tensors across method boundaries
    (e.g., allocate in one method, insert in another);
(2) if the empty allocation has other uses in the same
    method, that op will either fail or perform the
    finalization for us.

This is best-effort, but it fixes some very obvious missing cases.
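As an illustration, here is a minimal before/after sketch of what the new GuardSparseAlloc staging pattern does for an allocation that escapes via a return (the function name @escape is made up for this example; the #SV encoding is the one used in the new integration test below):

  #SV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>

  // Before: the empty sparse allocation escapes the function directly.
  func.func @escape() -> tensor<10xf32, #SV> {
    %0 = bufferization.alloc_tensor() : tensor<10xf32, #SV>
    return %0 : tensor<10xf32, #SV>
  }

  // After the rewrite: a finalizing sparse_tensor.load (with hasInserts)
  // is inserted right after the allocation and the escaping use is
  // redirected to it, so the underlying storage is left in a proper
  // state before crossing the method boundary.
  func.func @escape() -> tensor<10xf32, #SV> {
    %0 = bufferization.alloc_tensor() : tensor<10xf32, #SV>
    %1 = sparse_tensor.load %0 hasInserts : tensor<10xf32, #SV>
    return %1 : tensor<10xf32, #SV>
  }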

@llvmbot added the mlir:sparse (Sparse compiler in MLIR), mlir, and mlir:bufferization (Bufferization infrastructure) labels on Mar 15, 2024
@llvmbot
Member

llvmbot commented Mar 15, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-sparse

Author: Aart Bik (aartbik)



Full diff: https://github.com/llvm/llvm-project/pull/85482.diff

5 Files Affected:

  • (modified) mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp (-10)
  • (modified) mlir/lib/Dialect/SparseTensor/Transforms/StageSparseOperations.cpp (+33-1)
  • (modified) mlir/test/Dialect/Bufferization/invalid.mlir (-23)
  • (modified) mlir/test/Dialect/SparseTensor/invalid.mlir (-10)
  • (added) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_empty.mlir (+144)
diff --git a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
index 34a0c594a5a5a3..2b226c7a1207cf 100644
--- a/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
+++ b/mlir/lib/Dialect/Bufferization/IR/BufferizationOps.cpp
@@ -252,16 +252,6 @@ LogicalResult AllocTensorOp::verify() {
            << getType().getNumDynamicDims() << " dynamic sizes";
   if (getCopy() && getCopy().getType() != getType())
     return emitError("expected that `copy` and return type match");
-
-  // For sparse tensor allocation, we require that none of its
-  // uses escapes the function boundary directly.
-  if (sparse_tensor::getSparseTensorEncoding(getType())) {
-    for (auto &use : getOperation()->getUses())
-      if (isa<func::ReturnOp, func::CallOp, func::CallIndirectOp>(
-              use.getOwner()))
-        return emitError("sparse tensor allocation should not escape function");
-  }
-
   return success();
 }
 
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/StageSparseOperations.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/StageSparseOperations.cpp
index 5b4395cc31a46b..c370d104e09858 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/StageSparseOperations.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/StageSparseOperations.cpp
@@ -7,6 +7,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "mlir/Dialect/Bufferization/IR/Bufferization.h"
+#include "mlir/Dialect/Func/IR/FuncOps.h"
 #include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"
 #include "mlir/Dialect/SparseTensor/IR/SparseTensorType.h"
 #include "mlir/Dialect/SparseTensor/Transforms/Passes.h"
@@ -16,6 +17,37 @@ using namespace mlir::sparse_tensor;
 
 namespace {
 
+struct GuardSparseAlloc
+    : public OpRewritePattern<bufferization::AllocTensorOp> {
+  using OpRewritePattern<bufferization::AllocTensorOp>::OpRewritePattern;
+
+  LogicalResult matchAndRewrite(bufferization::AllocTensorOp op,
+                                PatternRewriter &rewriter) const override {
+    // Only rewrite sparse allocations.
+    if (!getSparseTensorEncoding(op.getResult().getType()))
+      return failure();
+
+    // Only rewrite sparse allocations that escape the method
+    // without any chance of a finalizing operation in between.
+    // Here we assume that sparse tensor setup never crosses
+    // method boundaries. The current rewriting only repairs
+    // the most obvious allocate-call/return cases.
+    if (!llvm::all_of(op->getUses(), [](OpOperand &use) {
+          return isa<func::ReturnOp, func::CallOp, func::CallIndirectOp>(
+              use.getOwner());
+        }))
+      return failure();
+
+    // Guard escaping empty sparse tensor allocations with a finalizing
+    // operation that leaves the underlying storage in a proper state
+    // before the tensor escapes across the method boundary.
+    rewriter.setInsertionPointAfter(op);
+    auto load = rewriter.create<LoadOp>(op.getLoc(), op.getResult(), true);
+    rewriter.replaceAllUsesExcept(op, load, load);
+    return success();
+  }
+};
+
 template <typename StageWithSortOp>
 struct StageUnorderedSparseOps : public OpRewritePattern<StageWithSortOp> {
   using OpRewritePattern<StageWithSortOp>::OpRewritePattern;
@@ -37,6 +69,6 @@ struct StageUnorderedSparseOps : public OpRewritePattern<StageWithSortOp> {
 } // namespace
 
 void mlir::populateStageSparseOperationsPatterns(RewritePatternSet &patterns) {
-  patterns.add<StageUnorderedSparseOps<ConvertOp>,
+  patterns.add<GuardSparseAlloc, StageUnorderedSparseOps<ConvertOp>,
                StageUnorderedSparseOps<ConcatenateOp>>(patterns.getContext());
 }
diff --git a/mlir/test/Dialect/Bufferization/invalid.mlir b/mlir/test/Dialect/Bufferization/invalid.mlir
index 83f8ef78615432..4ebdb0a8f0490e 100644
--- a/mlir/test/Dialect/Bufferization/invalid.mlir
+++ b/mlir/test/Dialect/Bufferization/invalid.mlir
@@ -26,29 +26,6 @@ func.func @alloc_tensor_copy_and_dims(%t: tensor<?xf32>, %sz: index) {
 
 // -----
 
-#DCSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
-
-func.func @sparse_alloc_direct_return() -> tensor<20x40xf32, #DCSR> {
-  // expected-error @+1{{sparse tensor allocation should not escape function}}
-  %0 = bufferization.alloc_tensor() : tensor<20x40xf32, #DCSR>
-  return %0 : tensor<20x40xf32, #DCSR>
-}
-
-// -----
-
-#DCSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed, d1 : compressed) }>
-
-func.func private @foo(tensor<20x40xf32, #DCSR>) -> ()
-
-func.func @sparse_alloc_call() {
-  // expected-error @+1{{sparse tensor allocation should not escape function}}
-  %0 = bufferization.alloc_tensor() : tensor<20x40xf32, #DCSR>
-  call @foo(%0) : (tensor<20x40xf32, #DCSR>) -> ()
-  return
-}
-
-// -----
-
 // expected-error @+1{{invalid value for 'bufferization.access'}}
 func.func private @invalid_buffer_access_type(tensor<*xf32> {bufferization.access = "foo"})
 
diff --git a/mlir/test/Dialect/SparseTensor/invalid.mlir b/mlir/test/Dialect/SparseTensor/invalid.mlir
index 48f28ef390ed53..18851f29d8eaa3 100644
--- a/mlir/test/Dialect/SparseTensor/invalid.mlir
+++ b/mlir/test/Dialect/SparseTensor/invalid.mlir
@@ -868,16 +868,6 @@ func.func @sparse_sort_coo_no_perm(%arg0: index, %arg1: memref<?xindex>) -> (mem
 
 // -----
 
-#CSR = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : dense, d1 : compressed)}>
-
-func.func @sparse_alloc_escapes(%arg0: index) -> tensor<10x?xf64, #CSR> {
-  // expected-error@+1 {{sparse tensor allocation should not escape function}}
-  %0 = bufferization.alloc_tensor(%arg0) : tensor<10x?xf64, #CSR>
-  return %0: tensor<10x?xf64, #CSR>
-}
-
-// -----
-
 #UnorderedCOO = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed(nonunique, nonordered), d1 : singleton(nonordered))}>
 #OrderedCOOPerm = #sparse_tensor.encoding<{map = (d0, d1) -> (d1 : compressed(nonunique), d0 : singleton)}>
 
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_empty.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_empty.mlir
new file mode 100755
index 00000000000000..bcd71f7bd674bd
--- /dev/null
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_empty.mlir
@@ -0,0 +1,144 @@
+//--------------------------------------------------------------------------------------------------
+// WHEN CREATING A NEW TEST, PLEASE JUST COPY & PASTE WITHOUT EDITS.
+//
+// Set-up that's shared across all tests in this directory. In principle, this
+// config could be moved to lit.local.cfg. However, there are downstream users that
+//  do not use these LIT config files. Hence why this is kept inline.
+//
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
+// DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
+// DEFINE: %{run_opts} = -e main -entry-point-result=void
+// DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
+// DEFINE: %{run_sve} = %mcr_aarch64_cmd --march=aarch64 --mattr="+sve" %{run_opts} %{run_libs}
+//
+// DEFINE: %{env} =
+//--------------------------------------------------------------------------------------------------
+
+// RUN: %{compile} | %{run} | FileCheck %s
+//
+// Do the same run, but now with direct IR generation.
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true
+// RUN: %{compile} | %{run} | FileCheck %s
+//
+// Do the same run, but now with direct IR generation and vectorization.
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// RUN: %{compile} | %{run} | FileCheck %s
+//
+// Do the same run, but now with direct IR generation and VLA vectorization.
+// RUN: %if mlir_arm_sve_tests %{ %{compile_sve} | %{run_sve} | FileCheck %s %}
+
+
+#map = affine_map<(d0) -> (d0)>
+
+#SV  = #sparse_tensor.encoding<{
+  map = (d0) -> (d0 : compressed)
+}>
+
+module {
+
+  // This directly yields an empty sparse vector.
+  func.func @empty() -> tensor<10xf32, #SV> {
+    %0 = tensor.empty() : tensor<10xf32, #SV>
+    return %0 : tensor<10xf32, #SV>
+  }
+
+  // This also directly yields an empty sparse vector.
+  func.func @empty_alloc() -> tensor<10xf32, #SV> {
+    %0 = bufferization.alloc_tensor() : tensor<10xf32, #SV>
+    return %0 : tensor<10xf32, #SV>
+  }
+
+  // This yields a hidden empty sparse vector (all zeros).
+  func.func @zeros() -> tensor<10xf32, #SV> {
+    %cst = arith.constant 0.0 : f32
+    %0 = bufferization.alloc_tensor() : tensor<10xf32, #SV>
+    %1 = linalg.generic {
+        indexing_maps = [#map],
+	iterator_types = ["parallel"]}
+      outs(%0 : tensor<10xf32, #SV>) {
+         ^bb0(%out: f32):
+            linalg.yield %cst : f32
+    } -> tensor<10xf32, #SV>
+    return %1 : tensor<10xf32, #SV>
+  }
+
+  // This yields a filled sparse vector (all ones).
+  func.func @ones() -> tensor<10xf32, #SV> {
+    %cst = arith.constant 1.0 : f32
+    %0 = bufferization.alloc_tensor() : tensor<10xf32, #SV>
+    %1 = linalg.generic {
+        indexing_maps = [#map],
+	iterator_types = ["parallel"]}
+      outs(%0 : tensor<10xf32, #SV>) {
+         ^bb0(%out: f32):
+            linalg.yield %cst : f32
+    } -> tensor<10xf32, #SV>
+    return %1 : tensor<10xf32, #SV>
+  }
+
+  //
+  // Main driver.
+  //
+  func.func @main() {
+
+    %0 = call @empty()       : () -> tensor<10xf32, #SV>
+    %1 = call @empty_alloc() : () -> tensor<10xf32, #SV>
+    %2 = call @zeros()       : () -> tensor<10xf32, #SV>
+    %3 = call @ones()        : () -> tensor<10xf32, #SV>
+
+    //
+    // Verify the output. In particular, make sure that
+    // all empty sparse vector data structures are properly
+    // finalized with a pair (0,0) for positions.
+    //
+    // CHECK:      ---- Sparse Tensor ----
+    // CHECK-NEXT: nse = 0
+    // CHECK-NEXT: dim = ( 10 )
+    // CHECK-NEXT: lvl = ( 10 )
+    // CHECK-NEXT: pos[0] : ( 0, 0,
+    // CHECK-NEXT: crd[0] : (
+    // CHECK-NEXT: values : (
+    // CHECK-NEXT: ----
+    //
+    // CHECK-NEXT: ---- Sparse Tensor ----
+    // CHECK-NEXT: nse = 0
+    // CHECK-NEXT: dim = ( 10 )
+    // CHECK-NEXT: lvl = ( 10 )
+    // CHECK-NEXT: pos[0] : ( 0, 0,
+    // CHECK-NEXT: crd[0] : (
+    // CHECK-NEXT: values : (
+    // CHECK-NEXT: ----
+    //
+    // CHECK-NEXT: ---- Sparse Tensor ----
+    // CHECK-NEXT: nse = 0
+    // CHECK-NEXT: dim = ( 10 )
+    // CHECK-NEXT: lvl = ( 10 )
+    // CHECK-NEXT: pos[0] : ( 0, 0,
+    // CHECK-NEXT: crd[0] : (
+    // CHECK-NEXT: values : (
+    // CHECK-NEXT: ----
+    //
+    // CHECK-NEXT: ---- Sparse Tensor ----
+    // CHECK-NEXT: nse = 10
+    // CHECK-NEXT: dim = ( 10 )
+    // CHECK-NEXT: lvl = ( 10 )
+    // CHECK-NEXT: pos[0] : ( 0, 10,
+    // CHECK-NEXT: crd[0] : ( 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
+    // CHECK-NEXT: values : ( 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+    // CHECK-NEXT: ----
+    //
+    sparse_tensor.print %0 : tensor<10xf32, #SV>
+    sparse_tensor.print %1 : tensor<10xf32, #SV>
+    sparse_tensor.print %2 : tensor<10xf32, #SV>
+    sparse_tensor.print %3 : tensor<10xf32, #SV>
+
+    bufferization.dealloc_tensor %0 : tensor<10xf32, #SV>
+    bufferization.dealloc_tensor %1 : tensor<10xf32, #SV>
+    bufferization.dealloc_tensor %2 : tensor<10xf32, #SV>
+    bufferization.dealloc_tensor %3 : tensor<10xf32, #SV>
+    return
+  }
+}

@llvmbot
Member

llvmbot commented Mar 15, 2024

@llvm/pr-subscribers-mlir-bufferization

@aartbik merged commit f3a8af0 into llvm:main on Mar 15, 2024
@aartbik deleted the bik branch on March 15, 2024 at 23:43
Labels: mlir:bufferization (Bufferization infrastructure), mlir:sparse (Sparse compiler in MLIR), mlir
3 participants