
[mlir][bufferization] Switch tests to new deallocation pass pipeline #66471


Merged: 1 commit, Sep 15, 2023

Conversation

maerhart
Member

Use the new ownership-based deallocation pass pipeline in the regression and integration tests. Some one-shot bufferization tests exercised one-shot bufferize and deallocation at the same time; I removed the deallocation pass there because the deallocation pass is already thoroughly tested by itself.
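In practice, the switch amounts to replacing the deprecated `-buffer-deallocation` pass with `-buffer-deallocation-pipeline` in test RUN lines (or dropping the deallocation step entirely where it was only exercised incidentally). A representative RUN-line fragment, written here as an illustration rather than copied verbatim from the patch:

```mlir
// Before: the deprecated per-function BufferDeallocation pass ran after bufferization.
// RUN: mlir-opt %s -one-shot-bufferize="allow-return-allocs" -buffer-deallocation | FileCheck %s

// After: the ownership-based deallocation pass pipeline is used instead.
// RUN: mlir-opt %s -one-shot-bufferize="allow-return-allocs" -buffer-deallocation-pipeline | FileCheck %s
```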

@llvmbot added labels: mlir:core (MLIR Core Infrastructure), mlir:linalg, mlir, mlir:bufferization (Bufferization infrastructure), mlir:scf on Sep 15, 2023
@llvmbot
Member

llvmbot commented Sep 15, 2023

@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-bufferization
@llvm/pr-subscribers-mlir-scf
@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir-core

Changes: Use the new ownership-based deallocation pass pipeline in the regression and integration tests. Some one-shot bufferization tests exercised one-shot bufferize and deallocation at the same time; the deallocation pass was removed there because it is already thoroughly tested by itself.

Patch is 32.88 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/66471.diff

12 Files Affected:

  • (modified) mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-allow-return-allocs.mlir (+3-8)
  • (modified) mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-compat.mlir (+1-1)
  • (modified) mlir/test/Dialect/Bufferization/Transforms/one-shot-module-bufferize-out-params.mlir (+3-11)
  • (modified) mlir/test/Dialect/SCF/one-shot-bufferize.mlir (+24-80)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-collapse-tensor.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-elementwise.mlir (+4-4)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-expand-tensor.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-padtensor.mlir (+3-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir (+4-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-subtensor-insert.mlir (+4-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-e2e.mlir (+2-2)
  • (modified) mlir/test/Integration/Dialect/Linalg/CPU/test-tensor-matmul.mlir (+2-2)
diff --git a/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-allow-return-allocs.mlir b/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-allow-return-allocs.mlir
index ab9360e45c65ff3..bccff8ef8d65aaa 100644
--- a/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-allow-return-allocs.mlir
+++ b/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-allow-return-allocs.mlir
@@ -1,4 +1,4 @@
-// RUN: mlir-opt %s -one-shot-bufferize="allow-return-allocs allow-unknown-ops" -buffer-deallocation -canonicalize -split-input-file | FileCheck %s
+// RUN: mlir-opt %s -one-shot-bufferize="allow-return-allocs allow-unknown-ops" -canonicalize -split-input-file | FileCheck %s
 
 // Run fuzzer with different seeds.
 // RUN: mlir-opt %s -one-shot-bufferize="allow-return-allocs test-analysis-only analysis-fuzzer-seed=23" -split-input-file -o /dev/null
@@ -14,20 +14,17 @@ func.func @buffer_not_deallocated(%t : tensor<?xf32>, %c : i1) -> tensor<?xf32>
     // CHECK: %[[some_op:.*]] = "test.some_op"
     // CHECK: %[[alloc:.*]] = memref.alloc(%[[some_op]])
     // CHECK: %[[casted:.*]] = memref.cast %[[alloc]]
-    // CHECK-NOT: dealloc
     // CHECK: scf.yield %[[casted]]
     %sz = "test.some_op"() : () -> (index)
     %0 = bufferization.alloc_tensor(%sz) : tensor<?xf32>
     scf.yield %0 : tensor<?xf32>
   } else {
   // CHECK: } else {
-    // CHECK: %[[cloned:.*]] = bufferization.clone %[[m]]
-    // CHECK: scf.yield %[[cloned]]
+    // CHECK: scf.yield %[[m]]
     scf.yield %t : tensor<?xf32>
   }
   // CHECK: }
   // CHECK: %[[r_tensor:.*]] = bufferization.to_tensor %[[r]]
-  // CHECK: memref.dealloc %[[r]]
   // CHECK: return %[[r_tensor]]
   return %r : tensor<?xf32>
 }
@@ -42,8 +39,7 @@ func.func @write_to_alloc_tensor_or_readonly_tensor(%arg0: tensor<i32>,
 {
   // CHECK: %[[arg0_m:.*]] = bufferization.to_memref %[[arg0]]
   // CHECK: %[[r:.*]] = scf.if {{.*}} {
-  // CHECK:   %[[clone:.*]] = bufferization.clone %[[arg0_m]]
-  // CHECK:   scf.yield %[[clone]]
+  // CHECK:   scf.yield %[[arg0_m]]
   // CHECK: } else {
   // CHECK:   %[[alloc:.*]] = memref.alloc
   // CHECK:   memref.store %{{.*}}, %[[alloc]]
@@ -51,7 +47,6 @@ func.func @write_to_alloc_tensor_or_readonly_tensor(%arg0: tensor<i32>,
   // CHECK:   scf.yield %[[casted]]
   // CHECK: }
   // CHECK: %[[r_t:.*]] = bufferization.to_tensor %[[r]]
-  // CHECK: memref.dealloc %[[r]]
   // CHECK: return %[[r_t]]
   %3 = scf.if %cond -> (tensor<i32>) {
     scf.yield %arg0 : tensor<i32>
diff --git a/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-compat.mlir b/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-compat.mlir
index 06c79d450cea705..9ebd60f50328e14 100644
--- a/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-compat.mlir
+++ b/mlir/test/Dialect/Bufferization/Transforms/one-shot-bufferize-compat.mlir
@@ -5,7 +5,7 @@
 
 // RUN: mlir-opt %s \
 // RUN:     -one-shot-bufferize="allow-unknown-ops create-deallocs=0" \
-// RUN:     -buffer-deallocation | \
+// RUN:     -buffer-deallocation-pipeline | \
 // RUN: FileCheck %s --check-prefix=CHECK-BUFFERDEALLOC
 
 // CHECK-NODEALLOC-LABEL: func @out_of_place_bufferization
diff --git a/mlir/test/Dialect/Bufferization/Transforms/one-shot-module-bufferize-out-params.mlir b/mlir/test/Dialect/Bufferization/Transforms/one-shot-module-bufferize-out-params.mlir
index 5d876e935df1af6..4e4340c9db8acea 100644
--- a/mlir/test/Dialect/Bufferization/Transforms/one-shot-module-bufferize-out-params.mlir
+++ b/mlir/test/Dialect/Bufferization/Transforms/one-shot-module-bufferize-out-params.mlir
@@ -1,6 +1,6 @@
-// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=fully-dynamic-layout-map" -drop-equivalent-buffer-results -buffer-results-to-out-params -buffer-deallocation -split-input-file | FileCheck %s
-// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=identity-layout-map" -drop-equivalent-buffer-results -buffer-results-to-out-params -buffer-deallocation -split-input-file | FileCheck %s --check-prefix=CHECK-NO-LAYOUT
-// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=infer-layout-map" -drop-equivalent-buffer-results -buffer-deallocation -split-input-file | FileCheck %s --check-prefix=CHECK-BASELINE
+// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=fully-dynamic-layout-map" -drop-equivalent-buffer-results -buffer-results-to-out-params -split-input-file | FileCheck %s
+// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=identity-layout-map" -drop-equivalent-buffer-results -buffer-results-to-out-params -split-input-file | FileCheck %s --check-prefix=CHECK-NO-LAYOUT
+// RUN: mlir-opt %s -one-shot-bufferize="bufferize-function-boundaries allow-return-allocs function-boundary-type-conversion=infer-layout-map" -drop-equivalent-buffer-results -split-input-file | FileCheck %s --check-prefix=CHECK-BASELINE
 
 // Note: function-boundary-type-conversion=infer-layout-map with
 // promote-buffer-results-to-out-params is an unsupported combination.
@@ -18,7 +18,6 @@
 //       CHECK:   memref.store %{{.*}}, %[[alloc]]
 //       CHECK:   %[[casted:.*]] = memref.cast %[[alloc]]
 //       CHECK:   memref.copy %[[casted]], %[[arg1]]
-//       CHECK:   memref.dealloc %[[alloc]]
 //       CHECK:   return
 //       CHECK: }
 
@@ -29,7 +28,6 @@
 //       CHECK-NO-LAYOUT:   memref.copy %[[arg0]], %[[alloc]]
 //       CHECK-NO-LAYOUT:   memref.store {{.*}}, %[[alloc]]
 //       CHECK-NO-LAYOUT:   memref.copy %[[alloc]], %[[arg1]]
-//       CHECK-NO-LAYOUT:   memref.dealloc %[[alloc]]
 
 // CHECK-BASELINE-LABEL: func @callee(
 //  CHECK-BASELINE-SAME:     %[[arg0:.*]]: memref<5xf32, strided<[?], offset: ?>>) -> memref<5xf32> {
@@ -53,7 +51,6 @@ func.func @callee(%t: tensor<5xf32>) -> (tensor<5xf32>, tensor<5xf32>) {
 // CHECK:   call @callee(%[[arg0]], %[[casted]])
 // CHECK:   %[[l1:.*]] = memref.load %[[arg0]]
 // CHECK:   %[[l2:.*]] = memref.load %[[casted]]
-// CHECK:   memref.dealloc %[[alloc]]
 // CHECK:   return %[[l1]], %[[l2]]
 // CHECK: }
 
@@ -78,7 +75,6 @@ func.func @main(%t: tensor<5xf32>) -> (f32, f32) {
 //       CHECK:   %[[subview:.*]] = memref.subview %[[alloc]]{{.*}} : memref<10x20xf32> to memref<2x5xf32, strided<[20, 1], offset: ?>>
 //       CHECK:   %[[casted:.*]] = memref.cast %[[subview]]
 //       CHECK:   memref.copy %[[casted]], %[[r]]
-//       CHECK:   memref.dealloc %[[alloc]]
 
 // CHECK-NO-LAYOUT-LABEL: func @callee(
 //  CHECK-NO-LAYOUT-SAME:              %{{.*}}: index,
@@ -90,9 +86,7 @@ func.func @main(%t: tensor<5xf32>) -> (f32, f32) {
 // value and function signature.
 //       CHECK-NO-LAYOUT:   %[[alloc2:.*]] = memref.alloc() : memref<2x5xf32>
 //       CHECK-NO-LAYOUT:   memref.copy %[[subview]], %[[alloc2]]
-//       CHECK-NO-LAYOUT:   memref.dealloc %[[alloc]]
 //       CHECK-NO-LAYOUT:   memref.copy %[[alloc2]], %[[r]]
-//       CHECK-NO-LAYOUT:   memref.dealloc %[[alloc2]]
 
 // CHECK-BASELINE-LABEL: func @callee(
 //  CHECK-BASELINE-SAME:     %{{.*}}: index) -> memref<2x5xf32, strided<[20, 1], offset: ?>> {
@@ -110,13 +104,11 @@ func.func @callee(%idx: index) -> tensor<2x5xf32> {
 // CHECK:   %[[casted:.*]] = memref.cast %[[alloc]] : memref<2x5xf32> to memref<2x5xf32, strided<[?, ?], offset: ?>>
 // CHECK:   call @callee(%{{.*}}, %[[casted]])
 // CHECK:   memref.load %[[casted]]
-// CHECK:   memref.dealloc %[[alloc]]
 
 // CHECK-NO-LAYOUT: func @main(
 // CHECK-NO-LAYOUT:   %[[alloc:.*]] = memref.alloc() : memref<2x5xf32>
 // CHECK-NO-LAYOUT:   call @callee(%{{.*}}, %[[alloc]])
 // CHECK-NO-LAYOUT:   memref.load %[[alloc]]
-// CHECK-NO-LAYOUT:   memref.dealloc
 
 // CHECK-BASELINE: func @main(
 // CHECK-BASELINE:   %[[call:.*]] = call @callee
diff --git a/mlir/test/Dialect/SCF/one-shot-bufferize.mlir b/mlir/test/Dialect/SCF/one-shot-bufferize.mlir
index b01592dcad74488..a8c488461c74edd 100644
--- a/mlir/test/Dialect/SCF/one-shot-bufferize.mlir
+++ b/mlir/test/Dialect/SCF/one-shot-bufferize.mlir
@@ -1,5 +1,4 @@
-// RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs bufferize-function-boundaries" -cse -canonicalize -drop-equivalent-buffer-results -buffer-deallocation -split-input-file | FileCheck %s
-// RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs bufferize-function-boundaries" -drop-equivalent-buffer-results -split-input-file | FileCheck %s --check-prefix=CHECK-NO-DEALLOC-PASS
+// RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs bufferize-function-boundaries" -cse -canonicalize -drop-equivalent-buffer-results -split-input-file | FileCheck %s
 
 // Run fuzzer with different seeds.
 // RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs test-analysis-only analysis-fuzzer-seed=23 bufferize-function-boundaries" -split-input-file -o /dev/null
@@ -7,7 +6,7 @@
 // RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs test-analysis-only analysis-fuzzer-seed=91 bufferize-function-boundaries" -split-input-file -o /dev/null
 
 // Test bufferization using memref types that have no layout map.
-// RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs unknown-type-conversion=identity-layout-map function-boundary-type-conversion=identity-layout-map bufferize-function-boundaries" -buffer-deallocation -split-input-file -o /dev/null
+// RUN: mlir-opt %s -allow-unregistered-dialect -one-shot-bufferize="allow-return-allocs unknown-type-conversion=identity-layout-map function-boundary-type-conversion=identity-layout-map bufferize-function-boundaries" -split-input-file -o /dev/null
 
 // CHECK-LABEL: func @scf_for_yield_only(
 //  CHECK-SAME:   %[[A:[a-zA-Z0-9]*]]: memref<?xf32, strided<[?], offset: ?>>,
@@ -52,8 +51,7 @@ func.func @scf_for_is_reading(%A : tensor<?xf32>, %B : tensor<?xf32>,
 
   // CHECK: %[[alloc:.*]] = memref.alloc
   // CHECK: memref.copy %[[A]], %[[alloc]]
-  // CHECK: %[[clone:.*]] = bufferization.clone %[[alloc]]
-  // CHECK: scf.for {{.*}} iter_args(%{{.*}} = %[[clone]])
+  // CHECK: scf.for {{.*}} iter_args(%{{.*}} = %[[alloc]])
   %0 = scf.for %iv = %lb to %ub step %c1 iter_args(%1 = %A) -> tensor<?xf32> {
     %r = linalg.fill ins(%cst : f32) outs(%1 : tensor<?xf32>) -> tensor<?xf32>
     scf.yield %B : tensor<?xf32>
@@ -235,7 +233,6 @@ func.func @scf_if_non_equiv_yields(
 // CHECK-LABEL: func @scf_execute_region_yield_non_equivalent(
 //       CHECK:   %[[alloc:.*]] = memref.alloc(%{{.*}})
 //       CHECK:   %[[r:.*]] = memref.load %[[alloc]][%{{.*}}]
-//       CHECK:   memref.dealloc %[[alloc]]
 //       CHECK:   return %[[r]]
 func.func @scf_execute_region_yield_non_equivalent(%i: index, %j: index) -> f32 {
   %r = scf.execute_region -> (tensor<?xf32>) {
@@ -256,16 +253,11 @@ func.func @scf_execute_region_yield_non_equivalent(%i: index, %j: index) -> f32
 //  CHECK-SAME:     %[[t:.*]]: memref<?xf32
 //       CHECK:   %[[alloc:.*]] = memref.alloc(%{{.*}})
 //       CHECK:   memref.copy %[[t]], %[[alloc]]
-//       CHECK:   %[[cloned:.*]] = bufferization.clone %[[t]]
-//       CHECK:   %[[for:.*]] = scf.for {{.*}} iter_args(%[[iter:.*]] = %[[cloned]])
-//   CHECK-DAG:     memref.dealloc %[[iter]]
+//       CHECK:   %[[for:.*]] = scf.for {{.*}} iter_args(%[[iter:.*]] = %[[t]])
 //   CHECK-DAG:     %[[alloc2:.*]] = memref.alloc(%{{.*}})
 //       CHECK:     memref.copy %[[alloc]], %[[alloc2]]
 //       CHECK:     %[[alloc2_casted:.*]] = memref.cast %[[alloc2]]
-//       CHECK:     %[[cloned2:.*]] = bufferization.clone %[[alloc2_casted]]
-//       CHECK:     memref.dealloc %[[alloc2]]
-//       CHECK:     scf.yield %[[cloned2]]
-//       CHECK:   memref.dealloc %[[alloc]]
+//       CHECK:     scf.yield %[[alloc2_casted]]
 //       CHECK:   return %[[for]]
 func.func @scf_for_yield_non_equivalent(
     %t: tensor<?xf32>, %lb : index, %ub : index, %step : index) -> tensor<?xf32> {
@@ -284,19 +276,14 @@ func.func @scf_for_yield_non_equivalent(
 
 // CHECK-LABEL: func @scf_for_yield_allocation(
 //  CHECK-SAME:     %[[t:.*]]: memref<?xf32
-//       CHECK:   %[[cloned:.*]] = bufferization.clone %[[t]]
-//       CHECK:   %[[for:.*]] = scf.for {{.*}} iter_args(%[[iter:.*]] = %[[cloned]])
+//       CHECK:   %[[for:.*]] = scf.for {{.*}} iter_args(%[[iter:.*]] = %[[t]])
 // This alloc is for the bufferization.alloc_tensor.
 //   CHECK-DAG:     %[[alloc2:.*]] = memref.alloc(%{{.*}})
-//   CHECK-DAG:     memref.dealloc %[[iter]]
 // This alloc is for the scf.yield.
 //       CHECK:     %[[alloc3:.*]] = memref.alloc(%{{.*}})
 //       CHECK:     memref.copy %[[alloc2]], %[[alloc3]]
-//       CHECK:     memref.dealloc %[[alloc2]]
 //       CHECK:     %[[casted3:.*]] = memref.cast %[[alloc3]]
-//       CHECK:     %[[cloned3:.*]] = bufferization.clone %[[casted3]]
-//       CHECK:     memref.dealloc %[[alloc3]]
-//       CHECK:     scf.yield %[[cloned3]]
+//       CHECK:     scf.yield %[[casted3]]
 //       CHECK:   return %[[for]]
 func.func @scf_for_yield_allocation(%t: tensor<?xf32>, %lb : index, %ub : index,
                                %step : index) -> tensor<?xf32> {
@@ -320,9 +307,7 @@ func.func @scf_for_swapping_yields(
     %C : tensor<4xf32>, %lb : index, %ub : index, %step : index)
   -> (f32, f32)
 {
-//   CHECK-DAG:   %[[clone1:.*]] = bufferization.clone %[[A]]
-//   CHECK-DAG:   %[[clone2:.*]] = bufferization.clone %[[B]]
-//       CHECK:   %[[for:.*]]:2 = scf.for {{.*}} iter_args(%[[iter1:.*]] = %[[clone1]], %[[iter2:.*]] = %[[clone2]])
+//       CHECK:   %[[for:.*]]:2 = scf.for {{.*}} iter_args(%[[iter1:.*]] = %[[A]], %[[iter2:.*]] = %[[B]])
   %r0:2 = scf.for %i = %lb to %ub step %step iter_args(%tA = %A, %tB = %B)
       -> (tensor<?xf32>, tensor<?xf32>)
   {
@@ -335,25 +320,17 @@ func.func @scf_for_swapping_yields(
 
 //       CHECK:     %[[alloc2:.*]] = memref.alloc(%{{.*}})
 //       CHECK:     memref.copy %[[iter2]], %[[alloc2]]
-//       CHECK:     memref.dealloc %[[iter2]]
 //       CHECK:     %[[alloc1:.*]] = memref.alloc(%{{.*}})
 //       CHECK:     memref.copy %[[iter1]], %[[alloc1]]
-//       CHECK:     memref.dealloc %[[iter1]]
 //       CHECK:     %[[casted2:.*]] = memref.cast %[[alloc2]]
 //       CHECK:     %[[casted1:.*]] = memref.cast %[[alloc1]]
-//       CHECK:     %[[cloned1:.*]] = bufferization.clone %[[casted1]]
-//       CHECK:     memref.dealloc %[[alloc1]]
-//       CHECK:     %[[cloned2:.*]] = bufferization.clone %[[casted2]]
-//       CHECK:     memref.dealloc %[[alloc2]]
-//       CHECK:     scf.yield %[[cloned2]], %[[cloned1]]
+//       CHECK:     scf.yield %[[casted2]], %[[casted1]]
     // Yield tensors in different order.
     scf.yield %ttB, %ttA : tensor<?xf32>, tensor<?xf32>
   }
 
 //       CHECK:     %[[r0:.*]] = memref.load %[[for]]#0
-//       CHECK:     memref.dealloc %[[for]]#0
 //       CHECK:     %[[r1:.*]] = memref.load %[[for]]#1
-//       CHECK:     memref.dealloc %[[for]]#1
   %f0 = tensor.extract %r0#0[%step] : tensor<?xf32>
   %f1 = tensor.extract %r0#1[%step] : tensor<?xf32>
 //       CHECK:     return %[[r0]], %[[r1]]
@@ -399,23 +376,15 @@ func.func @scf_while_non_equiv_condition(%arg0: tensor<5xi1>,
                                          %idx: index)
   -> (tensor<5xi1>, tensor<5xi1>)
 {
-  // CHECK: %[[clone1:.*]] = bufferization.clone %[[arg1]]
-  // CHECK: %[[clone0:.*]] = bufferization.clone %[[arg0]]
-  // CHECK: %[[loop:.*]]:2 = scf.while (%[[w0:.*]] = %[[clone0]], %[[w1:.*]] = %[[clone1]]) {{.*}} {
+  // CHECK: %[[loop:.*]]:2 = scf.while (%[[w0:.*]] = %[[arg0]], %[[w1:.*]] = %[[arg1]]) {{.*}} {
   %r0, %r1 = scf.while (%w0 = %arg0, %w1 = %arg1)
       : (tensor<5xi1>, tensor<5xi1>) -> (tensor<5xi1>, tensor<5xi1>) {
     // CHECK: %[[condition:.*]] = memref.load %[[w0]]
     // CHECK: %[[a1:.*]] = memref.alloc() {{.*}} : memref<5xi1>
     // CHECK: memref.copy %[[w1]], %[[a1]]
-    // CHECK: memref.dealloc %[[w1]]
     // CHECK: %[[a0:.*]] = memref.alloc() {{.*}} : memref<5xi1>
     // CHECK: memref.copy %[[w0]], %[[a0]]
-    // CHECK: memref.dealloc %[[w0]]
-    // CHECK: %[[cloned1:.*]] = bufferization.clone %[[a1]]
-    // CHECK: memref.dealloc %[[a1]]
-    // CHECK: %[[cloned0:.*]] = bufferization.clone %[[a0]]
-    // CHECK: memref.dealloc %[[a0]]
-    // CHECK: scf.condition(%[[condition]]) %[[cloned1]], %[[cloned0]]
+    // CHECK: scf.condition(%[[condition]]) %[[a1]], %[[a0]]
     %condition = tensor.extract %w0[%idx] : tensor<5xi1>
     scf.condition(%condition) %w1, %w0 : tensor<5xi1>, tensor<5xi1>
   } do {
@@ -425,11 +394,7 @@ func.func @scf_while_non_equiv_condition(%arg0: tensor<5xi1>,
     // CHECK: memref.store %{{.*}}, %[[b0]]
     // CHECK: %[[casted0:.*]] = memref.cast %[[b0]] : memref<5xi1> to memref<5xi1, strided{{.*}}>
     // CHECK: %[[casted1:.*]] = memref.cast %[[b1]] : memref<5xi1> to memref<5xi1, strided{{.*}}>
-    // CHECK: %[[cloned2:.*]] = bufferization.clone %[[casted1]]
-    // CHECK: memref.dealloc %[[b1]]
-    // CHECK: %[[cloned3:.*]] = bufferization.clone %[[casted0]]
-    // CHECK: memref.dealloc %[[b0]]
-    // CHECK: scf.yield %[[cloned3]], %[[cloned2]]
+    // CHECK: scf.yield %[[casted0]], %[[casted1]]
     // CHECK: }
     %pos = "dummy.some_op"() : () -> (index)
     %val = "dummy.another_op"() : () -> (i1)
@@ -452,23 +417,15 @@ func.func @scf_while_non_equiv_condition_and_body(%arg0: tensor<5xi1>,
                                                   %idx: index)
   -> (tensor<5xi1>, tensor<5xi1>)
 {
-  // CHECK-DAG: %[[clone1:.*]] = bufferization.clone ...
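For context on why the `bufferization.clone` and `memref.dealloc` CHECK lines disappear throughout the diff: the old BufferDeallocation pass materialized unconditional `memref.dealloc` and `bufferization.clone` ops directly during the pass, whereas the ownership-based pipeline tracks a boolean ownership indicator per buffer and emits conditional `bufferization.dealloc` ops in a later, separate step. A hand-written sketch of the new style, not part of this patch:

```mlir
func.func @sketch(%sz: index, %cond: i1) {
  %alloc = memref.alloc(%sz) : memref<?xf32>
  // ... use %alloc ...
  // The ownership-based pipeline deallocates conditionally, guarded by an
  // ownership flag, instead of inserting an unconditional memref.dealloc.
  bufferization.dealloc (%alloc : memref<?xf32>) if (%cond)
  return
}
```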

@maerhart maerhart merged commit ea42b49 into llvm:main Sep 15, 2023
@maerhart maerhart deleted the merhart_switch_to_new_dealloc_pass branch September 15, 2023 09:08
maerhart added a commit that referenced this pull request Sep 15, 2023
…ipeline (#66471)"

This reverts commit ea42b49.

Some GPU integration tests are failing that I didn't observe locally.
Reverting until I have a fix.
@maerhart
Member Author

FYI: I reverted this commit again because some NVIDIA integration tests started failing on the main branch that weren't covered by the PR CI.

maerhart added a commit to maerhart/llvm-project that referenced this pull request Sep 15, 2023
Use the new ownership based deallocation pass pipeline in the regression
and integration tests. Some one-shot bufferization tests tested one-shot
bufferize and deallocation at the same time. I removed the deallocation
pass there because the deallocation pass is already thoroughly tested by
itself.

Fixed version of llvm#66471
maerhart added a commit that referenced this pull request Sep 18, 2023
…66517)

ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
…lvm#66471)

ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
…ipeline (llvm#66471)"

ZijunZhaoCCK pushed a commit to ZijunZhaoCCK/llvm-project that referenced this pull request Sep 19, 2023
…lvm#66517)

zahiraam pushed a commit to tahonermann/llvm-project that referenced this pull request Oct 24, 2023
…lvm#66517)
