
[mlir][SPIRV] Add support for dense_resource in arith to spirv #91318


Merged
merged 2 commits into llvm:main on May 17, 2024

Conversation

maxbartel
Contributor

@maxbartel maxbartel commented May 7, 2024

This adds support for dense_resource in arith to spirv.

Note that this inlines the blob into the IR. Another possibility would be to add proper dense_resource support to spirv, but there is a lot of special handling going on to convert a DenseElementsAttr to the correct SPIRV type. Some of that even iterates over all the values in the Attribute. For proper support of a DenseResourceElementsAttr this probably needs a redesign. I would like to hear some opinions on that!
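
As a rough before/after sketch of what the pattern produces (the values and resource name are taken from the test added in this PR; exact printing may differ):

```mlir
// Before: the constant's payload lives in a dialect resource blob.
%0 = arith.constant dense_resource<dense_resource_test_2xi32> : vector<2xi32>

// After -convert-arith-to-spirv: the blob is decoded and inlined as a
// regular dense attribute on the SPIR-V constant.
%0 = spirv.Constant dense<[1, 2]> : vector<2xi32>
```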

The test is disabled on non-little-endian machines. See #63469 for more information.

@llvmbot
Member

llvmbot commented May 7, 2024

@llvm/pr-subscribers-mlir-spirv

@llvm/pr-subscribers-mlir

Author: None (maxbartel)

Changes

This adds support for dense_resource in arith to spirv.

Note that this inlines the blob into the IR. Another possibility would be to add proper dense_resource support to spirv, but there is a lot of special handling going on to convert a DenseElementsAttr to the correct SPIRV type. Some of that even iterates over all the values in the Attribute. For proper support of a DenseResourceElementsAttr this probably needs a redesign. I would like to hear some opinions on that!

The test is disabled on non-little-endian machines. See #6346 for more information.


Full diff: https://github.com/llvm/llvm-project/pull/91318.diff

2 Files Affected:

  • (modified) mlir/lib/Conversion/ArithToSPIRV/ArithToSPIRV.cpp (+18-4)
  • (added) mlir/test/Conversion/ArithToSPIRV/arith-to-spirv-le-specific.mlir (+38)
diff --git a/mlir/lib/Conversion/ArithToSPIRV/ArithToSPIRV.cpp b/mlir/lib/Conversion/ArithToSPIRV/ArithToSPIRV.cpp
index 806981728561c..9a1808354e161 100644
--- a/mlir/lib/Conversion/ArithToSPIRV/ArithToSPIRV.cpp
+++ b/mlir/lib/Conversion/ArithToSPIRV/ArithToSPIRV.cpp
@@ -17,6 +17,7 @@
 #include "mlir/Dialect/SPIRV/Transforms/SPIRVConversion.h"
 #include "mlir/IR/BuiltinAttributes.h"
 #include "mlir/IR/BuiltinTypes.h"
+#include "mlir/IR/DialectResourceBlobManager.h"
 #include "llvm/ADT/APInt.h"
 #include "llvm/ADT/ArrayRef.h"
 #include "llvm/ADT/STLExtras.h"
@@ -229,16 +230,29 @@ struct ConstantCompositeOpPattern final
     if (!srcType || srcType.getNumElements() == 1)
       return failure();
 
-    // arith.constant should only have vector or tenor types.
+    // arith.constant should only have vector or tensor types.
     assert((isa<VectorType, RankedTensorType>(srcType)));
 
     Type dstType = getTypeConverter()->convertType(srcType);
     if (!dstType)
       return failure();
 
-    auto dstElementsAttr = dyn_cast<DenseElementsAttr>(constOp.getValue());
-    if (!dstElementsAttr)
-      return failure();
+    // Import the resource into the IR to make use of the special handling of
+    // element types later on.
+    mlir::DenseElementsAttr dstElementsAttr;
+    if (auto denseElementsAttr =
+            dyn_cast<DenseElementsAttr>(constOp.getValue())) {
+      dstElementsAttr = denseElementsAttr;
+    } else if (auto resourceAttr =
+                   dyn_cast<DenseResourceElementsAttr>(constOp.getValue())) {
+
+      ArrayRef<char> ptr = resourceAttr.getRawHandle().getBlob()->getData();
+      dstElementsAttr =
+          DenseElementsAttr::getFromRawBuffer(resourceAttr.getType(), ptr);
+    } else {
+      return rewriter.notifyMatchFailure(constOp,
+                                         "Could not decode ElementsAttr");
+    }
 
     ShapedType dstAttrType = dstElementsAttr.getType();
 
diff --git a/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv-le-specific.mlir b/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv-le-specific.mlir
new file mode 100644
index 0000000000000..7233a8bfffa9d
--- /dev/null
+++ b/mlir/test/Conversion/ArithToSPIRV/arith-to-spirv-le-specific.mlir
@@ -0,0 +1,38 @@
+// RUN: mlir-opt -split-input-file -convert-arith-to-spirv -verify-diagnostics %s | FileCheck %s
+
+
+//===----------------------------------------------------------------------===//
+// arith.constant dense_resource
+//
+// The decoding of dense_resource differs between little and big endian
+// machines. At the moment only little endian is supported.
+// See https://github.com/llvm/llvm-project/issues/63469 for more information.
+//
+//===----------------------------------------------------------------------===//
+
+// XFAIL: target=s390x-{{.*}}
+
+module attributes {
+  spirv.target_env = #spirv.target_env<
+    #spirv.vce<v1.0, [Int8, Int16, Int64, Float16, Float64], []>, #spirv.resource_limits<>>
+} {
+func.func @constant_dense_resource() {
+  // CHECK:    %{{.*}} = spirv.Constant dense<[0.203224242, -0.254296064, -0.365104556, -0.469196141, 0.466041982]> : tensor<5xf32> : !spirv.array<5 x f32>
+  %0 = arith.constant dense_resource<dense_resource_test_5xf32> : tensor<5xf32>  
+  // CHECK:    %{{.*}} = spirv.Constant dense<[1, 2]> : vector<2xi32>
+  %1 = arith.constant dense_resource<dense_resource_test_2xi32> : vector<2xi32>  
+  // CHECK:    %{{.*}} = spirv.Constant dense<[0.35476172, 0.351080596, -0.0795008316, 0.366843373]> : tensor<4xf32> : !spirv.array<4 x f32>
+  %2 = arith.constant dense_resource<dense_resource_test_2x2xf32> : tensor<1x2x2xf32>  
+  return
+  }
+}
+// Resources are kept at end of file. New tests should be added above this.
+{-#
+  dialect_resources: {
+    builtin: {
+      dense_resource_test_2xi32: "0x400000000100000002000000",
+      dense_resource_test_5xf32: "0x08000000041A503E183382BEFCEEBABE7A3AF0BE0E9DEE3E",
+      dense_resource_test_2x2xf32: "0x0800000054A3B53ED6C0B33E55D1A2BDE5D2BB3E"
+    }
+  }
+#-}
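
To make the endianness caveat in the test comment concrete: the resource blob stores raw bytes, and decoding via DenseElementsAttr::getFromRawBuffer reinterprets them in the host's native byte order, so the decoded constants come out differently on big-endian hosts. A small standalone C++ sketch of that effect (hypothetical illustration, not part of the patch):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// The blob stores the bytes of the int32 value 1 as laid out on a
// little-endian machine. Reinterpreting those same bytes on a big-endian
// host (e.g. s390x) yields a different integer, which is why the
// FileCheck'd constants above only match on little-endian machines.
int main() {
  const unsigned char blobBytes[4] = {0x01, 0x00, 0x00, 0x00};
  std::uint32_t value;
  std::memcpy(&value, blobBytes, sizeof value);
  // Prints 1 on little-endian hosts and 16777216 on big-endian hosts.
  std::printf("%u\n", value);
  return 0;
}
```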

Member

@Groverkss Groverkss left a comment


Thanks! Just a few comments

@@ -229,16 +230,29 @@ struct ConstantCompositeOpPattern final
if (!srcType || srcType.getNumElements() == 1)
return failure();

// arith.constant should only have vector or tenor types.
// arith.constant should only have vector or tensor types.
assert((isa<VectorType, RankedTensorType>(srcType)));
Member


Is this the only supported case or is this the only possible case? Can you clarify this in the comment? If this is the only supported case, let's emit a failure with a nice error message instead.

Contributor Author


Good questions; there are TODOs all over MLIR to disallow other types, but at the moment almost everything is allowed. I emit a failure now.

Member


Sounds good to me.

@Groverkss
Member

The mentioned issue seems unrelated. Can you mention the correct issue instead?

@maxbartel
Contributor Author

Thanks for the review!

The mentioned issue seems unrelated. Can you mention the correct issue instead?

Lost a digit there. Thanks for pointing that out!

@maxbartel maxbartel requested a review from Groverkss May 15, 2024 16:22
Member

@Groverkss Groverkss left a comment


LGTM Thanks!

@maxbartel maxbartel force-pushed the add_dense_resource_arith_to_spirv branch from 091eae1 to fdbadbe on May 17, 2024 at 08:28
@maxbartel
Contributor Author

rebased on main

@kuhar kuhar merged commit 9ba0a77 into llvm:main May 17, 2024
3 of 4 checks passed

4 participants