[mlir][py] Add NVGPU's TensorMapDescriptorType in py bindings #88855

Merged
merged 3 commits on Apr 17, 2024

Conversation

grypp
Member

@grypp grypp commented Apr 16, 2024

This PR adds the NVGPU dialect's TensorMapDescriptorType to the Python bindings.

This is a follow-up to this PR.

This PR adds NVGPU dialects' TensorMapDescriptorType in the py bindings.
@grypp grypp requested review from jpienaar and manishucsd April 16, 2024 08:28
@llvmbot llvmbot added mlir:python MLIR Python bindings mlir labels Apr 16, 2024
@llvmbot
Member

llvmbot commented Apr 16, 2024

@llvm/pr-subscribers-mlir

Author: Guray Ozen (grypp)

Changes

This PR adds the NVGPU dialect's TensorMapDescriptorType to the Python bindings.

This is a follow-up to this PR.
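
For reference, a minimal usage sketch of the new binding from the Python side. This is not part of the PR text; the type, its get classmethod, and the enum names are taken from the new test in mlir/test/python/dialects/nvgpu.py in the diff below, and the context setup shown is just one way to exercise it:

    from mlir.ir import Attribute, Context, F16Type, MemRefType
    from mlir.dialects import nvgpu

    with Context() as ctx:
        ctx.load_all_available_dialects()
        # A 128x64 f16 memref in address space 3 (shared memory), as in the test.
        tensor_type = MemRefType.get(
            (128, 64), F16Type.get(), memory_space=Attribute.parse("3"))
        tma_desc = nvgpu.TensorMapDescriptorType.get(
            tensor_type,
            nvgpu.TensorMapSwizzleKind.SWIZZLE_128B,
            nvgpu.TensorMapL2PromoKind.L2PROMO_256B,
            nvgpu.TensorMapOOBKind.OOB_NAN,
            nvgpu.TensorMapInterleaveKind.INTERLEAVE_NONE,
        )
        # Expected to print something like:
        # !nvgpu.tensormap.descriptor<tensor = memref<128x64xf16, 3>, swizzle = swizzle_128b, ...>
        print(tma_desc)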


Full diff: https://github.com/llvm/llvm-project/pull/88855.diff

6 Files Affected:

  • (modified) mlir/include/mlir-c/Dialect/NVGPU.h (+11)
  • (added) mlir/lib/Bindings/Python/DialectNVGPU.cpp (+46)
  • (modified) mlir/lib/CAPI/Dialect/NVGPU.cpp (+18)
  • (modified) mlir/python/CMakeLists.txt (+13)
  • (modified) mlir/python/mlir/dialects/nvgpu.py (+1)
  • (modified) mlir/test/python/dialects/nvgpu.py (+13)
diff --git a/mlir/include/mlir-c/Dialect/NVGPU.h b/mlir/include/mlir-c/Dialect/NVGPU.h
index 580d566794c09f..143284ee32b5ae 100644
--- a/mlir/include/mlir-c/Dialect/NVGPU.h
+++ b/mlir/include/mlir-c/Dialect/NVGPU.h
@@ -11,6 +11,7 @@
 #define MLIR_C_DIALECT_NVGPU_H
 
 #include "mlir-c/IR.h"
+#include "mlir-c/Support.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -18,6 +19,16 @@ extern "C" {
 
 MLIR_DECLARE_CAPI_DIALECT_REGISTRATION(NVGPU, nvgpu);
 
+//===---------------------------------------------------------------------===//
+// TensorMapDescriptorType
+//===---------------------------------------------------------------------===//
+
+MLIR_CAPI_EXPORTED bool mlirTypeIsANVGPUTensorMapDescriptorType(MlirType type);
+
+MLIR_CAPI_EXPORTED MlirType mlirNVGPUTensorMapDescriptorTypeGet(
+    MlirContext ctx, MlirType tensorType, int swizzle, int l2promo, int oob,
+    int interleave);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/mlir/lib/Bindings/Python/DialectNVGPU.cpp b/mlir/lib/Bindings/Python/DialectNVGPU.cpp
new file mode 100644
index 00000000000000..d0f7b2500085d1
--- /dev/null
+++ b/mlir/lib/Bindings/Python/DialectNVGPU.cpp
@@ -0,0 +1,46 @@
+//===- DialectNVGPU.cpp - Pybind module for NVGPU dialect API support ---===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "IRModule.h"
+#include "mlir-c/Dialect/NVGPU.h"
+#include "mlir-c/IR.h"
+#include "mlir/Bindings/Python/PybindAdaptors.h"
+#include "mlir/Dialect/NVGPU/IR/NVGPUDialect.h"
+#include "llvm/Support/raw_ostream.h"
+#include <cstdint>
+#include <pybind11/pybind11.h>
+
+namespace py = pybind11;
+using namespace llvm;
+using namespace mlir;
+using namespace mlir::python;
+using namespace mlir::python::adaptors;
+
+static void populateDialectNvgpuSubmodule(const pybind11::module &m) {
+  auto nvgpuTensorMapDescriptorType = mlir_type_subclass(
+      m, "TensorMapDescriptorType", mlirTypeIsANVGPUTensorMapDescriptorType);
+
+  nvgpuTensorMapDescriptorType.def_classmethod(
+      "get",
+      [](py::object cls, MlirType tensorType, int swizzle, int l2promo, int oob,
+         int interleave, MlirContext ctx) {
+        return cls(mlirNVGPUTensorMapDescriptorTypeGet(
+            ctx, tensorType, swizzle, l2promo, oob, interleave));
+      },
+      "Gets an instance of TensorMapDescriptorType in the same context as "
+      "the provided tensor type.",
+      py::arg("cls"), py::arg("tensor_type"), py::arg("swizzle"),
+      py::arg("l2promo"), py::arg("oob"), py::arg("interleave"),
+      py::arg("ctx") = py::none());
+}
+
+PYBIND11_MODULE(_mlirDialectsNvgpu, m) {
+  m.doc() = "MLIR NVGPU dialect.";
+
+  populateDialectNvgpuSubmodule(m);
+}
diff --git a/mlir/lib/CAPI/Dialect/NVGPU.cpp b/mlir/lib/CAPI/Dialect/NVGPU.cpp
index 02d10954a03776..7536a525e98778 100644
--- a/mlir/lib/CAPI/Dialect/NVGPU.cpp
+++ b/mlir/lib/CAPI/Dialect/NVGPU.cpp
@@ -9,5 +9,23 @@
 #include "mlir-c/Dialect/NVGPU.h"
 #include "mlir/CAPI/Registration.h"
 #include "mlir/Dialect/NVGPU/IR/NVGPUDialect.h"
+#include "mlir/IR/BuiltinTypes.h"
+
+using namespace mlir;
+using namespace mlir::nvgpu;
 
 MLIR_DEFINE_CAPI_DIALECT_REGISTRATION(NVGPU, nvgpu, mlir::nvgpu::NVGPUDialect)
+
+bool mlirTypeIsANVGPUTensorMapDescriptorType(MlirType type) {
+  return isa<nvgpu::TensorMapDescriptorType>(unwrap(type));
+}
+
+MlirType mlirNVGPUTensorMapDescriptorTypeGet(MlirContext ctx,
+                                             MlirType tensorType, int swizzle,
+                                             int l2promo, int oob,
+                                             int interleave) {
+  return wrap(nvgpu::TensorMapDescriptorType::get(
+      unwrap(ctx), cast<MemRefType>(unwrap(tensorType)),
+      TensorMapSwizzleKind(swizzle), TensorMapL2PromoKind(l2promo),
+      TensorMapOOBKind(oob), TensorMapInterleaveKind(interleave)));
+}
diff --git a/mlir/python/CMakeLists.txt b/mlir/python/CMakeLists.txt
index c27ee688a04087..0a2dc0754c09d0 100644
--- a/mlir/python/CMakeLists.txt
+++ b/mlir/python/CMakeLists.txt
@@ -524,6 +524,19 @@ declare_mlir_python_extension(MLIRPythonExtension.Dialects.Quant.Pybind
     MLIRCAPIQuant
 )
 
+declare_mlir_python_extension(MLIRPythonExtension.Dialects.NVGPU.Pybind
+  MODULE_NAME _mlirDialectsNvgpu
+  ADD_TO_PARENT MLIRPythonSources.Dialects.nvgpu
+  ROOT_DIR "${PYTHON_SOURCE_DIR}"
+  SOURCES
+    DialectNVGPU.cpp
+  PRIVATE_LINK_LIBS
+    LLVMSupport
+  EMBED_CAPI_LINK_LIBS
+    MLIRCAPIIR
+    MLIRCAPINVGPU
+)
+
 declare_mlir_python_extension(MLIRPythonExtension.Dialects.PDL.Pybind
   MODULE_NAME _mlirDialectsPDL
   ADD_TO_PARENT MLIRPythonSources.Dialects.pdl
diff --git a/mlir/python/mlir/dialects/nvgpu.py b/mlir/python/mlir/dialects/nvgpu.py
index 2f6993b768ca53..e93fc956a10dd5 100644
--- a/mlir/python/mlir/dialects/nvgpu.py
+++ b/mlir/python/mlir/dialects/nvgpu.py
@@ -4,3 +4,4 @@
 
 from ._nvgpu_ops_gen import *
 from ._nvgpu_enum_gen import *
+from .._mlir_libs._mlirDialectsNvgpu import *
\ No newline at end of file
diff --git a/mlir/test/python/dialects/nvgpu.py b/mlir/test/python/dialects/nvgpu.py
index 3158388f0e6869..2b2fec587352af 100644
--- a/mlir/test/python/dialects/nvgpu.py
+++ b/mlir/test/python/dialects/nvgpu.py
@@ -14,6 +14,19 @@ def constructAndPrintInModule(f):
         print(module)
     return f
 
+# CHECK-LABEL: testTypes
+@constructAndPrintInModule
+def testTypes():
+    tensorType = MemRefType.get((128,64), F16Type.get(), memory_space=Attribute.parse("3"))
+    # CHECK: !nvgpu.tensormap.descriptor<tensor = memref<128x64xf16, 3>, swizzle = swizzle_128b, l2promo = l2promo_256b, oob = nan, interleave = none>
+    tma_desc = nvgpu.TensorMapDescriptorType.get(
+                tensorType,
+                nvgpu.TensorMapSwizzleKind.SWIZZLE_128B,
+                nvgpu.TensorMapL2PromoKind.L2PROMO_256B,
+                nvgpu.TensorMapOOBKind.OOB_NAN,
+                nvgpu.TensorMapInterleaveKind.INTERLEAVE_NONE)
+    print(tma_desc)
+
 
 # CHECK-LABEL: testSmoke
 @constructAndPrintInModule

github-actions bot commented Apr 16, 2024

✅ With the latest revision this PR passed the Python code formatter.

Contributor

@makslevental makslevental left a comment

LGTM modulo the small naming nits (I can now delete my same parsed thing 😃)

Comment on lines 32 to 34
py::arg("cls"), py::arg("tensor_type"), py::arg("swizzle"),
py::arg("l2promo"), py::arg("oob"), py::arg("interleave"),
py::arg("ctx") = py::none());
Contributor

@makslevental makslevental Apr 16, 2024


If you wanted to be ambitious you could define a couple of properties so this type isn't completely opaque, like this:

operationType.def_property_readonly(
but if there's no current need for introspection we can leave it until someone asks for it.

Member Author


Looks cool! Is the advantage here that we don't have to hardcode parameter names like l2promo or swizzle?

Contributor


Well, that would be a def_property, which I don't think is possible/supported for attributes, but otherwise the benefit is just that you could do descriptor.swizzle and get back just the swizzle type. Just yesterday I put this together for gpu.object.
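
For illustration, a rough sketch of what such a read-only property could look like in DialectNVGPU.cpp, following the def_property_readonly pattern suggested above. This is not part of the PR: the C API getter mlirNVGPUTensorMapDescriptorTypeGetSwizzle used here is hypothetical and would have to be added to the NVGPU C API alongside it:

    // Hypothetical sketch only: expose the swizzle kind so the type is not opaque.
    nvgpuTensorMapDescriptorType.def_property_readonly(
        "swizzle", [](MlirType self) {
          // Hypothetical C API getter returning the raw swizzle kind as an int.
          return mlirNVGPUTensorMapDescriptorTypeGetSwizzle(self);
        });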
