
[NVPTX] Add TMA Bulk Copy Intrinsics #138679


Merged

Conversation

durga4github
Contributor

@durga4github durga4github commented May 6, 2025

This patch adds a new variant of TMA Bulk Copy
intrinsics introduced in sm100+. This variant
has an additional byte_mask to select the bytes
for the copy operation.

  • Selection is now done entirely through TableGen,
    so this patch removes the corresponding
    SelectCpAsyncBulkS2G() function.
  • lit tests are verified with a cuda-12.8 ptxas
    executable.

PTX Spec link:
https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-bulk-copy
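The byte-mask semantics described above can be modeled with a short sketch (a hypothetical helper for illustration, not part of the patch): within every 16-byte chunk of the source, byte i is copied to the destination only when bit i of the 16-bit mask is set.

```python
def masked_bulk_copy(dst: bytearray, src: bytes, size: int, mask: int) -> None:
    # Illustrative model of the .cp_mask copy semantics: for each
    # 16-byte chunk, byte i is copied iff bit i of the mask is set.
    assert size % 16 == 0, "copy size must be a multiple of 16 bytes"
    assert 0 <= mask <= 0xFFFF, "byte mask is 16 bits wide"
    for chunk in range(0, size, 16):
        for i in range(16):
            if mask & (1 << i):
                dst[chunk + i] = src[chunk + i]
```

For example, a mask of 0x00FF copies only the lower 8 bytes of each 16-byte chunk, leaving the other destination bytes untouched.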

@llvmbot
Member

llvmbot commented May 6, 2025

@llvm/pr-subscribers-llvm-ir

Author: Durgadoss R (durga4github)

Changes

This patch adds a new variant of TMA Bulk Copy
intrinsics introduced in sm100+. This variant
has an additional byte_mask to select the bytes
for the copy operation. This patch adds the
relevant documentation and lit tests as well.

PTX Spec link:
https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-bulk-copy


Full diff: https://github.com/llvm/llvm-project/pull/138679.diff

6 Files Affected:

  • (modified) llvm/docs/NVPTXUsage.rst (+5-1)
  • (modified) llvm/include/llvm/IR/IntrinsicsNVVM.td (+14)
  • (modified) llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp (+36-15)
  • (modified) llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.h (+1-1)
  • (modified) llvm/lib/Target/NVPTX/NVPTXIntrinsics.td (+21-10)
  • (added) llvm/test/CodeGen/NVPTX/cp-async-bulk-s2g-sm100.ll (+46)
diff --git a/llvm/docs/NVPTXUsage.rst b/llvm/docs/NVPTXUsage.rst
index c1426823d87af..2a091e27358eb 100644
--- a/llvm/docs/NVPTXUsage.rst
+++ b/llvm/docs/NVPTXUsage.rst
@@ -616,6 +616,7 @@ Syntax:
 .. code-block:: llvm
 
   declare void @llvm.nvvm.cp.async.bulk.shared.cta.to.global(ptr addrspace(1) %dst, ptr addrspace(3) %src, i32 %size, i64 %ch, i1 %flag_ch)
+  declare void @llvm.nvvm.cp.async.bulk.shared.cta.to.global.bytemask(..., i32 %size, i16 %mask, i64 %ch, i1 %flag_ch)
 
 Overview:
 """""""""
@@ -624,7 +625,10 @@ The '``@llvm.nvvm.cp.async.bulk.shared.cta.to.global``' intrinsic
 corresponds to the ``cp.async.bulk.global.shared::cta.*`` set of PTX
 instructions. These instructions initiate an asynchronous copy from
 shared::cta to global memory. The 32-bit operand ``%size`` specifies
-the amount of memory to be copied and it must be a multiple of 16.
 +the amount of memory to be copied (in bytes) and it must be a multiple
 +of 16. For the ``.bytemask`` variant, the i-th bit of the 16-bit wide
 +``%mask`` operand specifies whether the i-th byte of each 16-byte wide
 +chunk of source data is copied to the destination.
 
 * The last argument to these intrinsics is a boolean flag
   indicating support for cache_hint. This flag argument must
diff --git a/llvm/include/llvm/IR/IntrinsicsNVVM.td b/llvm/include/llvm/IR/IntrinsicsNVVM.td
index 8b87822d3fdda..abe1395bb9f47 100644
--- a/llvm/include/llvm/IR/IntrinsicsNVVM.td
+++ b/llvm/include/llvm/IR/IntrinsicsNVVM.td
@@ -5323,6 +5323,20 @@ def int_nvvm_cp_async_bulk_shared_cta_to_global
        NoCapture<ArgIndex<0>>, NoCapture<ArgIndex<1>>,
        ImmArg<ArgIndex<4>>]>;
 
+// From Shared CTA to Global memory with bytemask
+def int_nvvm_cp_async_bulk_shared_cta_to_global_bytemask
+  : DefaultAttrsIntrinsic<[],
+      [llvm_global_ptr_ty, // dst_gmem_ptr
+       llvm_shared_ptr_ty, // src_smem_ptr
+       llvm_i32_ty,        // copy_size
+       llvm_i16_ty,        // byte_mask
+       llvm_i64_ty,        // cache_hint
+       llvm_i1_ty],        // Flag for cache_hint
+      [IntrConvergent, IntrArgMemOnly,
+       WriteOnly<ArgIndex<0>>, ReadOnly<ArgIndex<1>>,
+       NoCapture<ArgIndex<0>>, NoCapture<ArgIndex<1>>,
+       ImmArg<ArgIndex<5>>]>;
+
 // Intrinsics for Bulk Copy Prefetch L2
 def int_nvvm_cp_async_bulk_prefetch_L2
   : DefaultAttrsIntrinsic<[],
diff --git a/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp b/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp
index 6f6084b99dda2..fd352598fdaf4 100644
--- a/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp
+++ b/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.cpp
@@ -2720,28 +2720,46 @@ void NVPTXDAGToDAGISel::SelectCpAsyncBulkTensorReduceCommon(SDNode *N,
   ReplaceNode(N, CurDAG->getMachineNode(Opcode, DL, N->getVTList(), Ops));
 }
 
-void NVPTXDAGToDAGISel::SelectCpAsyncBulkS2G(SDNode *N) {
+void NVPTXDAGToDAGISel::SelectCpAsyncBulkS2GCommon(SDNode *N, bool HasMask) {
   // We have {Chain, Intrinsic-ID} followed by the actual intrinsic args:
-  // dst, src, size, cache_hint, cache_hint_flag
+  // dst, src, size, mask, cache_hint, cache_hint_flag
   // NumOperands = {Chain, IID} + {Actual intrinsic args}
-  //             = {2}          + {5}
+  //             = {2}          + {6}
   size_t NumOps = N->getNumOperands();
   bool IsCacheHint = N->getConstantOperandVal(NumOps - 1) == 1;
-  size_t NumArgs = IsCacheHint ? 4 : 3; // src, dst, size, cache_hint
+  size_t CacheHintIdx = NumOps - 2;
 
   SDLoc DL(N);
-  SmallVector<SDValue, 8> Ops(N->ops().slice(2, NumArgs));
-  Ops.push_back(N->getOperand(0)); // Chain operand
+  SDValue Offset, Base;
+  SelectADDR(N->getOperand(3), Base, Offset); // src
 
-  bool IsShared32 =
-      CurDAG->getDataLayout().getPointerSizeInBits(ADDRESS_SPACE_SHARED) == 32;
-  unsigned Opcode;
+  SmallVector<SDValue, 8> Ops;
+  // BaseArgs: {dst, src, size}
+  Ops.push_back(N->getOperand(2)); // dst
 +  Ops.push_back(Base);             // src base
 +  Ops.push_back(Offset);           // src offset
+  Ops.push_back(N->getOperand(4)); // size
+
+  // Push Mask operand, if available
+  if (HasMask)
+    Ops.push_back(N->getOperand(CacheHintIdx - 1));
+
+  // Push CacheHint operand, if available
   if (IsCacheHint)
-    Opcode = IsShared32 ? NVPTX::CP_ASYNC_BULK_S2G_SHARED32_CH
-                        : NVPTX::CP_ASYNC_BULK_S2G_CH;
-  else
-    Opcode = IsShared32 ? NVPTX::CP_ASYNC_BULK_S2G_SHARED32
-                        : NVPTX::CP_ASYNC_BULK_S2G;
+    Ops.push_back(N->getOperand(CacheHintIdx));
+
+  // Finally, the chain operand
+  Ops.push_back(N->getOperand(0));
+
+  unsigned Opcode = [&]() {
+    if (HasMask && IsCacheHint)
+      return NVPTX::CP_ASYNC_BULK_S2G_BM_CH;
+    if (HasMask)
+      return NVPTX::CP_ASYNC_BULK_S2G_BM;
+    if (IsCacheHint)
+      return NVPTX::CP_ASYNC_BULK_S2G_CH;
+    return NVPTX::CP_ASYNC_BULK_S2G;
+  }();
   ReplaceNode(N, CurDAG->getMachineNode(Opcode, DL, N->getVTList(), Ops));
 }
 
@@ -2928,7 +2946,10 @@ bool NVPTXDAGToDAGISel::tryIntrinsicVoid(SDNode *N) {
     SelectCpAsyncBulkG2S(N);
     return true;
   case Intrinsic::nvvm_cp_async_bulk_shared_cta_to_global:
-    SelectCpAsyncBulkS2G(N);
+    SelectCpAsyncBulkS2GCommon(N);
+    return true;
+  case Intrinsic::nvvm_cp_async_bulk_shared_cta_to_global_bytemask:
+    SelectCpAsyncBulkS2GCommon(N, /*HasMask=*/true);
     return true;
   case Intrinsic::nvvm_cp_async_bulk_prefetch_L2:
     SelectCpAsyncBulkPrefetchL2(N);
diff --git a/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.h b/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.h
index 23cbd458571a0..53f6c060405e7 100644
--- a/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.h
+++ b/llvm/lib/Target/NVPTX/NVPTXISelDAGToDAG.h
@@ -93,7 +93,7 @@ class LLVM_LIBRARY_VISIBILITY NVPTXDAGToDAGISel : public SelectionDAGISel {
   void SelectV2I64toI128(SDNode *N);
   void SelectI128toV2I64(SDNode *N);
   void SelectCpAsyncBulkG2S(SDNode *N);
-  void SelectCpAsyncBulkS2G(SDNode *N);
+  void SelectCpAsyncBulkS2GCommon(SDNode *N, bool HasMask = false);
   void SelectCpAsyncBulkPrefetchL2(SDNode *N);
   void SelectCpAsyncBulkTensorG2SCommon(SDNode *N, bool IsIm2Col = false);
   void SelectCpAsyncBulkTensorS2GCommon(SDNode *N, bool IsIm2Col = false);
diff --git a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
index 3eedb43e4c81a..65f1b524aec20 100644
--- a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
+++ b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
@@ -516,6 +516,9 @@ class CpAsyncBulkStr<bit mc, bit ch> {
   string S2G = "cp.async.bulk.global.shared::cta.bulk_group"
                # !if(ch, ".L2::cache_hint", "");
 
+  // Shared to Global memory with bytemask
+  string S2G_BM = S2G # ".cp_mask";
+
   // Global to Shared cluster memory
   string G2S = "cp.async.bulk.shared::cluster.global.mbarrier::complete_tx::bytes"
                # !if(mc, ".multicast::cluster", "")
@@ -525,18 +528,26 @@ class CpAsyncBulkStr<bit mc, bit ch> {
   string C2C = "cp.async.bulk.shared::cluster.shared::cta.mbarrier::complete_tx::bytes";
 }
 
-multiclass CP_ASYNC_BULK_S2G<NVPTXRegClass rc> {
-  def NAME: NVPTXInst<(outs),
-            (ins Int64Regs:$dst, rc:$src, Int32Regs:$size),
+def CP_ASYNC_BULK_S2G : NVPTXInst<(outs),
+            (ins Int64Regs:$dst, ADDR:$src, Int32Regs:$size),
             !strconcat(CpAsyncBulkStr<0, 0>.S2G, " [$dst], [$src], $size;"), []>,
             Requires<[hasPTX<80>, hasSM<90>]>;
-  def NAME # _CH: NVPTXInst<(outs),
-                  (ins Int64Regs:$dst, rc:$src, Int32Regs:$size, Int64Regs:$ch),
-                  !strconcat(CpAsyncBulkStr<0, 1>.S2G, " [$dst], [$src], $size, $ch;"), []>,
-                  Requires<[hasPTX<80>, hasSM<90>]>;
-}
-defm CP_ASYNC_BULK_S2G : CP_ASYNC_BULK_S2G<Int64Regs>;
-defm CP_ASYNC_BULK_S2G_SHARED32 : CP_ASYNC_BULK_S2G<Int32Regs>;
+
+def CP_ASYNC_BULK_S2G_CH : NVPTXInst<(outs),
+            (ins Int64Regs:$dst, ADDR:$src, Int32Regs:$size, Int64Regs:$ch),
+            !strconcat(CpAsyncBulkStr<0, 1>.S2G, " [$dst], [$src], $size, $ch;"), []>,
+            Requires<[hasPTX<80>, hasSM<90>]>;
+
+// Variants with bytemask
+def CP_ASYNC_BULK_S2G_BM : NVPTXInst<(outs),
+                (ins Int64Regs:$dst, ADDR:$src, Int32Regs:$size, Int16Regs:$mask),
+                !strconcat(CpAsyncBulkStr<0, 0>.S2G_BM, " [$dst], [$src], $size, $mask;"), []>,
+                Requires<[hasPTX<86>, hasSM<100>]>;
+
+def CP_ASYNC_BULK_S2G_BM_CH : NVPTXInst<(outs),
+                   (ins Int64Regs:$dst, ADDR:$src, Int32Regs:$size, Int16Regs:$mask, Int64Regs:$ch),
+                   !strconcat(CpAsyncBulkStr<0, 1>.S2G_BM, " [$dst], [$src], $size, $ch, $mask;"), []>,
+                   Requires<[hasPTX<86>, hasSM<100>]>;
 
 multiclass CP_ASYNC_BULK_G2S<NVPTXRegClass rc> {
   def NAME: NVPTXInst<(outs),
diff --git a/llvm/test/CodeGen/NVPTX/cp-async-bulk-s2g-sm100.ll b/llvm/test/CodeGen/NVPTX/cp-async-bulk-s2g-sm100.ll
new file mode 100644
index 0000000000000..c2e2b26ed6882
--- /dev/null
+++ b/llvm/test/CodeGen/NVPTX/cp-async-bulk-s2g-sm100.ll
@@ -0,0 +1,46 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc < %s -mtriple=nvptx64 -mcpu=sm_100 -mattr=+ptx86| FileCheck --check-prefixes=CHECK-PTX64 %s
+; RUN: llc < %s -mtriple=nvptx64 -mcpu=sm_100 -mattr=+ptx86 --nvptx-short-ptr| FileCheck --check-prefixes=CHECK-PTX-SHARED32 %s
+; RUN: %if ptxas-12.8 %{ llc < %s -mtriple=nvptx64 -mcpu=sm_100 -mattr=+ptx86| %ptxas-verify -arch=sm_100 %}
+; RUN: %if ptxas-12.8 %{ llc < %s -mtriple=nvptx64 -mcpu=sm_100 -mattr=+ptx86 --nvptx-short-ptr| %ptxas-verify -arch=sm_100 %}
+
+target triple = "nvptx64-nvidia-cuda"
+
+declare void @llvm.nvvm.cp.async.bulk.shared.cta.to.global.bytemask(ptr addrspace(1), ptr addrspace(3), i32, i16, i64, i1)
+
+define void @cp_async_bulk_s2g_bytemask(ptr addrspace(1) %dst, ptr addrspace(3) %src, i32 %size, i16 %mask, i64 %ch) {
+; CHECK-PTX64-LABEL: cp_async_bulk_s2g_bytemask(
+; CHECK-PTX64:       {
+; CHECK-PTX64-NEXT:    .reg .b16 %rs<2>;
+; CHECK-PTX64-NEXT:    .reg .b32 %r<2>;
+; CHECK-PTX64-NEXT:    .reg .b64 %rd<4>;
+; CHECK-PTX64-EMPTY:
+; CHECK-PTX64-NEXT:  // %bb.0:
+; CHECK-PTX64-NEXT:    ld.param.u64 %rd1, [cp_async_bulk_s2g_bytemask_param_0];
+; CHECK-PTX64-NEXT:    ld.param.u64 %rd2, [cp_async_bulk_s2g_bytemask_param_1];
+; CHECK-PTX64-NEXT:    ld.param.u32 %r1, [cp_async_bulk_s2g_bytemask_param_2];
+; CHECK-PTX64-NEXT:    ld.param.u16 %rs1, [cp_async_bulk_s2g_bytemask_param_3];
+; CHECK-PTX64-NEXT:    ld.param.u64 %rd3, [cp_async_bulk_s2g_bytemask_param_4];
+; CHECK-PTX64-NEXT:    cp.async.bulk.global.shared::cta.bulk_group.L2::cache_hint.cp_mask [%rd1], [%rd2], %r1, %rd3, %rs1;
+; CHECK-PTX64-NEXT:    cp.async.bulk.global.shared::cta.bulk_group.cp_mask [%rd1], [%rd2], %r1, %rs1;
+; CHECK-PTX64-NEXT:    ret;
+;
+; CHECK-PTX-SHARED32-LABEL: cp_async_bulk_s2g_bytemask(
+; CHECK-PTX-SHARED32:       {
+; CHECK-PTX-SHARED32-NEXT:    .reg .b16 %rs<2>;
+; CHECK-PTX-SHARED32-NEXT:    .reg .b32 %r<3>;
+; CHECK-PTX-SHARED32-NEXT:    .reg .b64 %rd<3>;
+; CHECK-PTX-SHARED32-EMPTY:
+; CHECK-PTX-SHARED32-NEXT:  // %bb.0:
+; CHECK-PTX-SHARED32-NEXT:    ld.param.u64 %rd1, [cp_async_bulk_s2g_bytemask_param_0];
+; CHECK-PTX-SHARED32-NEXT:    ld.param.u32 %r1, [cp_async_bulk_s2g_bytemask_param_1];
+; CHECK-PTX-SHARED32-NEXT:    ld.param.u32 %r2, [cp_async_bulk_s2g_bytemask_param_2];
+; CHECK-PTX-SHARED32-NEXT:    ld.param.u16 %rs1, [cp_async_bulk_s2g_bytemask_param_3];
+; CHECK-PTX-SHARED32-NEXT:    ld.param.u64 %rd2, [cp_async_bulk_s2g_bytemask_param_4];
+; CHECK-PTX-SHARED32-NEXT:    cp.async.bulk.global.shared::cta.bulk_group.L2::cache_hint.cp_mask [%rd1], [%r1], %r2, %rd2, %rs1;
+; CHECK-PTX-SHARED32-NEXT:    cp.async.bulk.global.shared::cta.bulk_group.cp_mask [%rd1], [%r1], %r2, %rs1;
+; CHECK-PTX-SHARED32-NEXT:    ret;
+  tail call void @llvm.nvvm.cp.async.bulk.shared.cta.to.global.bytemask(ptr addrspace(1) %dst, ptr addrspace(3) %src, i32 %size, i16 %mask, i64 %ch, i1 1)
+  tail call void @llvm.nvvm.cp.async.bulk.shared.cta.to.global.bytemask(ptr addrspace(1) %dst, ptr addrspace(3) %src, i32 %size, i16 %mask, i64 0, i1 0)
+  ret void
+}
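The operand assembly and opcode choice performed by the new SelectCpAsyncBulkS2GCommon can be sketched as follows (the helper and argument names are illustrative; only the opcode identifiers come from the patch). Note the operand order: base args, then the optional mask, then the optional cache hint, with the chain operand last.

```python
def build_s2g_machine_node(dst, base, offset, size, chain,
                           mask=None, cache_hint=None):
    # Sketch of SelectCpAsyncBulkS2GCommon: assemble the operand list
    # and pick one of the four NVPTX opcodes from the patch.
    ops = [dst, base, offset, size]  # base args: {dst, src, size}
    if mask is not None:
        ops.append(mask)             # optional byte mask
    if cache_hint is not None:
        ops.append(cache_hint)       # optional L2 cache hint
    ops.append(chain)                # chain operand goes last
    if mask is not None and cache_hint is not None:
        opcode = "CP_ASYNC_BULK_S2G_BM_CH"
    elif mask is not None:
        opcode = "CP_ASYNC_BULK_S2G_BM"
    elif cache_hint is not None:
        opcode = "CP_ASYNC_BULK_S2G_CH"
    else:
        opcode = "CP_ASYNC_BULK_S2G"
    return opcode, ops
```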

@llvmbot
Member

llvmbot commented May 6, 2025

@llvm/pr-subscribers-backend-nvptx

Author: Durgadoss R (durga4github)


@durga4github durga4github requested a review from Artem-B May 6, 2025 12:04
@durga4github durga4github force-pushed the durgadossr/nvptx_tma_copy_bytemask branch 2 times, most recently from 5c97fa7 to 865636a on May 8, 2025 08:07
@durga4github durga4github force-pushed the durgadossr/nvptx_tma_copy_bytemask branch 2 times, most recently from 63537ec to c6fea06 on May 12, 2025 06:12
@durga4github
Contributor Author

@Artem-B / @AlexMaclean, ping for review on the latest revision.

Member

@AlexMaclean AlexMaclean left a comment


LGTM

@durga4github durga4github force-pushed the durgadossr/nvptx_tma_copy_bytemask branch from c6fea06 to b69e699 on May 14, 2025 07:48
This patch adds a new variant of TMA Bulk Copy
intrinsics introduced in sm100+. This variant
has an additional byte_mask to select the bytes
for the copy operation.

* Selection is all done through tablegen now.
  So, this patch removes the corresponding
  SelectCpAsyncBulkS2G() function.
* lit tests are verified with a cuda-12.8 ptxas
  executable.

Signed-off-by: Durgadoss R <[email protected]>
@durga4github durga4github force-pushed the durgadossr/nvptx_tma_copy_bytemask branch from b69e699 to 2c4cce1 on May 15, 2025 09:22
@durga4github
Contributor Author

Rebased on top of the latest main.

@durga4github durga4github merged commit c507a08 into llvm:main May 15, 2025
12 checks passed
@durga4github durga4github deleted the durgadossr/nvptx_tma_copy_bytemask branch May 15, 2025 10:38
@llvm-ci
Collaborator

llvm-ci commented May 15, 2025

LLVM Buildbot has detected a new failure on builder lldb-x86_64-debian running on lldb-x86_64-debian while building llvm at step 6 "test".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/162/builds/22455

Here is the relevant piece of the build log for reference:
Step 6 (test) failure: build (failure)
...
UNSUPPORTED: lldb-shell :: SymbolFile/PDB/vbases.test (2924 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Python/Crashlog/text.test (2925 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Lua/watchpoint_callback.test (2926 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Lua/fail_breakpoint_oneline.test (2927 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Lua/breakpoint_function_callback.test (2928 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Lua/command_script_import.test (2929 of 2935)
UNSUPPORTED: lldb-shell :: ScriptInterpreter/Python/Crashlog/interactive_crashlog_json.test (2930 of 2935)
PASS: lldb-api :: api/multithreaded/TestMultithreaded.py (2931 of 2935)
PASS: lldb-api :: terminal/TestEditlineCompletions.py (2932 of 2935)
UNRESOLVED: lldb-api :: tools/lldb-dap/launch/TestDAP_launch.py (2933 of 2935)
******************** TEST 'lldb-api :: tools/lldb-dap/launch/TestDAP_launch.py' FAILED ********************
Script:
--
/usr/bin/python3 /home/worker/2.0.1/lldb-x86_64-debian/llvm-project/lldb/test/API/dotest.py -u CXXFLAGS -u CFLAGS --env LLVM_LIBS_DIR=/home/worker/2.0.1/lldb-x86_64-debian/build/./lib --env LLVM_INCLUDE_DIR=/home/worker/2.0.1/lldb-x86_64-debian/build/include --env LLVM_TOOLS_DIR=/home/worker/2.0.1/lldb-x86_64-debian/build/./bin --arch x86_64 --build-dir /home/worker/2.0.1/lldb-x86_64-debian/build/lldb-test-build.noindex --lldb-module-cache-dir /home/worker/2.0.1/lldb-x86_64-debian/build/lldb-test-build.noindex/module-cache-lldb/lldb-api --clang-module-cache-dir /home/worker/2.0.1/lldb-x86_64-debian/build/lldb-test-build.noindex/module-cache-clang/lldb-api --executable /home/worker/2.0.1/lldb-x86_64-debian/build/./bin/lldb --compiler /home/worker/2.0.1/lldb-x86_64-debian/build/./bin/clang --dsymutil /home/worker/2.0.1/lldb-x86_64-debian/build/./bin/dsymutil --make /usr/bin/gmake --llvm-tools-dir /home/worker/2.0.1/lldb-x86_64-debian/build/./bin --lldb-obj-root /home/worker/2.0.1/lldb-x86_64-debian/build/tools/lldb --lldb-libs-dir /home/worker/2.0.1/lldb-x86_64-debian/build/./lib -t /home/worker/2.0.1/lldb-x86_64-debian/llvm-project/lldb/test/API/tools/lldb-dap/launch -p TestDAP_launch.py
--
Exit Code: 1

Command Output (stdout):
--
lldb version 21.0.0git (https://github.com/llvm/llvm-project.git revision c507a0830df2e4fd0c234eee035aac2109de6d6e)
  clang revision c507a0830df2e4fd0c234eee035aac2109de6d6e
  llvm revision c507a0830df2e4fd0c234eee035aac2109de6d6e
Skipping the following test categories: ['libc++', 'dsym', 'gmodules', 'debugserver', 'objc']

--
Command Output (stderr):
--
Change dir to: /home/worker/2.0.1/lldb-x86_64-debian/llvm-project/lldb/test/API/tools/lldb-dap/launch
runCmd: settings clear --all

output: 

runCmd: settings set symbols.enable-external-lookup false

output: 

runCmd: settings set target.inherit-tcc true

output: 

runCmd: settings set target.disable-aslr false

output: 

runCmd: settings set target.detach-on-error false

output: 

runCmd: settings set target.auto-apply-fixits false

TIFitis pushed a commit to TIFitis/llvm-project that referenced this pull request May 19, 2025
This patch adds a new variant of TMA Bulk Copy
intrinsics introduced in sm100+. This variant
has an additional byte_mask to select the bytes
for the copy operation.

* Selection is all done through table-gen now.
  So, this patch removes the corresponding
  SelectCpAsyncBulkS2G() function.
* lit tests are verified with a cuda-12.8 ptxas
  executable.

PTX Spec link:

https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-bulk-copy

Signed-off-by: Durgadoss R <[email protected]>