
[NVPTX] Add 'activemask' builtin and intrinsic support #79768


Merged
merged 3 commits into llvm:main on Jan 29, 2024

Conversation

jhuber6
Contributor

@jhuber6 jhuber6 commented Jan 28, 2024

Summary:
This patch adds support for getting the 'activemask' instruction's value
without needing to use inline assembly. See the relevant PTX reference
for details.

https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#parallel-synchronization-and-communication-instructions-activemask
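
For context, a minimal sketch of what this replaces (illustrative only; the helper names are hypothetical, and the builtin requires PTX 6.2 or newer):

// Previously, reading the mask of currently active threads in the warp
// required opaque inline assembly:
__device__ unsigned mask_via_asm() {
  unsigned mask;
  asm volatile("activemask.b32 %0;" : "=r"(mask));
  return mask;
}

// With this patch, the builtin lowers through the llvm.nvvm.activemask
// intrinsic to a single activemask.b32 instruction:
__device__ unsigned mask_via_builtin() { return __nvvm_activemask(); }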

@llvmbot llvmbot added the clang, clang:frontend, and llvm:ir labels Jan 28, 2024
@llvmbot
Member

llvmbot commented Jan 28, 2024

@llvm/pr-subscribers-clang

Author: Joseph Huber (jhuber6)


Full diff: https://github.com/llvm/llvm-project/pull/79768.diff

6 Files Affected:

  • (modified) clang/include/clang/Basic/BuiltinsNVPTX.def (+7-1)
  • (modified) clang/test/CodeGen/builtins-nvptx.c (+8)
  • (modified) llvm/include/llvm/IR/IntrinsicsNVVM.td (+8)
  • (modified) llvm/lib/Target/NVPTX/NVPTX.td (+2-2)
  • (modified) llvm/lib/Target/NVPTX/NVPTXIntrinsics.td (+6)
  • (added) llvm/test/CodeGen/NVPTX/activemask.ll (+38)
diff --git a/clang/include/clang/Basic/BuiltinsNVPTX.def b/clang/include/clang/Basic/BuiltinsNVPTX.def
index 0f2e8260143be78..506288547a15822 100644
--- a/clang/include/clang/Basic/BuiltinsNVPTX.def
+++ b/clang/include/clang/Basic/BuiltinsNVPTX.def
@@ -44,6 +44,7 @@
 #pragma push_macro("PTX42")
 #pragma push_macro("PTX60")
 #pragma push_macro("PTX61")
+#pragma push_macro("PTX62")
 #pragma push_macro("PTX63")
 #pragma push_macro("PTX64")
 #pragma push_macro("PTX65")
@@ -76,7 +77,8 @@
 #define PTX65 "ptx65|" PTX70
 #define PTX64 "ptx64|" PTX65
 #define PTX63 "ptx63|" PTX64
-#define PTX61 "ptx61|" PTX63
+#define PTX62 "ptx62|" PTX63
+#define PTX61 "ptx61|" PTX62
 #define PTX60 "ptx60|" PTX61
 #define PTX42 "ptx42|" PTX60
 
@@ -632,6 +634,9 @@ TARGET_BUILTIN(__nvvm_vote_any_sync, "bUib", "", PTX60)
 TARGET_BUILTIN(__nvvm_vote_uni_sync, "bUib", "", PTX60)
 TARGET_BUILTIN(__nvvm_vote_ballot_sync, "UiUib", "", PTX60)
 
+// Mask
+TARGET_BUILTIN(__nvvm_activemask, "i", "n", PTX62)
+
 // Match
 TARGET_BUILTIN(__nvvm_match_any_sync_i32, "UiUiUi", "", AND(SM_70,PTX60))
 TARGET_BUILTIN(__nvvm_match_any_sync_i64, "UiUiWi", "", AND(SM_70,PTX60))
@@ -1065,6 +1070,7 @@ TARGET_BUILTIN(__nvvm_getctarank_shared_cluster, "iv*3", "", AND(SM_90,PTX78))
 #pragma pop_macro("PTX42")
 #pragma pop_macro("PTX60")
 #pragma pop_macro("PTX61")
+#pragma pop_macro("PTX62")
 #pragma pop_macro("PTX63")
 #pragma pop_macro("PTX64")
 #pragma pop_macro("PTX65")
diff --git a/clang/test/CodeGen/builtins-nvptx.c b/clang/test/CodeGen/builtins-nvptx.c
index 353f3ebb608c2b1..e571d1cd61c41d9 100644
--- a/clang/test/CodeGen/builtins-nvptx.c
+++ b/clang/test/CodeGen/builtins-nvptx.c
@@ -165,6 +165,14 @@ __device__ void sync() {
 
 }
 
+__device__ void activemask() {
+
+// CHECK: call i32 @llvm.nvvm.activemask()
+
+  __nvvm_activemask();
+
+}
+
 
 // NVVM intrinsics
 
diff --git a/llvm/include/llvm/IR/IntrinsicsNVVM.td b/llvm/include/llvm/IR/IntrinsicsNVVM.td
index 5a5ba2592e1467e..0640fb1f74aa5eb 100644
--- a/llvm/include/llvm/IR/IntrinsicsNVVM.td
+++ b/llvm/include/llvm/IR/IntrinsicsNVVM.td
@@ -4599,6 +4599,14 @@ def int_nvvm_vote_ballot_sync :
             [IntrInaccessibleMemOnly, IntrConvergent, IntrNoCallback], "llvm.nvvm.vote.ballot.sync">,
   ClangBuiltin<"__nvvm_vote_ballot_sync">;
 
+//
+// ACTIVEMASK
+//
+def int_nvvm_activemask :
+  Intrinsic<[llvm_i32_ty], [],
+            [IntrInaccessibleMemOnly, IntrConvergent, IntrNoCallback], "llvm.nvvm.activemask">,
+  ClangBuiltin<"__nvvm_activemask">;
+
 //
 // MATCH.SYNC
 //
diff --git a/llvm/lib/Target/NVPTX/NVPTX.td b/llvm/lib/Target/NVPTX/NVPTX.td
index f2a4ce381b40b48..a2233d3882b236d 100644
--- a/llvm/lib/Target/NVPTX/NVPTX.td
+++ b/llvm/lib/Target/NVPTX/NVPTX.td
@@ -40,7 +40,7 @@ foreach sm = [20, 21, 30, 32, 35, 37, 50, 52, 53,
 
 def SM90a: FeatureSM<"90a", 901>;
 
-foreach version = [32, 40, 41, 42, 43, 50, 60, 61, 63, 64, 65,
+foreach version = [32, 40, 41, 42, 43, 50, 60, 61, 62, 63, 64, 65,
                    70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81, 82, 83] in
   def PTX#version: FeaturePTX<version>;
 
@@ -65,7 +65,7 @@ def : Proc<"sm_61", [SM61, PTX50]>;
 def : Proc<"sm_62", [SM62, PTX50]>;
 def : Proc<"sm_70", [SM70, PTX60]>;
 def : Proc<"sm_72", [SM72, PTX61]>;
-def : Proc<"sm_75", [SM75, PTX63]>;
+def : Proc<"sm_75", [SM75, PTX62, PTX63]>;
 def : Proc<"sm_80", [SM80, PTX70]>;
 def : Proc<"sm_86", [SM86, PTX71]>;
 def : Proc<"sm_87", [SM87, PTX74]>;
diff --git a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
index 33f1e4a43e072af..2df931597616566 100644
--- a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
+++ b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td
@@ -263,6 +263,12 @@ multiclass MATCH_ANY_SYNC<NVPTXRegClass regclass, string ptxtype, Intrinsic IntO
            Requires<[hasPTX<60>, hasSM<70>]>;
 }
 
+// activemask.b32
+def ACTIVEMASK : NVPTXInst<(outs Int32Regs:$dest), (ins),
+                    "activemask.b32 \t$dest;", 
+                    [(set Int32Regs:$dest, (int_nvvm_activemask))]>,
+                 Requires<[hasPTX<62>, hasSM<30>]>;
+
 defm MATCH_ANY_SYNC_32 : MATCH_ANY_SYNC<Int32Regs, "b32", int_nvvm_match_any_sync_i32,
                                         i32imm>;
 defm MATCH_ANY_SYNC_64 : MATCH_ANY_SYNC<Int64Regs, "b64", int_nvvm_match_any_sync_i64,
diff --git a/llvm/test/CodeGen/NVPTX/activemask.ll b/llvm/test/CodeGen/NVPTX/activemask.ll
new file mode 100644
index 000000000000000..1496b2ebdd44270
--- /dev/null
+++ b/llvm/test/CodeGen/NVPTX/activemask.ll
@@ -0,0 +1,38 @@
+; RUN: llc < %s -march=nvptx64 -O2 -mcpu=sm_52 -mattr=+ptx62 | FileCheck %s
+; RUN: %if ptxas %{ llc < %s -march=nvptx64 -mcpu=sm_52 -mattr=+ptx62 | %ptxas-verify %}
+
+declare i32 @llvm.nvvm.activemask()
+
+; CHECK-LABEL: activemask(
+;
+;      CHECK: activemask.b32  %[[REG:.+]];
+; CHECK-NEXT: st.param.b32    [func_retval0+0], %[[REG]];
+; CHECK-NEXT: ret;
+define dso_local i32 @activemask() {
+entry:
+  %mask = call i32 @llvm.nvvm.activemask()
+  ret i32 %mask
+}
+
+; CHECK-LABEL: convergent(
+;
+;      CHECK: activemask.b32  %[[REG:.+]];
+;      CHECK: activemask.b32  %[[REG]];
+;      CHECK: .param.b32    [func_retval0+0], %[[REG]];
+; CHECK-NEXT: ret;
+define dso_local i32 @convergent(i1 %cond) {
+entry:
+  br i1 %cond, label %if.else, label %if.then
+
+if.then:
+  %0 = call i32 @llvm.nvvm.activemask()
+  br label %if.end
+
+if.else:
+  %1 = call i32 @llvm.nvvm.activemask()
+  br label %if.end
+
+if.end:
+  %mask = phi i32 [ %0, %if.then ], [ %1, %if.else ]
+  ret i32 %mask
+}

@llvmbot
Member

llvmbot commented Jan 28, 2024

@llvm/pr-subscribers-llvm-ir

Author: Joseph Huber (jhuber6)

@jlebar
Member

jlebar commented Jan 29, 2024

Unlike the other PRs, this one has a CUDA function, __activemask(). Presumably we should make that one work by hacking our headers?

@jhuber6
Contributor Author

jhuber6 commented Jan 29, 2024

Unlike the other PRs, this one has a CUDA function, __activemask(). Presumably we should make that one work by hacking our headers?

That is currently defined here: https://github.com/llvm/llvm-project/blob/main/clang/lib/Headers/__clang_cuda_intrinsics.h#L214. I was planning on updating it to use the new intrinsic for newer versions. Alternatively, we could make __activemask the builtin and have it expand to both versions, but I'm somewhat averse to that since I feel we should target the instruction directly.
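
For illustration, one hypothetical shape for that header update (assuming the header's existing CUDA_VERSION guard; this is not what this PR itself changes):

__device__ inline unsigned __activemask() {
#if CUDA_VERSION < 9020
  // PTX 6.2 is unavailable; approximate with a full-warp ballot.
  return __nvvm_vote_ballot(1);
#else
  // Use the new builtin instead of the inline assembly.
  return __nvvm_activemask();
#endif
}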

@jlebar
Member

jlebar commented Jan 29, 2024

I was planning on updating it to use the new intrinsic for newer versions. Alternatively, we could make __activemask the builtin and have it expand to both versions, but I'm somewhat averse to that since I feel we should target the instruction directly.

Yes, I agree that the builtin shouldn't have a "polyfill". At least, the LLVM builtin should not have a polyfill -- I guess I'm neutral on whether the clang builtin does.

You can change clang in this same patch if you want, but if you want to do it separately, that's also fine by me. I'll approve this one. I think that covers all my outstanding review requests from you? LMK if I missed any.

@jhuber6
Contributor Author

jhuber6 commented Jan 29, 2024

I was planning on updating it to use the new intrinsic for newer versions. Alternatively, we could make __activemask the builtin and have it expand to both versions, but I'm somewhat averse to that since I feel we should target the instruction directly.

Yes, I agree that the builtin shouldn't have a "polyfill". At least, the LLVM builtin should not have a polyfill -- I guess I'm neutral on whether the clang builtin does.

You can change clang in this same patch if you want, but if you want to do it separately, that's also fine by me. I'll approve this one. I think that covers all my outstanding review requests from you? LMK if I missed any.

Thanks for the reviews. I'll probably have one for nanosleep coming in soonish, but for now I believe you've got it covered.

@Artem-B
Member

Artem-B commented Jan 29, 2024

'activemask' is a rather peculiar instruction which may not be a good candidate for exposing to LLVM.

The problem is that it can 'observe' past branch decisions and reflects the state of not-yet-reconverged conditional branches, which LLVM does not take into account. Opaque inline assembly is the sledgehammer that stops LLVM from doing anything fancy with it. The intrinsic will need appropriately conservative attributes, at the very least.

I think we've had a bug about that and, if I recall correctly, we could not come up with a good way to handle activemask. Let me try finding the details.
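
To make the hazard concrete, a minimal sketch (the function is hypothetical):

__device__ unsigned divergent_masks(bool cond) {
  unsigned mask;
  if (cond)
    mask = __nvvm_activemask(); // sees only the threads where cond was true
  else
    mask = __nvvm_activemask(); // sees only the threads where cond was false
  // Hoisting or merging the two calls above the branch would make both paths
  // observe the full pre-branch mask, silently changing the result.
  return mask;
}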

@Artem-B
Member

Artem-B commented Jan 29, 2024

https://bugs.llvm.org/show_bug.cgi?id=35249

@jhuber6
Contributor Author

jhuber6 commented Jan 29, 2024

https://bugs.llvm.org/show_bug.cgi?id=35249

Yeah, there are constant issues with convergence analysis. I included one of the tests to try to show that the calls won't merge given the convergent attribute, since this is a general issue for all of these intrinsics. In the past I've usually added instructions like wave-level syncs to prevent this. I think there are some extra attributes I can add here to prevent it fully.

I think we may need the IntrHasSideEffects attribute.
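
For illustration, that would look something like this in IntrinsicsNVVM.td (a sketch; the attribute set that actually landed may differ):

def int_nvvm_activemask :
  Intrinsic<[llvm_i32_ty], [],
            [IntrInaccessibleMemOnly, IntrConvergent, IntrNoCallback,
             IntrHasSideEffects], "llvm.nvvm.activemask">,
  ClangBuiltin<"__nvvm_activemask">;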

@jhuber6
Contributor Author

jhuber6 commented Jan 29, 2024

Added the side-effects attribute; I believe this better matches the current behavior of the inline asm.

@@ -65,7 +65,7 @@ def : Proc<"sm_61", [SM61, PTX50]>;
def : Proc<"sm_62", [SM62, PTX50]>;
def : Proc<"sm_70", [SM70, PTX60]>;
def : Proc<"sm_72", [SM72, PTX61]>;
def : Proc<"sm_75", [SM75, PTX63]>;
def : Proc<"sm_75", [SM75, PTX62, PTX63]>;
Member


Why are we adding PTX62 here?

According to the PTX docs, sm_75 was introduced in PTX ISA 6.3, in CUDA 10.0.

Contributor Author


Yeah, I wasn't sure where it should go. The docs specify the instruction requires PTX 6.2, but I couldn't find which SM that came with, so I just put it before PTX63. Maybe on sm_72?

Member


What are you trying to do with the PTX62 feature to start with? Why do you need to add it here at all?

In general, the features will be supplied externally. This particular place just sets the minimum required to support this particular GPU variant.

Contributor Author


Okay, so I can just get rid of it for this definition and it will still work? I could've just said it came with 63 and been lazy, I suppose.

Member


I'm a bit confused here. Constraints on the PTX version for a GPU and for instructions are independent. You need both satisfied in order to use a given instruction on a given GPU.

So, to use activemask on sm_75, you do need PTX63.
To use it on sm_52, you only need PTX62.

You do not need to change anything here. You already have correct predicates applied to the instruction itself and to the target builtin.
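
Concretely, mirroring the RUN lines in the new test (invocations illustrative):

# sm_52 does not imply PTX 6.2 by default, so the feature must be requested
# explicitly for activemask to be selectable:
llc -march=nvptx64 -mcpu=sm_52 -mattr=+ptx62 activemask.ll
# sm_75 already implies PTX 6.3, which satisfies the hasPTX<62> predicate:
llc -march=nvptx64 -mcpu=sm_75 activemask.ll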

Contributor Author


Okay, so I'll remove it from the definition here and just keep the PTX62 predicate. I don't have the fullest understanding of how this PTX stuff works.

Contributor Author


Should be fixed now; also, I added the one for nanosleep in #79888.

@jhuber6 jhuber6 merged commit d492faa into llvm:main Jan 29, 2024