[RISCV] Rename suffixes on VCPOP/VMSBF/VMSET/etc pseudos. NFC #119785

Merged: 1 commit merged into llvm:main from pr/bool-suffix on Dec 13, 2024

Conversation

@topperc (Collaborator) commented Dec 12, 2024

These are suffixed with B1, B2, B4, B8, B16, B32, or B64, which I think was supposed to match the naming of the vbool types from C, where the number is SEW/LMUL. So the smallest mask uses 64 and the largest uses 1. This provides a compact syntax for describing the 7 possible ratios between LMUL and SEW.

Previously, the instruction names used these suffixes in the opposite order.
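
For illustration only (this sketch is not part of the patch, and its names are hypothetical), the rule being adopted can be written out directly: the B<n> suffix mirrors vbool<n>_t, where n = SEW/LMUL and masks are treated as SEW=8.

// Standalone sketch of the vbool<n>_t-style suffix rule, assuming SEW = 8
// for masks. LMUL is expressed in eighths to avoid fractional arithmetic.
#include <cstdio>

int main() {
  const int LMULEighths[] = {1, 2, 4, 8, 16, 32, 64}; // LMUL = 1/8 ... 8
  const int SEW = 8;
  for (int Eighths : LMULEighths) {
    int N = SEW * 8 / Eighths; // n = SEW / LMUL
    std::printf("LMUL=%-5g -> vbool%d_t -> suffix B%d\n", Eighths / 8.0, N, N);
  }
  return 0;
}

So the fractional LMULs now get the large suffixes (MF8 -> B64) and the large LMULs get the small ones (M8 -> B1), matching the vbool types.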

@llvmbot (Member) commented Dec 12, 2024

@llvm/pr-subscribers-backend-risc-v

@llvm/pr-subscribers-llvm-globalisel

Author: Craig Topper (topperc)

Changes

These are suffixed with B1, B2, B4, B8, B16, B32, or B64, which I think was supposed to match the naming of the vbool types from C, where the number is SEW/LMUL. So the smallest mask uses 64 and the largest uses 1. This provides a compact syntax for describing the 7 possible ratios between LMUL and SEW.

Previously, the instruction names used these suffixes in the opposite order.
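
For quick orientation before the diff, here is a hypothetical summary (not code from this patch) of the concrete suffix swap it performs for each LMUL, using PseudoVMCLR_M_* as the example opcode; the same rename applies to the other mask pseudos touched here.

// Hypothetical summary of the per-LMUL suffix swap described above.
#include <cstdio>

struct SuffixRename {
  const char *LMUL;
  const char *Old; // suffix before this patch
  const char *New; // suffix after this patch (matches vbool<n>_t)
};

int main() {
  const SuffixRename Renames[] = {
      {"MF8", "B1", "B64"}, {"MF4", "B2", "B32"}, {"MF2", "B4", "B16"},
      {"M1", "B8", "B8"},   {"M2", "B16", "B4"},  {"M4", "B32", "B2"},
      {"M8", "B64", "B1"},
  };
  for (const SuffixRename &R : Renames)
    std::printf("LMUL %-3s: PseudoVMCLR_M_%-3s -> PseudoVMCLR_M_%s\n",
                R.LMUL, R.Old, R.New);
  return 0;
}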


Full diff: https://github.com/llvm/llvm-project/pull/119785.diff

7 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp (+12-12)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv32.mir (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv64.mir (+8-8)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv32.mir (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv64.mir (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir (+6-6)
diff --git a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
index c3922e38729dc3..7ae68ebadd3e85 100644
--- a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
@@ -1673,13 +1673,13 @@ void RISCVDAGToDAGISel::Select(SDNode *Node) {
     VMNANDOpcode = RISCV::PseudoVMNAND_MM_##suffix;                            \
     VMSetOpcode = RISCV::PseudoVMSET_M_##suffix_b;                             \
     break;
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F8, MF8, B1)
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F4, MF4, B2)
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F2, MF2, B4)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F8, MF8, B64)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F4, MF4, B32)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_F2, MF2, B16)
         CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_1, M1, B8)
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_2, M2, B16)
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_4, M4, B32)
-        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_8, M8, B64)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_2, M2, B4)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_4, M4, B2)
+        CASE_VMSLT_VMNAND_VMSET_OPCODES(LMUL_8, M8, B1)
 #undef CASE_VMSLT_VMNAND_VMSET_OPCODES
       }
       SDValue SEW = CurDAG->getTargetConstant(
@@ -1751,13 +1751,13 @@ void RISCVDAGToDAGISel::Select(SDNode *Node) {
     VMSGTMaskOpcode = IsUnsigned ? RISCV::PseudoVMSGTU_VX_##suffix##_MASK      \
                                  : RISCV::PseudoVMSGT_VX_##suffix##_MASK;      \
     break;
-        CASE_VMSLT_OPCODES(LMUL_F8, MF8, B1)
-        CASE_VMSLT_OPCODES(LMUL_F4, MF4, B2)
-        CASE_VMSLT_OPCODES(LMUL_F2, MF2, B4)
+        CASE_VMSLT_OPCODES(LMUL_F8, MF8, B64)
+        CASE_VMSLT_OPCODES(LMUL_F4, MF4, B32)
+        CASE_VMSLT_OPCODES(LMUL_F2, MF2, B16)
         CASE_VMSLT_OPCODES(LMUL_1, M1, B8)
-        CASE_VMSLT_OPCODES(LMUL_2, M2, B16)
-        CASE_VMSLT_OPCODES(LMUL_4, M4, B32)
-        CASE_VMSLT_OPCODES(LMUL_8, M8, B64)
+        CASE_VMSLT_OPCODES(LMUL_2, M2, B4)
+        CASE_VMSLT_OPCODES(LMUL_4, M4, B2)
+        CASE_VMSLT_OPCODES(LMUL_8, M8, B1)
 #undef CASE_VMSLT_OPCODES
       }
       // Mask operations use the LMUL from the mask type.
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
index 6c4e41711440e6..7e2d106a227bf1 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
@@ -415,13 +415,13 @@ class MTypeInfo<ValueType Mas, LMULInfo M, string Bx> {
 
 defset list<MTypeInfo> AllMasks = {
   // vbool<n>_t, <n> = SEW/LMUL, we assume SEW=8 and corresponding LMUL.
-  def : MTypeInfo<vbool64_t, V_MF8, "B1">;
-  def : MTypeInfo<vbool32_t, V_MF4, "B2">;
-  def : MTypeInfo<vbool16_t, V_MF2, "B4">;
+  def : MTypeInfo<vbool64_t, V_MF8, "B64">;
+  def : MTypeInfo<vbool32_t, V_MF4, "B32">;
+  def : MTypeInfo<vbool16_t, V_MF2, "B16">;
   def : MTypeInfo<vbool8_t, V_M1, "B8">;
-  def : MTypeInfo<vbool4_t, V_M2, "B16">;
-  def : MTypeInfo<vbool2_t, V_M4, "B32">;
-  def : MTypeInfo<vbool1_t, V_M8, "B64">;
+  def : MTypeInfo<vbool4_t, V_M2, "B4">;
+  def : MTypeInfo<vbool2_t, V_M4, "B2">;
+  def : MTypeInfo<vbool1_t, V_M8, "B1">;
 }
 
 class VTypeInfoToWide<VTypeInfo vti, VTypeInfo wti> {
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv32.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv32.mir
index 5600e351aa3987..7610ebe7ed026b 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv32.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv32.mir
@@ -11,8 +11,8 @@ body:             |
   bb.1:
     ; CHECK-LABEL: name: negative_vl
     ; CHECK: [[ADDI:%[0-9]+]]:gprnox0 = ADDI $x0, -2
-    ; CHECK-NEXT: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 [[ADDI]], 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK-NEXT: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 [[ADDI]], 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -2
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s32)
@@ -31,8 +31,8 @@ body:             |
     ; CHECK: liveins: $x10
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:gprnox0 = COPY $x10
-    ; CHECK-NEXT: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 [[COPY]], 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK-NEXT: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 [[COPY]], 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = COPY $x10
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s32)
@@ -48,8 +48,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: nonzero_vl
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 1
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s32)
@@ -65,8 +65,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: zero_vl
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 0, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 0, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 0
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s32)
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv64.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv64.mir
index c2c0ed72be7b7c..de78ceb2f5e13c 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv64.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/render-vlop-rv64.mir
@@ -11,8 +11,8 @@ body:             |
   bb.1:
     ; CHECK-LABEL: name: negative_vl
     ; CHECK: [[ADDI:%[0-9]+]]:gprnox0 = ADDI $x0, -2
-    ; CHECK-NEXT: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 [[ADDI]], 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK-NEXT: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 [[ADDI]], 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -2
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s64)
@@ -31,8 +31,8 @@ body:             |
     ; CHECK: liveins: $x10
     ; CHECK-NEXT: {{  $}}
     ; CHECK-NEXT: [[COPY:%[0-9]+]]:gprnox0 = COPY $x10
-    ; CHECK-NEXT: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 [[COPY]], 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK-NEXT: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 [[COPY]], 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = COPY $x10
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s64)
@@ -48,8 +48,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: nonzero_vl
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 1
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s64)
@@ -65,8 +65,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: zero_vl
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 0, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 0, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 0
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s64)
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv32.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv32.mir
index 1ef1312cc17c0e..ab91b3d80bd9bc 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv32.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv32.mir
@@ -10,8 +10,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv1i1
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s32)
@@ -27,8 +27,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv2i1
-    ; CHECK: [[PseudoVMCLR_M_B2_:%[0-9]+]]:vr = PseudoVMCLR_M_B2 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B2_]]
+    ; CHECK: [[PseudoVMCLR_M_B32_:%[0-9]+]]:vr = PseudoVMCLR_M_B32 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B32_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 2 x s1>) = G_VMCLR_VL %0(s32)
@@ -44,8 +44,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv4i1
-    ; CHECK: [[PseudoVMCLR_M_B4_:%[0-9]+]]:vr = PseudoVMCLR_M_B4 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B4_]]
+    ; CHECK: [[PseudoVMCLR_M_B16_:%[0-9]+]]:vr = PseudoVMCLR_M_B16 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B16_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 4 x s1>) = G_VMCLR_VL %0(s32)
@@ -78,8 +78,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv16i1
-    ; CHECK: [[PseudoVMCLR_M_B16_:%[0-9]+]]:vr = PseudoVMCLR_M_B16 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B16_]]
+    ; CHECK: [[PseudoVMCLR_M_B4_:%[0-9]+]]:vr = PseudoVMCLR_M_B4 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B4_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 16 x s1>) = G_VMCLR_VL %0(s32)
@@ -95,8 +95,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv32i1
-    ; CHECK: [[PseudoVMCLR_M_B32_:%[0-9]+]]:vr = PseudoVMCLR_M_B32 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B32_]]
+    ; CHECK: [[PseudoVMCLR_M_B2_:%[0-9]+]]:vr = PseudoVMCLR_M_B2 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B2_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 32 x s1>) = G_VMCLR_VL %0(s32)
@@ -112,8 +112,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv64i1
-    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
+    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s32) = G_CONSTANT i32 -1
     %1:vrb(<vscale x 64 x s1>) = G_VMCLR_VL %0(s32)
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv64.mir b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv64.mir
index b7541cd4e96fb4..403a5f6a14ac97 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv64.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/rvv/vmclr-rv64.mir
@@ -10,8 +10,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv1i1
-    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
+    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 1 x s1>) = G_VMCLR_VL %0(s64)
@@ -27,8 +27,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv2i1
-    ; CHECK: [[PseudoVMCLR_M_B2_:%[0-9]+]]:vr = PseudoVMCLR_M_B2 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B2_]]
+    ; CHECK: [[PseudoVMCLR_M_B32_:%[0-9]+]]:vr = PseudoVMCLR_M_B32 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B32_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 2 x s1>) = G_VMCLR_VL %0(s64)
@@ -44,8 +44,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv4i1
-    ; CHECK: [[PseudoVMCLR_M_B4_:%[0-9]+]]:vr = PseudoVMCLR_M_B4 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B4_]]
+    ; CHECK: [[PseudoVMCLR_M_B16_:%[0-9]+]]:vr = PseudoVMCLR_M_B16 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B16_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 4 x s1>) = G_VMCLR_VL %0(s64)
@@ -78,8 +78,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv16i1
-    ; CHECK: [[PseudoVMCLR_M_B16_:%[0-9]+]]:vr = PseudoVMCLR_M_B16 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B16_]]
+    ; CHECK: [[PseudoVMCLR_M_B4_:%[0-9]+]]:vr = PseudoVMCLR_M_B4 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B4_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 16 x s1>) = G_VMCLR_VL %0(s64)
@@ -95,8 +95,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv32i1
-    ; CHECK: [[PseudoVMCLR_M_B32_:%[0-9]+]]:vr = PseudoVMCLR_M_B32 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B32_]]
+    ; CHECK: [[PseudoVMCLR_M_B2_:%[0-9]+]]:vr = PseudoVMCLR_M_B2 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B2_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 32 x s1>) = G_VMCLR_VL %0(s64)
@@ -112,8 +112,8 @@ tracksRegLiveness: true
 body:             |
   bb.1:
     ; CHECK-LABEL: name: splat_zero_nxv64i1
-    ; CHECK: [[PseudoVMCLR_M_B64_:%[0-9]+]]:vr = PseudoVMCLR_M_B64 -1, 0 /* e8 */
-    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B64_]]
+    ; CHECK: [[PseudoVMCLR_M_B1_:%[0-9]+]]:vr = PseudoVMCLR_M_B1 -1, 0 /* e8 */
+    ; CHECK-NEXT: $v0 = COPY [[PseudoVMCLR_M_B1_]]
     ; CHECK-NEXT: PseudoRET implicit $v0
     %0:gprb(s64) = G_CONSTANT i64 -1
     %1:vrb(<vscale x 64 x s1>) = G_VMCLR_VL %0(s64)
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
index 55cefbbea81b20..6f97abcd0fadec 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.mir
@@ -512,9 +512,9 @@ body:             |
   ; CHECK-NEXT:   dead $x0 = PseudoVSETVLIX0 killed $x0, 23 /* e32, mf2, tu, mu */, implicit-def $vl, implicit-def $vtype, implicit $vl
   ; CHECK-NEXT:   [[PseudoVLE32_V_MF2_MASK:%[0-9]+]]:vrnov0 = PseudoVLE32_V_MF2_MASK [[PseudoVMV_V_I_MF2_]], [[COPY]], $v0, -1, 5 /* e32 */, 0 /* tu, mu */, implicit $vl, implicit $vtype
   ; CHECK-NEXT:   dead $x0 = PseudoVSETVLIX0 killed $x0, 197 /* e8, mf8, ta, ma */, implicit-def $vl, implicit-def $vtype, implicit $vl
-  ; CHECK-NEXT:   [[PseudoVCPOP_M_B1_:%[0-9]+]]:gpr = PseudoVCPOP_M_B1 [[PseudoVMSEQ_VI_MF2_]], -1, 0 /* e8 */, implicit $vl, implicit $vtype
+  ; CHECK-NEXT:   [[PseudoVCPOP_M_B64_:%[0-9]+]]:gpr = PseudoVCPOP_M_B64 [[PseudoVMSEQ_VI_MF2_]], -1, 0 /* e8 */, implicit $vl, implicit $vtype
   ; CHECK-NEXT:   [[DEF:%[0-9]+]]:gpr = IMPLICIT_DEF
-  ; CHECK-NEXT:   BEQ [[PseudoVCPOP_M_B1_]], $x0, %bb.3
+  ; CHECK-NEXT:   BEQ [[PseudoVCPOP_M_B64_]], $x0, %bb.3
   ; CHECK-NEXT:   PseudoBR %bb.2
   ; CHECK-NEXT: {{  $}}
   ; CHECK-NEXT: bb.2:
@@ -543,7 +543,7 @@ body:             |
     %5:vmv0 = PseudoVMSEQ_VI_MF2 killed %3, 0, -1, 5
     $v0 = COPY %5
     %6:vrnov0 = PseudoVLE32_V_MF2_MASK %4, killed %0, $v0, -1, 5, 0
-    %7:gpr = PseudoVCPOP_M_B1 %5, -1, 0
+    %7:gpr = PseudoVCPOP_M_B64 %5, -1, 0
     %8:gpr = COPY $x0
     BEQ killed %7, %8, %bb.3
     PseudoBR %bb.2
@@ -906,8 +906,8 @@ body:             |
   ; CHECK-NEXT:   dead $x0 = PseudoVSETVLIX0 killed $x0, 216 /* e64, m1, ta, ma */, implicit-def $vl, implicit-def $vtype, implicit $vl
   ; CHECK-NEXT:   [[PseudoVADD_VX_M1_:%[0-9]+]]:vr = PseudoVADD_VX_M1 undef $noreg, [[PseudoVID_V_M1_]], [[ADD]], -1, 6 /* e64 */, 0 /* tu, mu */, implicit $vl, implicit $vtype
   ; CHECK-NEXT:   [[PseudoVMSLTU_VX_M1_:%[0-9]+]]:vr = PseudoVMSLTU_VX_M1 [[PseudoVADD_VX_M1_]], [[COPY1]], -1, 6 /* e64 */, implicit $vl, implicit $vtype
-  ; CHECK-NEXT:   [[PseudoVCPOP_M_B1_:%[0-9]+]]:gpr = PseudoVCPOP_M_B1 [[PseudoVMSLTU_VX_M1_]], -1, 0 /* e8 */, implicit $vl, implicit $vtype
-  ; CHECK-NEXT:   BEQ [[PseudoVCPOP_M_B1_]], $x0, %bb.3
+  ; CHECK-NEXT:   [[PseudoVCPOP_M_B64_:%[0-9]+]]:gpr = PseudoVCPOP_M_B64 [[PseudoVMSLTU_VX_M1_]], -1, 0 /* e8 */, implicit $vl, implicit $vtype
+  ; CHECK-NEXT:   BEQ [[PseudoVCPOP_M_B64_]], $x0, %bb.3
   ; CHECK-NEXT:   PseudoBR %bb.2
   ; CHECK-NEXT: {{  $}}
   ; CHECK-NEXT: bb.2:
@@ -952,7 +952,7 @@ body:             |
     %61:gpr = ADD %12, %26
     %27:vr = PseudoVADD_VX_M1 undef $noreg, %10, killed %61, -1, 6, 0
     %62:vr = PseudoVMSLTU_VX_M1 %27, %11, -1, 6
-    %63:gpr = PseudoVCPOP_M_B1 %62, -1, 0
+    %63:gpr = PseudoVCPOP_M_B64 %62, -1, 0
     %64:gpr = COPY $x0
     BEQ killed %63, %64, %bb.3
     PseudoBR %bb.2

@rofirrim (Collaborator) left a comment

LGTM. Thanks @topperc .

@topperc topperc merged commit 88c18da into llvm:main Dec 13, 2024
9 of 11 checks passed
@topperc topperc deleted the pr/bool-suffix branch December 13, 2024 00:17
@llvm-ci (Collaborator) commented Dec 13, 2024

LLVM Buildbot has detected a new failure on builder llvm-clang-aarch64-darwin running on doug-worker-4 while building llvm at step 6 "test-build-unified-tree-check-all".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/190/builds/11271

Here is the relevant piece of the build log for reference:
Step 6 (test-build-unified-tree-check-all) failure: test (failure)
******************** TEST 'lld :: MachO/arm64-thunk-starvation.s' FAILED ********************
Exit Code: 1

Command Output (stderr):
--
RUN: at line 2: /Users/buildbot/buildbot-root/aarch64-darwin/build/bin/llvm-mc -filetype=obj -triple=arm64-apple-darwin /Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/lld/test/MachO/arm64-thunk-starvation.s -o /Users/buildbot/buildbot-root/aarch64-darwin/build/tools/lld/test/MachO/Output/arm64-thunk-starvation.s.tmp.o
+ /Users/buildbot/buildbot-root/aarch64-darwin/build/bin/llvm-mc -filetype=obj -triple=arm64-apple-darwin /Users/buildbot/buildbot-root/aarch64-darwin/llvm-project/lld/test/MachO/arm64-thunk-starvation.s -o /Users/buildbot/buildbot-root/aarch64-darwin/build/tools/lld/test/MachO/Output/arm64-thunk-starvation.s.tmp.o
error: No space left on device

--

********************

