[GlobalISel] Combine [s,z]ext of undef into 0 #117439


Merged
merged 3 commits on Dec 3, 2024

Conversation

tschuett

Alternative for #113764

It builds on a minimalistic approach: the legality check happens in the match step, followed by a blind apply. Precise patterns are used for better compile time and modularity, and the pattern check moves into the combiner, whereas unary_undef_to_zero and propagate_undef_any_op rely on custom C++ code for pattern matching.

Is there a limit on the number of patterns?

G_ANYEXT of undef -> undef
G_SEXT of undef -> 0
G_ZEXT of undef -> 0
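As a toy illustration of why folding to 0 is sound (a sketch in plain C++, not LLVM code; the function names are invented): undef may be refined to any convenient value, and choosing 0 makes both sign- and zero-extension collapse to the constant 0, while anyext of undef stays fully undefined and can simply be replaced by a wider undef.

```cpp
#include <cstdint>

// Toy model of the folds above (not LLVM code). An undef source may be
// assumed to hold any convenient value; refining it to 0 makes both
// extensions produce the constant 0, which is what the combine emits.
int64_t sext32to64(int32_t x) { return static_cast<int64_t>(x); }
uint64_t zext32to64(uint32_t x) { return static_cast<uint64_t>(x); }
```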

The combine is not a member of the post-legalizer combiner for AArch64.

Test:
llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir

@llvmbot
Member

llvmbot commented Nov 23, 2024

@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-amdgpu

Author: Thorsten Schütt (tschuett)



Full diff: https://github.com/llvm/llvm-project/pull/117439.diff

7 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h (+4)
  • (modified) llvm/include/llvm/Target/GlobalISel/Combine.td (+26-2)
  • (modified) llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp (+4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir (+52)
  • (modified) llvm/test/CodeGen/AArch64/extract-vector-elt.ll (+4-11)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-amdgpu-cvt-f32-ubyte.mir (+2-4)
  • (modified) llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll (+1-6)
diff --git a/llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h b/llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h
index 55c3b72c8e027f..6662c1055aa17a 100644
--- a/llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h
+++ b/llvm/include/llvm/CodeGen/GlobalISel/CombinerHelper.h
@@ -150,6 +150,10 @@ class CombinerHelper {
   /// is a legal integer constant type on the target.
   bool isConstantLegalOrBeforeLegalizer(const LLT Ty) const;
 
+  /// \return true if the combine is running prior to legalization, or if \p Ty
+  /// is a legal undef type on the target.
+  bool isUndefLegalOrBeforeLegalizer(const LLT Ty) const;
+
   /// MachineRegisterInfo::replaceRegWith() and inform the observer of the changes
   void replaceRegWith(MachineRegisterInfo &MRI, Register FromReg, Register ToReg) const;
 
diff --git a/llvm/include/llvm/Target/GlobalISel/Combine.td b/llvm/include/llvm/Target/GlobalISel/Combine.td
index b0c63fc7c7b806..fee695c4333d99 100644
--- a/llvm/include/llvm/Target/GlobalISel/Combine.td
+++ b/llvm/include/llvm/Target/GlobalISel/Combine.td
@@ -428,7 +428,7 @@ def unary_undef_to_zero: GICombineRule<
 // replaced with undef.
 def propagate_undef_any_op: GICombineRule<
   (defs root:$root),
-  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST, G_ANYEXT):$root,
+  (match (wip_match_opcode G_ADD, G_FPTOSI, G_FPTOUI, G_SUB, G_XOR, G_TRUNC, G_BITCAST):$root,
          [{ return Helper.matchAnyExplicitUseIsUndef(*${root}); }]),
   (apply [{ Helper.replaceInstWithUndef(*${root}); }])>;
 
@@ -1857,6 +1857,27 @@ class integer_of_opcode<Instruction castOpcode> : GICombineRule <
 
 def integer_of_truncate : integer_of_opcode<G_TRUNC>;
 
+def anyext_undef: GICombineRule<
+   (defs root:$root),
+   (match (G_IMPLICIT_DEF $undef),
+          (G_ANYEXT $root, $undef):$Aext,
+   [{ return Helper.isUndefLegalOrBeforeLegalizer(MRI.getType(${Aext}->getOperand(0).getReg())); }]),
+   (apply [{ Helper.replaceInstWithUndef(*${Aext}); }])>;
+
+def zext_undef: GICombineRule<
+   (defs root:$root),
+   (match (G_IMPLICIT_DEF $undef),
+          (G_ZEXT $root, $undef):$Zext,
+   [{ return Helper.isConstantLegalOrBeforeLegalizer(MRI.getType(${Zext}->getOperand(0).getReg())); }]),
+   (apply [{ Helper.replaceInstWithConstant(*${Zext}, 0); }])>;
+
+def sext_undef: GICombineRule<
+   (defs root:$root),
+   (match (G_IMPLICIT_DEF $undef),
+          (G_SEXT $root, $undef):$Sext,
+   [{ return Helper.isConstantLegalOrBeforeLegalizer(MRI.getType(${Sext}->getOperand(0).getReg())); }]),
+   (apply [{ Helper.replaceInstWithConstant(*${Sext}, 0); }])>;
+
 def cast_of_cast_combines: GICombineGroup<[
   truncate_of_zext,
   truncate_of_sext,
@@ -1882,7 +1903,10 @@ def cast_combines: GICombineGroup<[
   narrow_binop_and,
   narrow_binop_or,
   narrow_binop_xor,
-  integer_of_truncate
+  integer_of_truncate,
+  anyext_undef,
+  sext_undef,
+  zext_undef
 ]>;
 
 def canonicalize_icmp : GICombineRule<
diff --git a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
index d95fc8cfbcf558..29074103115f59 100644
--- a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
@@ -171,6 +171,10 @@ bool CombinerHelper::isConstantLegalOrBeforeLegalizer(const LLT Ty) const {
          isLegal({TargetOpcode::G_CONSTANT, {EltTy}});
 }
 
+bool CombinerHelper::isUndefLegalOrBeforeLegalizer(const LLT Ty) const {
+  return isPreLegalize() || isLegal({TargetOpcode::G_IMPLICIT_DEF, {Ty}});
+}
+
 void CombinerHelper::replaceRegWith(MachineRegisterInfo &MRI, Register FromReg,
                                     Register ToReg) const {
   Observer.changingAllUsesOfReg(MRI, FromReg);
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir b/llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir
index b045deebc56e03..25161652dafac4 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-cast.mir
@@ -217,3 +217,55 @@ body:             |
     %large:_(<2 x s64>) = G_ANYEXT %bv(<2 x s32>)
     $q0 = COPY %large(<2 x s64>)
     $d0 = COPY %bv(<2 x s32>)
+...
+---
+name:            test_combine_anyext_undef
+legalized: true
+body:             |
+  bb.1:
+    ; CHECK-PRE-LABEL: name: test_combine_anyext_undef
+    ; CHECK-PRE: %aext:_(s64) = G_IMPLICIT_DEF
+    ; CHECK-PRE-NEXT: $x0 = COPY %aext(s64)
+    ;
+    ; CHECK-POST-LABEL: name: test_combine_anyext_undef
+    ; CHECK-POST: %undef:_(s32) = G_IMPLICIT_DEF
+    ; CHECK-POST-NEXT: %aext:_(s64) = G_ANYEXT %undef(s32)
+    ; CHECK-POST-NEXT: $x0 = COPY %aext(s64)
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %aext:_(s64) = G_ANYEXT %undef(s32)
+    $x0 = COPY %aext(s64)
+...
+---
+name:            test_combine_sext_undef
+legalized: true
+body:             |
+  bb.1:
+    ; CHECK-PRE-LABEL: name: test_combine_sext_undef
+    ; CHECK-PRE: %sext:_(s64) = G_CONSTANT i64 0
+    ; CHECK-PRE-NEXT: $x0 = COPY %sext(s64)
+    ;
+    ; CHECK-POST-LABEL: name: test_combine_sext_undef
+    ; CHECK-POST: %undef:_(s32) = G_IMPLICIT_DEF
+    ; CHECK-POST-NEXT: %sext:_(s64) = G_SEXT %undef(s32)
+    ; CHECK-POST-NEXT: $x0 = COPY %sext(s64)
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %sext:_(s64) = G_SEXT %undef(s32)
+    $x0 = COPY %sext(s64)
+...
+---
+name:            test_combine_zext_undef
+legalized: true
+body:             |
+  bb.1:
+    ; CHECK-PRE-LABEL: name: test_combine_zext_undef
+    ; CHECK-PRE: %zext:_(s64) = G_CONSTANT i64 0
+    ; CHECK-PRE-NEXT: $x0 = COPY %zext(s64)
+    ;
+    ; CHECK-POST-LABEL: name: test_combine_zext_undef
+    ; CHECK-POST: %undef:_(s32) = G_IMPLICIT_DEF
+    ; CHECK-POST-NEXT: %zext:_(s64) = G_ZEXT %undef(s32)
+    ; CHECK-POST-NEXT: $x0 = COPY %zext(s64)
+    %undef:_(s32) = G_IMPLICIT_DEF
+    %zext:_(s64) = G_ZEXT %undef(s32)
+    $x0 = COPY %zext(s64)
+...
diff --git a/llvm/test/CodeGen/AArch64/extract-vector-elt.ll b/llvm/test/CodeGen/AArch64/extract-vector-elt.ll
index 5e5fdd6d317057..e89e1516fb1f54 100644
--- a/llvm/test/CodeGen/AArch64/extract-vector-elt.ll
+++ b/llvm/test/CodeGen/AArch64/extract-vector-elt.ll
@@ -8,17 +8,10 @@
 ; CHECK-GI-NEXT:  warning: Instruction selection used fallback path for extract_v4i32_vector_extract_const
 
 define i64 @extract_v2i64_undef_index(<2 x i64> %a, i32 %c) {
-; CHECK-SD-LABEL: extract_v2i64_undef_index:
-; CHECK-SD:       // %bb.0: // %entry
-; CHECK-SD-NEXT:    fmov x0, d0
-; CHECK-SD-NEXT:    ret
-;
-; CHECK-GI-LABEL: extract_v2i64_undef_index:
-; CHECK-GI:       // %bb.0: // %entry
-; CHECK-GI-NEXT:    str q0, [sp, #-16]!
-; CHECK-GI-NEXT:    .cfi_def_cfa_offset 16
-; CHECK-GI-NEXT:    ldr x0, [sp], #16
-; CHECK-GI-NEXT:    ret
+; CHECK-LABEL: extract_v2i64_undef_index:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    fmov x0, d0
+; CHECK-NEXT:    ret
 entry:
   %d = extractelement <2 x i64> %a, i32 undef
   ret i64 %d
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-amdgpu-cvt-f32-ubyte.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-amdgpu-cvt-f32-ubyte.mir
index 7893bfa1d38f08..9b39afd32ac378 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-amdgpu-cvt-f32-ubyte.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/combine-amdgpu-cvt-f32-ubyte.mir
@@ -261,8 +261,7 @@ body:             |
     ; CHECK-LABEL: name: cvt_f32_ubyte0_zext_lshr_16
     ; CHECK: liveins: $vgpr0
     ; CHECK-NEXT: {{  $}}
-    ; CHECK-NEXT: %shift:_(s16) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: %zext:_(s32) = G_ZEXT %shift(s16)
+    ; CHECK-NEXT: %zext:_(s32) = G_CONSTANT i32 0
     ; CHECK-NEXT: %result:_(s32) = G_AMDGPU_CVT_F32_UBYTE0 %zext
     ; CHECK-NEXT: $vgpr0 = COPY %result(s32)
     %arg:_(s32) = COPY $vgpr0
@@ -284,8 +283,7 @@ body:             |
     ; CHECK-LABEL: name: cvt_f32_ubyte0_zext_lshr_24
     ; CHECK: liveins: $vgpr0
     ; CHECK-NEXT: {{  $}}
-    ; CHECK-NEXT: %shift:_(s16) = G_IMPLICIT_DEF
-    ; CHECK-NEXT: %zext:_(s32) = G_ZEXT %shift(s16)
+    ; CHECK-NEXT: %zext:_(s32) = G_CONSTANT i32 0
     ; CHECK-NEXT: %result:_(s32) = G_AMDGPU_CVT_F32_UBYTE0 %zext
     ; CHECK-NEXT: $vgpr0 = COPY %result(s32)
     %arg:_(s32) = COPY $vgpr0
diff --git a/llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll b/llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll
index a1a466fb04440d..384a2c63122b85 100644
--- a/llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll
+++ b/llvm/test/CodeGen/AMDGPU/shrink-add-sub-constant.ll
@@ -4074,14 +4074,12 @@ define amdgpu_kernel void @v_test_v2i16_x_add_undef_neg32(ptr addrspace(1) %out,
 ; VI-GISEL-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-GISEL-NEXT:    flat_load_dword v3, v[0:1]
 ; VI-GISEL-NEXT:    v_mov_b32_e32 v0, s0
+; VI-GISEL-NEXT:    v_mov_b32_e32 v1, s1
 ; VI-GISEL-NEXT:    v_add_u32_e32 v0, vcc, v0, v2
 ; VI-GISEL-NEXT:    v_not_b32_e32 v2, 31
-; VI-GISEL-NEXT:    v_mov_b32_e32 v1, s1
-; VI-GISEL-NEXT:    s_and_b32 s0, 0xffff, s0
 ; VI-GISEL-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-GISEL-NEXT:    s_waitcnt vmcnt(0)
 ; VI-GISEL-NEXT:    v_add_u16_sdwa v2, v3, v2 dst_sel:WORD_1 dst_unused:UNUSED_PAD src0_sel:WORD_1 src1_sel:DWORD
-; VI-GISEL-NEXT:    v_or_b32_e32 v2, s0, v2
 ; VI-GISEL-NEXT:    flat_store_dword v[0:1], v2
 ; VI-GISEL-NEXT:    s_endpgm
 ;
@@ -4191,15 +4189,12 @@ define amdgpu_kernel void @v_test_v2i16_x_add_neg32_undef(ptr addrspace(1) %out,
 ; VI-GISEL-NEXT:    v_add_u32_e32 v0, vcc, v0, v2
 ; VI-GISEL-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-GISEL-NEXT:    flat_load_dword v3, v[0:1]
-; VI-GISEL-NEXT:    s_and_b32 s2, 0xffff, s0
 ; VI-GISEL-NEXT:    v_mov_b32_e32 v0, s0
 ; VI-GISEL-NEXT:    v_mov_b32_e32 v1, s1
 ; VI-GISEL-NEXT:    v_add_u32_e32 v0, vcc, v0, v2
-; VI-GISEL-NEXT:    s_lshl_b32 s0, s2, 16
 ; VI-GISEL-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-GISEL-NEXT:    s_waitcnt vmcnt(0)
 ; VI-GISEL-NEXT:    v_add_u16_e32 v2, 0xffe0, v3
-; VI-GISEL-NEXT:    v_or_b32_e32 v2, s0, v2
 ; VI-GISEL-NEXT:    flat_store_dword v[0:1], v2
 ; VI-GISEL-NEXT:    s_endpgm
 ;

@llvmbot
Member

llvmbot commented Nov 23, 2024

@llvm/pr-subscribers-llvm-globalisel

Author: Thorsten Schütt (tschuett)

Contributor

@aemerson aemerson left a comment


Not for this PR, but the ergonomics of our legality-checking story are horrendous. This approach of having to constantly check for every single thing we create can't scale in the long term.

Comment on lines 1860 to 1865
def anyext_undef: GICombineRule<
   (defs root:$root),
   (match (G_IMPLICIT_DEF $undef),
          (G_ANYEXT $root, $undef):$Aext,
   [{ return Helper.isUndefLegalOrBeforeLegalizer(MRI.getType(${Aext}->getOperand(0).getReg())); }]),
   (apply [{ Helper.replaceInstWithUndef(*${Aext}); }])>;
Contributor


I don't think this is necessary, it was fine where it was in propagate_undef_any_op.

There are two possible situations w.r.t legality here:

  1. We're before the legalizer, in which case nothing matters and we can run the combine.
  2. We're post-legalizer. However, for undef, since we already agreed that having any legal register definition must imply that G_IMPLICIT_DEF is also legal, we don't need to check for legality here anyway, since G_ANYEXT defines a vreg with the same type as the new undef instruction.
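The check under discussion can be sketched in isolation. Below is a hypothetical toy model in plain C++ (the ToyCombinerHelper type and its string-based action table are made up for illustration; the real helper queries the legalizer's rule table): before legalization everything is allowed, after it G_IMPLICIT_DEF must itself be declared legal for the result type.

```cpp
#include <set>
#include <string>
#include <utility>

// Hypothetical stand-in for a target's legality table: a set of
// (opcode, type) pairs the target has declared legal.
using ToyAction = std::pair<std::string, std::string>;

struct ToyCombinerHelper {
  bool PreLegalize = false;       // running before the legalizer?
  std::set<ToyAction> LegalActions;

  bool isLegal(const ToyAction &A) const { return LegalActions.count(A) != 0; }

  // Mirrors the shape of isUndefLegalOrBeforeLegalizer from the diff:
  // anything goes pre-legalization; afterwards G_IMPLICIT_DEF must be
  // legal for the given type.
  bool isUndefLegalOrBeforeLegalizer(const std::string &Ty) const {
    return PreLegalize || isLegal({"G_IMPLICIT_DEF", Ty});
  }
};
```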

Author


The law hasn't been written down yet, and the legalizers disagree with that statement. For the moment, the legality check is safer.

It would be really confusing if match returns true without any documentation or hints.

Contributor


I'll make a doc update soon if it's absolutely necessary for you to do the right thing.

Author


For AArch64, scalable vectors are legal in some positions, but G_IMPLICIT_DEF is not legal for scalable vectors.

Contributor


That's understandable since scalable vectors are not implemented at all.

Contributor


Doc update PR: #117609

It turns out we had something almost describing what we wanted anyway.

Author


That's understandable since scalable vectors are not implemented at all.

That is not true. GlobalISel tests with SVE run every time you invoke ninja check-llvm-codegen-aarch64.

Contributor


Sure ok, we have a tiny amount of support. We still have 99% to go.

Contributor

@aemerson aemerson left a comment


Need to revert the impdef changes.

@tschuett
Author

I need some help to understand that request. When did we stop checking for legality before building?

@aemerson
Contributor

I need some help to understand that request. When did we stop checking for legality before building?

The better question is when we started checking for legality of undef. Do you have a technical rebuttal to the specific case of undef? If not, there's no point continuing this discussion.

@tschuett
Author

There is prior art. I am not going to do git blame in public:

if (!isLegalOrBeforeLegalizer(

@tschuett
Author

The market share of unprotected Builder.buildUndef in CombinerHelper.cpp is really high. I am starting to see the point.

Comment on lines +1860 to +1864
def anyext_undef: GICombineRule<
   (defs root:$root),
   (match (G_IMPLICIT_DEF $undef),
          (G_ANYEXT $root, $undef):$Aext),
   (apply [{ Helper.replaceInstWithUndef(*${Aext}); }])>;
Contributor


Why is this change still here at all? It was fine where it was in propagate_undef_any_op.

Author


Helper.matchAnyExplicitUseIsUndef checks in C++ whether there is an undef on the anyext. It will fail 99.999999999999% of the time. Now it is a correct pattern, and the combiner takes care of the matching.

I asked in the summary: Is there a limit on the number of patterns?
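The compile-time argument can be illustrated with a toy matcher (hypothetical plain C++, not the tablegen-generated code; all names are invented): with a wip_match_opcode rule, the C++ predicate fires on every G_ANYEXT and almost always fails, while a precise pattern encodes the G_IMPLICIT_DEF requirement up front, so non-undef sources never reach a predicate at all.

```cpp
#include <string>

// Toy instruction: an opcode plus the opcode defining its source operand.
struct ToyInst {
  std::string Opcode;
  std::string SrcDefOpcode;
};

// wip_match_opcode style: match on opcode alone, then filter in C++.
// The predicate runs for every G_ANYEXT, whether or not its source is undef.
bool matchWithCxxFilter(const ToyInst &MI, int &PredicateCalls) {
  if (MI.Opcode != "G_ANYEXT")
    return false;
  ++PredicateCalls; // the C++ check runs even when the source is not undef
  return MI.SrcDefOpcode == "G_IMPLICIT_DEF";
}

// Precise-pattern style: the pattern itself requires a G_IMPLICIT_DEF
// source, so no C++ predicate is needed.
bool matchWithPrecisePattern(const ToyInst &MI) {
  return MI.Opcode == "G_ANYEXT" && MI.SrcDefOpcode == "G_IMPLICIT_DEF";
}
```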

Contributor


You can leave it where it is, and we can have another PR to convert propagate_undef_any_op into the new match syntax for all opcodes, not just undef. This PR is an optimization which now mixes in a refactoring. I'm going to approve this PR on the assumption that you leave the undef combine where it is until we convert them en masse.


@tschuett
Author

tschuett commented Dec 1, 2024

The title of this PR is "Combine [a,s,z]ext of undef into 0 or undef". The task was to combine the three ext ops with undef and integrate them with the existing cast_combines.

@aemerson
Contributor

aemerson commented Dec 1, 2024

The title of this PR is "Combine [a,s,z]ext of undef into 0 or undef". The task was to combine the three ext ops with undef and integrate them with the existing cast_combines.

Then change the title to reflect what it's actually doing, which is just sext and zext.

@tschuett tschuett changed the title [GlobalISel] Combine [a,s,z]ext of undef into 0 or undef [GlobalISel] Combine [s,z]ext of undef into 0 or undef Dec 1, 2024
@tschuett tschuett changed the title [GlobalISel] Combine [s,z]ext of undef into 0 or undef [GlobalISel] Combine [s,z]ext of undef into 0 Dec 1, 2024
@tschuett tschuett merged commit 4516263 into llvm:main Dec 3, 2024
8 checks passed
@tschuett tschuett deleted the gisel-ext-of-undef-to-zero branch December 3, 2024 06:14
@aemerson
Contributor

aemerson commented Dec 3, 2024

I'm going to approve this PR on the assumption that you leave the undef combine where it is until we convert them en masse.

@tschuett
Author

tschuett commented Dec 4, 2024

I am never going to touch propagate_undef_any_op. Go ahead.

It took a lot of energy and time to get the anyext(undef) combine in shape. I didn't want to lose it.

@aemerson
Contributor

aemerson commented Dec 4, 2024

I'm going to try to ascribe the best intentions I can to you here and give you an opportunity to do the right thing. I highly recommend you follow my advice.

@aemerson
Contributor

aemerson commented Dec 4, 2024

I will give you until Friday.

@tschuett
Author

tschuett commented Dec 4, 2024

Your combiner contributions are exciting.

tschuett added a commit that referenced this pull request Dec 5, 2024
3 participants