[SelectionDAG] Fix bug related to demanded bits/elts for BITCAST #145902
base: users/bjope/insertundef_4
Conversation
When we have a BITCAST whose source type is a vector with smaller elements than the destination type, we need to demand all the source elements that make up the demanded elements of the result when making recursive calls to SimplifyDemandedBits, SimplifyDemandedVectorElts and SimplifyMultipleUseDemandedBits. The problem is that those simplifications are allowed to turn non-demanded elements of a vector into POISON, so unless we demand all source elements that make up the result, there is a risk that the result becomes more poisonous (even for demanded elements) after the simplification.

The patch fixes bugs in SimplifyMultipleUseDemandedBits and SimplifyDemandedBits for situations where this problem was not taken into account. Now we make sure that we also demand vector elements that "must not be turned into poison", even if those elements correspond to bits that do not need to be defined according to the DemandedBits mask.

Fixes #138513
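As a rough, standalone illustration (a sketch only, not part of the patch; the helper name is made up), consider a little-endian bitcast from <4 x i16> to <2 x i32>, where each destination element is composed of two source elements:

#include "llvm/ADT/APInt.h"
using namespace llvm;

// Destination element j is built from source elements j*Scale .. j*Scale+Scale-1,
// so every demanded destination element must keep all of its Scale source
// elements demanded, even when only some of their bits are needed, because
// recursive simplifications may turn any non-demanded source element into poison.
APInt demandedSrcEltsForBitcast(const APInt &DemandedElts, unsigned NumSrcElts) {
  // Splat each demanded destination-element bit over the source elements that
  // compose it; this is the mask the patch now uses.
  return APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
}

// Example: DemandedElts = 0b01 (only destination element 0 of the <2 x i32> is
// demanded) and NumSrcElts = 4 gives 0b0011, i.e. source elements 0 and 1 of
// the <4 x i16> stay demanded.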
@llvm/pr-subscribers-backend-x86 @llvm/pr-subscribers-backend-arm @llvm/pr-subscribers-llvm-selectiondag

Author: Björn Pettersson (bjope)

Patch is 1.49 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/145902.diff

114 Files Affected:
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index fd3a70d763e8b..524c97ab3eab8 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -720,18 +720,17 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBits(
unsigned Scale = NumDstEltBits / NumSrcEltBits;
unsigned NumSrcElts = SrcVT.getVectorNumElements();
APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
- APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
for (unsigned i = 0; i != Scale; ++i) {
unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
unsigned BitOffset = EltOffset * NumSrcEltBits;
APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
- if (!Sub.isZero()) {
+ if (!Sub.isZero())
DemandedSrcBits |= Sub;
- for (unsigned j = 0; j != NumElts; ++j)
- if (DemandedElts[j])
- DemandedSrcElts.setBit((j * Scale) + i);
- }
}
+ // Need to demand all smaller source elements that map to a demanded
+ // destination element, since recursive calls below may turn non-demanded
+ // elements into poison.
+ APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
if (SDValue V = SimplifyMultipleUseDemandedBits(
Src, DemandedSrcBits, DemandedSrcElts, DAG, Depth + 1))
@@ -2764,18 +2763,17 @@ bool TargetLowering::SimplifyDemandedBits(
unsigned Scale = BitWidth / NumSrcEltBits;
unsigned NumSrcElts = SrcVT.getVectorNumElements();
APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
- APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
for (unsigned i = 0; i != Scale; ++i) {
unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
unsigned BitOffset = EltOffset * NumSrcEltBits;
APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
- if (!Sub.isZero()) {
+ if (!Sub.isZero())
DemandedSrcBits |= Sub;
- for (unsigned j = 0; j != NumElts; ++j)
- if (DemandedElts[j])
- DemandedSrcElts.setBit((j * Scale) + i);
- }
}
+ // Need to demand all smaller source elements that map to a demanded
+ // destination element, since recursive calls below may turn non-demanded
+ // elements into poison.
+ APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
APInt KnownSrcUndef, KnownSrcZero;
if (SimplifyDemandedVectorElts(Src, DemandedSrcElts, KnownSrcUndef,
diff --git a/llvm/test/CodeGen/AArch64/reduce-or.ll b/llvm/test/CodeGen/AArch64/reduce-or.ll
index aac31ce8b71b7..f5291f5debb40 100644
--- a/llvm/test/CodeGen/AArch64/reduce-or.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-or.ll
@@ -218,13 +218,12 @@ define i8 @test_redor_v3i8(<3 x i8> %a) {
; CHECK-NEXT: movi v0.2d, #0000000000000000
; CHECK-NEXT: mov v0.h[0], w0
; CHECK-NEXT: mov v0.h[1], w1
-; CHECK-NEXT: fmov x8, d0
; CHECK-NEXT: mov v0.h[2], w2
-; CHECK-NEXT: fmov x9, d0
-; CHECK-NEXT: lsr x10, x9, #32
-; CHECK-NEXT: lsr x9, x9, #16
-; CHECK-NEXT: orr w8, w8, w10
-; CHECK-NEXT: orr w0, w8, w9
+; CHECK-NEXT: fmov x8, d0
+; CHECK-NEXT: lsr x9, x8, #32
+; CHECK-NEXT: lsr x10, x8, #16
+; CHECK-NEXT: orr w8, w8, w9
+; CHECK-NEXT: orr w0, w8, w10
; CHECK-NEXT: ret
;
; GISEL-LABEL: test_redor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/reduce-xor.ll b/llvm/test/CodeGen/AArch64/reduce-xor.ll
index 9a00172f94763..df8485b91468f 100644
--- a/llvm/test/CodeGen/AArch64/reduce-xor.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-xor.ll
@@ -207,13 +207,12 @@ define i8 @test_redxor_v3i8(<3 x i8> %a) {
; CHECK-NEXT: movi v0.2d, #0000000000000000
; CHECK-NEXT: mov v0.h[0], w0
; CHECK-NEXT: mov v0.h[1], w1
-; CHECK-NEXT: fmov x8, d0
; CHECK-NEXT: mov v0.h[2], w2
-; CHECK-NEXT: fmov x9, d0
-; CHECK-NEXT: lsr x10, x9, #32
-; CHECK-NEXT: lsr x9, x9, #16
-; CHECK-NEXT: eor w8, w8, w10
-; CHECK-NEXT: eor w0, w8, w9
+; CHECK-NEXT: fmov x8, d0
+; CHECK-NEXT: lsr x9, x8, #32
+; CHECK-NEXT: lsr x10, x8, #16
+; CHECK-NEXT: eor w8, w8, w9
+; CHECK-NEXT: eor w0, w8, w10
; CHECK-NEXT: ret
;
; GISEL-LABEL: test_redxor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
index d2f16721e6e47..ac54dd41b0962 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
@@ -101,13 +101,12 @@ define i8 @test_v3i8(<3 x i8> %a) nounwind {
define i8 @test_v9i8(<9 x i8> %a) nounwind {
; CHECK-LABEL: test_v9i8:
; CHECK: // %bb.0:
-; CHECK-NEXT: movi v1.2d, #0xffffff00ffffff00
-; CHECK-NEXT: fmov x8, d0
+; CHECK-NEXT: movi v1.2d, #0xffffffffffffff00
; CHECK-NEXT: orr v1.16b, v0.16b, v1.16b
; CHECK-NEXT: ext v1.16b, v1.16b, v1.16b, #8
; CHECK-NEXT: and v0.8b, v0.8b, v1.8b
-; CHECK-NEXT: fmov x9, d0
-; CHECK-NEXT: and x8, x9, x8, lsr #32
+; CHECK-NEXT: fmov x8, d0
+; CHECK-NEXT: and x8, x8, x8, lsr #32
; CHECK-NEXT: and x8, x8, x8, lsr #16
; CHECK-NEXT: lsr x9, x8, #8
; CHECK-NEXT: and w0, w8, w9
@@ -119,12 +118,14 @@ define i8 @test_v9i8(<9 x i8> %a) nounwind {
define i32 @test_v3i32(<3 x i32> %a) nounwind {
; CHECK-LABEL: test_v3i32:
; CHECK: // %bb.0:
-; CHECK-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: mov w8, #-1 // =0xffffffff
+; CHECK-NEXT: mov v1.s[3], w8
+; CHECK-NEXT: ext v1.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: and v0.8b, v0.8b, v1.8b
; CHECK-NEXT: fmov x8, d0
-; CHECK-NEXT: lsr x8, x8, #32
-; CHECK-NEXT: and v1.8b, v0.8b, v1.8b
-; CHECK-NEXT: fmov x9, d1
-; CHECK-NEXT: and w0, w9, w8
+; CHECK-NEXT: lsr x9, x8, #32
+; CHECK-NEXT: and w0, w8, w9
; CHECK-NEXT: ret
%b = call i32 @llvm.vector.reduce.and.v3i32(<3 x i32> %a)
ret i32 %b
diff --git a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
index 745e047348626..24c1a0b728a3d 100644
--- a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
+++ b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
@@ -1904,7 +1904,7 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
; VI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc
; VI-NEXT: v_add_u32_e32 v2, vcc, 5, v0
; VI-NEXT: v_addc_u32_e32 v3, vcc, 0, v1, vcc
-; VI-NEXT: v_add_u32_e32 v4, vcc, 6, v0
+; VI-NEXT: v_add_u32_e32 v4, vcc, 4, v0
; VI-NEXT: v_addc_u32_e32 v5, vcc, 0, v1, vcc
; VI-NEXT: v_add_u32_e32 v6, vcc, 1, v0
; VI-NEXT: v_addc_u32_e32 v7, vcc, 0, v1, vcc
@@ -1912,61 +1912,66 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
; VI-NEXT: v_addc_u32_e32 v9, vcc, 0, v1, vcc
; VI-NEXT: v_add_u32_e32 v10, vcc, 3, v0
; VI-NEXT: v_addc_u32_e32 v11, vcc, 0, v1, vcc
-; VI-NEXT: flat_load_ubyte v12, v[2:3]
-; VI-NEXT: flat_load_ubyte v2, v[8:9]
-; VI-NEXT: flat_load_ubyte v3, v[10:11]
+; VI-NEXT: v_add_u32_e32 v12, vcc, 6, v0
+; VI-NEXT: v_addc_u32_e32 v13, vcc, 0, v1, vcc
+; VI-NEXT: flat_load_ubyte v2, v[2:3]
; VI-NEXT: flat_load_ubyte v4, v[4:5]
-; VI-NEXT: flat_load_ubyte v5, v[0:1]
-; VI-NEXT: flat_load_ubyte v6, v[6:7]
-; VI-NEXT: v_add_u32_e32 v0, vcc, 4, v0
-; VI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc
-; VI-NEXT: flat_load_ubyte v7, v[0:1]
+; VI-NEXT: flat_load_ubyte v5, v[6:7]
+; VI-NEXT: flat_load_ubyte v7, v[8:9]
+; VI-NEXT: flat_load_ubyte v3, v[10:11]
+; VI-NEXT: flat_load_ubyte v6, v[12:13]
+; VI-NEXT: flat_load_ubyte v0, v[0:1]
+; VI-NEXT: v_mov_b32_e32 v8, 0x3020504
; VI-NEXT: s_mov_b32 s3, 0xf000
; VI-NEXT: s_mov_b32 s2, -1
+; VI-NEXT: s_waitcnt vmcnt(6)
+; VI-NEXT: v_lshlrev_b32_e32 v9, 8, v2
; VI-NEXT: s_waitcnt vmcnt(5)
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v2, v2
+; VI-NEXT: v_or_b32_e32 v4, v9, v4
; VI-NEXT: s_waitcnt vmcnt(4)
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v3, v3
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v1, v5
+; VI-NEXT: s_waitcnt vmcnt(3)
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v2, v7
; VI-NEXT: s_waitcnt vmcnt(2)
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v0, v5
-; VI-NEXT: s_waitcnt vmcnt(1)
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v1, v6
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v6, v4
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v5, v12
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v3, v3
+; VI-NEXT: v_perm_b32 v4, v4, s0, v8
; VI-NEXT: s_waitcnt vmcnt(0)
-; VI-NEXT: v_cvt_f32_ubyte0_e32 v4, v7
-; VI-NEXT: buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v0, v0
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v6, v6
+; VI-NEXT: v_cvt_f32_ubyte1_e32 v5, v4
+; VI-NEXT: v_cvt_f32_ubyte0_e32 v4, v4
; VI-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0
+; VI-NEXT: buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
; VI-NEXT: s_endpgm
;
; GFX10-LABEL: load_v7i8_to_v7f32:
; GFX10: ; %bb.0:
; GFX10-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x24
; GFX10-NEXT: v_lshlrev_b32_e32 v0, 3, v0
-; GFX10-NEXT: v_mov_b32_e32 v8, 0
+; GFX10-NEXT: v_mov_b32_e32 v4, 0
+; GFX10-NEXT: v_mov_b32_e32 v7, 0
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: s_clause 0x5
-; GFX10-NEXT: global_load_ubyte v4, v0, s[2:3] offset:6
+; GFX10-NEXT: global_load_ubyte v5, v0, s[2:3] offset:6
; GFX10-NEXT: global_load_ubyte v1, v0, s[2:3] offset:3
; GFX10-NEXT: global_load_ubyte v2, v0, s[2:3] offset:2
-; GFX10-NEXT: global_load_ubyte v5, v0, s[2:3] offset:1
-; GFX10-NEXT: global_load_short_d16 v7, v0, s[2:3] offset:4
+; GFX10-NEXT: global_load_ubyte v6, v0, s[2:3] offset:1
+; GFX10-NEXT: global_load_short_d16 v4, v0, s[2:3] offset:4
; GFX10-NEXT: global_load_ubyte v0, v0, s[2:3]
-; GFX10-NEXT: s_waitcnt vmcnt(5)
-; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v6, v4
; GFX10-NEXT: s_waitcnt vmcnt(4)
; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v3, v1
; GFX10-NEXT: s_waitcnt vmcnt(3)
; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v2, v2
; GFX10-NEXT: s_waitcnt vmcnt(2)
-; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v1, v5
+; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v1, v6
+; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v6, v5
; GFX10-NEXT: s_waitcnt vmcnt(1)
-; GFX10-NEXT: v_cvt_f32_ubyte1_e32 v5, v7
-; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v4, v7
+; GFX10-NEXT: v_cvt_f32_ubyte1_e32 v5, v4
+; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v4, v4
; GFX10-NEXT: s_waitcnt vmcnt(0)
; GFX10-NEXT: v_cvt_f32_ubyte0_e32 v0, v0
-; GFX10-NEXT: global_store_dwordx3 v8, v[4:6], s[0:1] offset:16
-; GFX10-NEXT: global_store_dwordx4 v8, v[0:3], s[0:1]
+; GFX10-NEXT: global_store_dwordx3 v7, v[4:6], s[0:1] offset:16
+; GFX10-NEXT: global_store_dwordx4 v7, v[0:3], s[0:1]
; GFX10-NEXT: s_endpgm
;
; GFX9-LABEL: load_v7i8_to_v7f32:
@@ -1984,8 +1989,8 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
; GFX9-NEXT: s_waitcnt vmcnt(5)
; GFX9-NEXT: v_cvt_f32_ubyte0_e32 v6, v1
; GFX9-NEXT: s_waitcnt vmcnt(4)
-; GFX9-NEXT: v_cvt_f32_ubyte1_e32 v5, v2
-; GFX9-NEXT: v_cvt_f32_ubyte0_e32 v4, v2
+; GFX9-NEXT: v_cvt_f32_ubyte1_sdwa v5, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
+; GFX9-NEXT: v_cvt_f32_ubyte0_sdwa v4, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
; GFX9-NEXT: s_waitcnt vmcnt(3)
; GFX9-NEXT: v_cvt_f32_ubyte0_e32 v3, v3
; GFX9-NEXT: s_waitcnt vmcnt(2)
@@ -2001,34 +2006,33 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
; GFX11-LABEL: load_v7i8_to_v7f32:
; GFX11: ; %bb.0:
; GFX11-NEXT: s_load_b128 s[0:3], s[4:5], 0x24
-; GFX11-NEXT: v_and_b32_e32 v0, 0x3ff, v0
-; GFX11-NEXT: v_mov_b32_e32 v8, 0
+; GFX11-NEXT: v_dual_mov_b32 v7, 0 :: v_dual_and_b32 v0, 0x3ff, v0
+; GFX11-NEXT: v_mov_b32_e32 v4, 0
; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2)
; GFX11-NEXT: v_lshlrev_b32_e32 v0, 3, v0
; GFX11-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-NEXT: s_clause 0x5
-; GFX11-NEXT: global_load_u8 v4, v0, s[2:3] offset:6
+; GFX11-NEXT: global_load_u8 v5, v0, s[2:3] offset:6
; GFX11-NEXT: global_load_u8 v1, v0, s[2:3] offset:3
; GFX11-NEXT: global_load_u8 v2, v0, s[2:3] offset:2
-; GFX11-NEXT: global_load_u8 v5, v0, s[2:3] offset:1
-; GFX11-NEXT: global_load_d16_b16 v7, v0, s[2:3] offset:4
+; GFX11-NEXT: global_load_u8 v6, v0, s[2:3] offset:1
+; GFX11-NEXT: global_load_d16_b16 v4, v0, s[2:3] offset:4
; GFX11-NEXT: global_load_u8 v0, v0, s[2:3]
-; GFX11-NEXT: s_waitcnt vmcnt(5)
-; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v6, v4
; GFX11-NEXT: s_waitcnt vmcnt(4)
; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v3, v1
; GFX11-NEXT: s_waitcnt vmcnt(3)
; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v2, v2
; GFX11-NEXT: s_waitcnt vmcnt(2)
-; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v1, v5
+; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v1, v6
+; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v6, v5
; GFX11-NEXT: s_waitcnt vmcnt(1)
-; GFX11-NEXT: v_cvt_f32_ubyte1_e32 v5, v7
-; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v4, v7
+; GFX11-NEXT: v_cvt_f32_ubyte1_e32 v5, v4
+; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v4, v4
; GFX11-NEXT: s_waitcnt vmcnt(0)
; GFX11-NEXT: v_cvt_f32_ubyte0_e32 v0, v0
; GFX11-NEXT: s_clause 0x1
-; GFX11-NEXT: global_store_b96 v8, v[4:6], s[0:1] offset:16
-; GFX11-NEXT: global_store_b128 v8, v[0:3], s[0:1]
+; GFX11-NEXT: global_store_b96 v7, v[4:6], s[0:1] offset:16
+; GFX11-NEXT: global_store_b128 v7, v[0:3], s[0:1]
; GFX11-NEXT: s_endpgm
%tid = call i32 @llvm.amdgcn.workitem.id.x()
%gep = getelementptr <7 x i8>, ptr addrspace(1) %in, i32 %tid
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
index 67c2ee6403558..5aacd238b4211 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
@@ -8339,191 +8339,216 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
; GFX6-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9
; GFX6-NEXT: s_waitcnt lgkmcnt(0)
; GFX6-NEXT: s_load_dwordx2 s[4:5], s[2:3], 0x0
+; GFX6-NEXT: s_mov_b32 s7, 0
; GFX6-NEXT: s_mov_b32 s3, 0xf000
-; GFX6-NEXT: s_mov_b32 s2, -1
+; GFX6-NEXT: s_mov_b32 s9, s7
+; GFX6-NEXT: s_mov_b32 s11, s7
+; GFX6-NEXT: s_mov_b32 s13, s7
+; GFX6-NEXT: s_mov_b32 s17, s7
+; GFX6-NEXT: s_mov_b32 s19, s7
; GFX6-NEXT: s_waitcnt lgkmcnt(0)
-; GFX6-NEXT: s_lshr_b32 s42, s5, 30
-; GFX6-NEXT: s_lshr_b32 s36, s5, 28
-; GFX6-NEXT: s_lshr_b32 s38, s5, 29
-; GFX6-NEXT: s_lshr_b32 s30, s5, 26
-; GFX6-NEXT: s_lshr_b32 s34, s5, 27
-; GFX6-NEXT: s_lshr_b32 s26, s5, 24
-; GFX6-NEXT: s_lshr_b32 s28, s5, 25
-; GFX6-NEXT: s_lshr_b32 s22, s5, 22
-; GFX6-NEXT: s_lshr_b32 s24, s5, 23
-; GFX6-NEXT: s_lshr_b32 s18, s5, 20
-; GFX6-NEXT: s_lshr_b32 s20, s5, 21
-; GFX6-NEXT: s_lshr_b32 s14, s5, 18
-; GFX6-NEXT: s_lshr_b32 s16, s5, 19
-; GFX6-NEXT: s_lshr_b32 s10, s5, 16
-; GFX6-NEXT: s_lshr_b32 s12, s5, 17
-; GFX6-NEXT: s_lshr_b32 s6, s5, 14
-; GFX6-NEXT: s_lshr_b32 s8, s5, 15
-; GFX6-NEXT: s_mov_b32 s40, s5
-; GFX6-NEXT: s_ashr_i32 s7, s5, 31
-; GFX6-NEXT: s_bfe_i64 s[44:45], s[40:41], 0x10000
-; GFX6-NEXT: v_mov_b32_e32 v4, s7
-; GFX6-NEXT: s_lshr_b32 s40, s5, 12
+; GFX6-NEXT: s_lshr_b32 s6, s5, 30
+; GFX6-NEXT: s_lshr_b32 s8, s5, 28
+; GFX6-NEXT: s_lshr_b32 s10, s5, 29
+; GFX6-NEXT: s_lshr_b32 s12, s5, 26
+; GFX6-NEXT: s_lshr_b32 s16, s5, 27
+; GFX6-NEXT: s_mov_b32 s18, s5
+; GFX6-NEXT: s_bfe_i64 s[14:15], s[4:5], 0x10000
+; GFX6-NEXT: s_bfe_i64 s[44:45], s[18:19], 0x10000
+; GFX6-NEXT: s_ashr_i32 s18, s5, 31
+; GFX6-NEXT: s_bfe_i64 s[28:29], s[16:17], 0x10000
+; GFX6-NEXT: s_bfe_i64 s[36:37], s[12:13], 0x10000
+; GFX6-NEXT: s_bfe_i64 s[38:39], s[10:11], 0x10000
+; GFX6-NEXT: s_bfe_i64 s[40:41], s[8:9], 0x10000
+; GFX6-NEXT: s_bfe_i64 s[42:43], s[6:7], 0x10000
+; GFX6-NEXT: s_mov_b32 s2, -1
+; GFX6-NEXT: s_mov_b32 s31, s7
+; GFX6-NEXT: s_mov_b32 s35, s7
+; GFX6-NEXT: s_mov_b32 s25, s7
+; GFX6-NEXT: s_mov_b32 s27, s7
+; GFX6-NEXT: s_mov_b32 s21, s7
+; GFX6-NEXT: s_mov_b32 s23, s7
+; GFX6-NEXT: v_mov_b32_e32 v4, s18
; GFX6-NEXT: v_mov_b32_e32 v0, s44
; GFX6-NEXT: v_mov_b32_e32 v1, s45
-; GFX6-NEXT: s_bfe_i64 s[44:45], s[4:5], 0x10000
-; GFX6-NEXT: s_bfe_i64 s[42:43], s[42:43], 0x10000
-; GFX6-NEXT: v_mov_b32_e32 v6, s44
-; GFX6-NEXT: v_mov_b32_e32 v7, s45
-; GFX6-NEXT: s_lshr_b32 s44, s5, 13
+; GFX6-NEXT: s_mov_b32 s45, s7
+; GFX6-NEXT: v_mov_b32_e32 v6, s14
+; GFX6-NEXT: v_mov_b32_e32 v7, s15
+; GFX6-NEXT: s_mov_b32 s47, s7
; GFX6-NEXT: v_mov_b32_e32 v2, s42
; GFX6-NEXT: v_mov_b32_e32 v3, s43
-; GFX6-NEXT: s_lshr_b32 s42, s5, 10
-; GFX6-NEXT: s_bfe_i64 s[36:37], s[36:37], 0x10000
-; GFX6-NEXT: s_bfe_i64 s[38:39], s[38:39], 0x10000
-; GFX6-NEXT: v_mov_b32_e32 v8, s36
-; GFX6-NEXT: v_mov_b32_e32 v9, s37
-; GFX6-NEXT: s_lshr_b32 s36, s5, 11
+; GFX6-NEXT: s_mov_b32 s43, s7
+; GFX6-NEXT: v_mov_b32_e32 v8, s40
+; GFX6-NEXT: v_mov_b32_e32 v9, s41
+; GFX6-NEXT: s_mov_b32 s41, s7
; GFX6-NEXT: v_mov_b32_e32 v10, s38
; GFX6-NEXT: v_mov_b32_e32 v11, s39
-; GFX6-NEXT: s_lshr_b32 s38, s5, 8
-; GFX6-NEXT: s_bfe_i64 s[30:31], s[30:31], 0x10000
+; GFX6-NEXT: s_mov_b32 s39, s7
+; GFX6-NEXT: v_mov_b32_e32 v12, s36
+; GFX6-NEXT: v_mov_b32_e32 v13, s37
+; GFX6-NEXT: s_mov_b32 s15, s7
+; GFX6-NEXT: v_mov_b32_e32 v14, s28
+; GFX6-NEXT: v_mov_b32_e32 v15, s29
+; GFX6-NEXT: s_mov_b32 s37, s7
+; GFX6-NEXT: s_lshr_b32 s30, s5, 24
+; GFX6-NEXT: s_lshr_b32 s34, s5, 25
; GFX6-NEXT: s_bfe_i64 s[34:35], s[34:35], 0x10000
-; GFX6-NEXT: v_mov_b32_e32 v12, s30
-; GFX6-NEXT: v_mov_b32_e32 v13, s31
-; GFX6-NEXT: s_lshr_b32 s30, s5, 9
-; GFX6-NEXT: v_mov_b32_e32 v14, s34
-; GFX6-NEXT: v_mov_b32_e32 v15, s35
-; GFX6-NEXT: s_lshr_b32 s34, s5, 6
-; GFX6-NEXT: s_bfe_i64 s[28:29], s[28:29], 0x10000
-; GFX6-NEXT: s_bfe_i64 s[26:27], s[26:27], 0x10000
-; GFX6-NEXT: v_mov_b32_e32 v5, s7
+; GFX6-NEXT: s_bfe_i64 s[28:29], s[30:31], 0x10000
+; GFX6-NEXT: v_mov_b32_e32 v5, s18
; GFX6-NEXT: buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:496
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_mov_b32_e32 v2, s26
-; GFX6-NEXT: v_mov_b32_e32 v3, s27
-; GFX6-NEXT: s_lshr_b32 s26, s5, 7
-; GFX6-NEXT: v_mov_b32_e32 v4, s28
-; GFX6-NEXT: v_mov_b32_e32 v5, s29
-; GFX6-NEXT: s_lshr_b32 s28, s5, 4
+; GFX6-NEXT: v_mov_b32_e32 v2, s28
+; GFX6-NEXT: v_mov_b32_e32 v3, s29
+; GFX6-NEXT: s_mov_b32 s29, s7
+; GFX6-NEXT: v_mov_b32_e32 v4, s34
+; GFX6-NEXT: v_mov_b32_e32 v5, s35
+; GFX6-NEXT: s_lshr_b32 s24, s5, 22
+; GFX6-NEXT: s_lshr_b32 s26, s5, 23
+; GFX6-NEXT: s_bfe_i64 s[26:27], s[26:27], 0x10000
; GFX6-NEXT: s_bfe_i64 s[24:25], s[24:25], 0x10000
-; GFX6-NEXT: s_bfe_i64 s[22:23], s[22:23], 0x10000
; GFX6-NEXT: buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:480
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_mov_b32_e32 v8, s22
-; GFX6-NEXT: v_mov_b32_e32 v9, s23
-; GFX6-NEXT: s_lshr_b32 s22, s5, 5
-; GFX6-NEXT: v_mov_b32_e32 v10, s24
-; GFX6-NEXT: v_mov_b32_e32 v11, s25
-; GFX6-NEXT: s_lshr_b32 s24, s5, 2
+; GFX6-NEXT: v_mov_b32_e32 v8, s24
+; GFX6-NEXT: v_mov_b32_e32 v9, s25
+; GFX6-NEXT: s_mov_b32 s25, s7
+; GFX6-NEXT: v_mov_b32_e32 v10, s26
+; GFX6-NEXT: v_mov_b32_e32 v11, s27
+; GFX6-NEXT: s_mov_b32 s27, s7
+; GFX6-NEXT: s_lshr_b32 s20, s5, 20
+; GFX6-NEXT: s_lshr_b32 s22, s5, 21
+; GFX6-NEXT: s_bfe_i64 s[22:23], s[22:23], 0x10000
; GFX6-NEXT: s_bfe_i64 s[20:21], s[20:21], 0x10000
-; GFX6-NEXT: s_bfe_i64 s[18:19], s[18:19], 0x10000
; GFX6-NEXT: buffer_store_dwordx4 v[12:15], off, s[0:3], 0 offset:464
; GFX6-NEXT: s_waitcnt expcnt(0)
-; GFX6-NEXT: v_mov_b32_e32 v12, s18
-; GFX6-NEXT: v_mov_b32_e32 v13, s19
-; GFX6-NEXT: s_lshr_b32 s18, s5, 3
-; GFX6-NEXT: v_mov_b32_e32 v14, s20
-; GFX6-NEXT: v_mov_b32_e32 v15, s21
-; GFX6-NEXT: s_lshr_b32 s20, s5, 1
+; GFX...
[truncated]
// Need to demand all smaller source elements that map to a demanded
// destination element, since recursive calls below may turn non-demanded
// elements into poison.
APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts); |
If we check for poison, can we use the more refined DemandedSrcElts mask?
Checking for poison before calling Simplify* does not work as long as the Simplify* functions are allowed to turn undemanded elements into poison.
The idea would be that for SimplifyMultipleUseDemandedBits/VectorElts, when we get the new value back, we could freeze that value instead of passing along the "DoNotPoison" mask as in #145903.
That was difficult to do for the normal SimplifyDemandedVectorElts, which does the RAUW under the hood, but it might work for the MultipleUse case.
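A rough sketch of that alternative for the multiple-use path (assuming the existing SelectionDAG::getFreeze and getBitcast helpers; variable names are approximate, and NarrowDemandedSrcElts is a hypothetical name for the more refined mask):

// Sketch only, not the committed change: query with the narrower source-element
// mask, then freeze the returned value so that elements the callee was allowed
// to turn into poison cannot make the demanded parts of the result more poisonous.
if (SDValue V = SimplifyMultipleUseDemandedBits(
        Src, DemandedSrcBits, NarrowDemandedSrcElts, DAG, Depth + 1))
  return DAG.getBitcast(Op.getValueType(), DAG.getFreeze(V));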