AMDGPU/GlobalISel: Run redundant_and combine in RegBankCombiner #112353

Merged: 1 commit, Oct 16, 2024
12 changes: 6 additions & 6 deletions llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
@@ -178,7 +178,7 @@ void CombinerHelper::replaceRegWith(MachineRegisterInfo &MRI, Register FromReg,
if (MRI.constrainRegAttrs(ToReg, FromReg))
MRI.replaceRegWith(FromReg, ToReg);
else
Builder.buildCopy(ToReg, FromReg);
Collaborator (Author):
Seems too broken for precommit. To even get to buildCopy, MI must not be deleted; then buildCopy(ToReg, FromReg) results in:

*** Bad machine code: Reading virtual register without a def ***

I could test it with just -run-pass=amdgpu-regbank-combiner and without -verify-machineinstrs, but that leaves other tests broken since I have to add redundant_and.

When I try to run it in another combiner, there is an infinite loop on a regclass-to-regbank copy:

Try combining %4:sgpr(s32) = COPY %3:sreg_32(s32)
10: GIM_SwitchOpcode(MIs[0], [20, 256), Default=6643, JumpTable...) // Got=20
955: Begin try-block
962: GIM_CheckSimplePredicate(Predicate=14)
965: GIR_DoneWithCustomAction(FnID=6)
Changing: G_STORE %4:sgpr(s32), %0:sgpr(p1) :: (store (s32), addrspace 1)
Creating: COPY
Creating: COPY
Changed: G_STORE %4:sgpr(s32), %0:sgpr(p1) :: (store (s32), addrspace 1)
Erasing: %4:sgpr(s32) = COPY %3:sreg_32(s32)
Created: %4:sgpr(s32) = COPY %3:sreg_32(s32)

Any suggestions?

Contributor:
Should add a MIR test with the funny pre-assigned-class situation.

Builder.buildCopy(FromReg, ToReg);
Contributor:
Looks backwards? Can you precommit this fix with a MIR test?

Collaborator (Author), @petar-avramovic, Oct 15, 2024:

With Builder.buildCopy(ToReg, FromReg):

bb.0:
  liveins: $sgpr0, $vgpr0_vgpr1
  %0:sgpr(p1) = COPY $vgpr0_vgpr1
  %1:sgpr(s32) = COPY $sgpr0
  %2:sgpr(s32) = G_CONSTANT i32 1
  %3:sreg_32(s32) = G_ICMP intpred(ne), %1:sgpr(s32), %2:sgpr
  %4:sgpr(s32) = G_AND %3:sreg_32, %2:sgpr
  G_STORE %4:sgpr(s32), %0:sgpr(p1) :: (store (s32), addrspace 1)
  S_ENDPGM 0

bb.0:
  liveins: $sgpr0, $vgpr0_vgpr1
  %0:sgpr(p1) = COPY $vgpr0_vgpr1
  %1:sgpr(s32) = COPY $sgpr0
  %2:sgpr(s32) = G_CONSTANT i32 1
  %3:sreg_32(s32) = G_ICMP intpred(ne), %1:sgpr(s32), %2:sgpr
  %3:sreg_32(s32) = COPY %4:sgpr(s32)
  %4:sgpr(s32) = G_AND %3:sreg_32, %2:sgpr
  G_STORE %4:sgpr(s32), %0:sgpr(p1) :: (store (s32), addrspace 1)
  S_ENDPGM 0
# After AMDGPURegBankCombiner
# Machine code for function replaceRegWith_requires_copy: IsSSA, NoPHIs, TracksLiveness

bb.0:
  liveins: $sgpr0, $vgpr0_vgpr1
  %0:sgpr(p1) = COPY $vgpr0_vgpr1
  G_STORE %4:sgpr(s32), %0:sgpr(p1) :: (store (s32), addrspace 1)
  S_ENDPGM 0

# End machine code for function replaceRegWith_requires_copy.

Contributor:

Not sure where the register class reference came from

Collaborator (Author):

It is from the weird BFE lowering; BFE sets reg classes on all its inputs since it is inst-selected during regbank-select.

body:             |
  bb.1 (%ir-block.0):
    liveins: $sgpr2_sgpr3
  
    %2:sgpr(p4) = COPY $sgpr2_sgpr3
    %16:sgpr(s64) = G_CONSTANT i64 36
    %17:sgpr(p4) = nuw nusw G_PTR_ADD %2, %16(s64)
    %18:sgpr(p1) = G_LOAD %17(p4) :: (dereferenceable invariant load (p1) from %ir.out.kernarg.offset, align 4, addrspace 4)
    %20:sreg_32(s32) = G_CONSTANT i32 0
    %21:sgpr(s32) = G_CONSTANT i32 63
    %22:sgpr(s32) = G_AND %20, %21
    %23:sgpr(s32) = G_CONSTANT i32 16
    %24:sgpr(s32) = G_SHL %20, %23(s32)
    %25:sreg_32(s32) = G_OR %22, %24
    %19:sreg_32(s32) = S_BFE_U32 %20(s32), %25(s32), implicit-def $scc
    %26:vgpr(s32) = COPY %19(s32)
    %27:vgpr(p1) = COPY %18(p1)
    G_STORE %26(s32), %27(p1) :: (store (s32) into %ir.out.load, addrspace 1)
    S_ENDPGM 0
...

Contributor:

ugh, we shouldn't be directly introducing S_BFE there


Observer.finishedChangingAllUsesOfReg();
}
@@ -229,8 +229,8 @@ bool CombinerHelper::matchCombineCopy(MachineInstr &MI) {
void CombinerHelper::applyCombineCopy(MachineInstr &MI) {
Register DstReg = MI.getOperand(0).getReg();
Register SrcReg = MI.getOperand(1).getReg();
MI.eraseFromParent();
replaceRegWith(MRI, DstReg, SrcReg);
MI.eraseFromParent();
}

bool CombinerHelper::matchFreezeOfSingleMaybePoisonOperand(
@@ -379,8 +379,8 @@ void CombinerHelper::applyCombineConcatVectors(MachineInstr &MI,
Builder.buildUndef(NewDstReg);
else
Builder.buildBuildVector(NewDstReg, Ops);
MI.eraseFromParent();
replaceRegWith(MRI, DstReg, NewDstReg);
MI.eraseFromParent();
}

bool CombinerHelper::matchCombineShuffleConcat(MachineInstr &MI,
@@ -559,8 +559,8 @@ void CombinerHelper::applyCombineShuffleVector(MachineInstr &MI,
else
Builder.buildMergeLikeInstr(NewDstReg, Ops);

MI.eraseFromParent();
replaceRegWith(MRI, DstReg, NewDstReg);
MI.eraseFromParent();
}

bool CombinerHelper::matchShuffleToExtract(MachineInstr &MI) {
@@ -2825,17 +2825,17 @@ void CombinerHelper::replaceSingleDefInstWithOperand(MachineInstr &MI,
Register OldReg = MI.getOperand(0).getReg();
Register Replacement = MI.getOperand(OpIdx).getReg();
assert(canReplaceReg(OldReg, Replacement, MRI) && "Cannot replace register?");
MI.eraseFromParent();
replaceRegWith(MRI, OldReg, Replacement);
MI.eraseFromParent();
}

void CombinerHelper::replaceSingleDefInstWithReg(MachineInstr &MI,
Register Replacement) {
assert(MI.getNumExplicitDefs() == 1 && "Expected one explicit def?");
Register OldReg = MI.getOperand(0).getReg();
assert(canReplaceReg(OldReg, Replacement, MRI) && "Cannot replace register?");
MI.eraseFromParent();
replaceRegWith(MRI, OldReg, Replacement);
MI.eraseFromParent();
Contributor:

Why is this necessary? Isn't it better to eliminate the old use to give the replacement less work?

Collaborator (Author):

The problem is when replaceRegWith wants to insert a COPY: the insert point is MI, which has already been deleted. There are other places in this file where MI is erased after replaceRegWith.

}

bool CombinerHelper::matchConstantLargerBitWidth(MachineInstr &MI,
3 changes: 2 additions & 1 deletion llvm/lib/Target/AMDGPU/AMDGPUCombine.td
@@ -169,5 +169,6 @@ def AMDGPURegBankCombiner : GICombiner<
"AMDGPURegBankCombinerImpl",
[unmerge_merge, unmerge_cst, unmerge_undef,
zext_trunc_fold, int_minmax_to_med3, ptr_add_immed_chain,
fp_minmax_to_clamp, fp_minmax_to_med3, fmed3_intrinsic_to_clamp]> {
fp_minmax_to_clamp, fp_minmax_to_med3, fmed3_intrinsic_to_clamp,
redundant_and]> {
}
@@ -27,10 +27,8 @@ define hidden <2 x i64> @icmp_v2i32_zext_to_v2i64(<2 x i32> %arg) {
; CHECK-NEXT: v_mov_b32_e32 v3, 0
; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo
; CHECK-NEXT: v_cmp_eq_u32_e32 vcc_lo, 0, v1
; CHECK-NEXT: v_and_b32_e32 v0, 1, v0
; CHECK-NEXT: v_cndmask_b32_e64 v1, 0, 1, vcc_lo
; CHECK-NEXT: v_and_b32_e32 v2, 1, v1
; CHECK-NEXT: v_mov_b32_e32 v1, 0
; CHECK-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc_lo
; CHECK-NEXT: s_setpc_b64 s[30:31]
%cmp = icmp eq <2 x i32> %arg, zeroinitializer
%sext = zext <2 x i1> %cmp to <2 x i64>
@@ -0,0 +1,28 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -mtriple=amdgcn-amd-mesa3d -mcpu=gfx1010 -run-pass=amdgpu-regbank-combiner -verify-machineinstrs %s -o - | FileCheck %s

---
name: replaceRegWith_requires_copy
tracksRegLiveness: true
body: |
bb.0:
liveins: $sgpr0, $vgpr0_vgpr1

; CHECK-LABEL: name: replaceRegWith_requires_copy
; CHECK: liveins: $sgpr0, $vgpr0_vgpr1
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: [[COPY:%[0-9]+]]:sgpr(p1) = COPY $vgpr0_vgpr1
; CHECK-NEXT: [[COPY1:%[0-9]+]]:sgpr(s32) = COPY $sgpr0
; CHECK-NEXT: [[C:%[0-9]+]]:sgpr(s32) = G_CONSTANT i32 1
; CHECK-NEXT: [[ICMP:%[0-9]+]]:sreg_32(s32) = G_ICMP intpred(ne), [[COPY1]](s32), [[C]]
; CHECK-NEXT: [[COPY2:%[0-9]+]]:sgpr(s32) = COPY [[ICMP]](s32)
; CHECK-NEXT: G_STORE [[COPY2]](s32), [[COPY]](p1) :: (store (s32), addrspace 1)
; CHECK-NEXT: S_ENDPGM 0
%0:sgpr(p1) = COPY $vgpr0_vgpr1
%1:sgpr(s32) = COPY $sgpr0
%2:sgpr(s32) = G_CONSTANT i32 1
%3:sreg_32(s32) = G_ICMP intpred(ne), %1, %2
%4:sgpr(s32) = G_AND %3, %2
G_STORE %4(s32), %0(p1) :: (store (s32), addrspace 1)
S_ENDPGM 0
...
12 changes: 6 additions & 6 deletions llvm/test/CodeGen/AMDGPU/fptoi.i128.ll
@@ -136,12 +136,12 @@ define i128 @fptosi_f64_to_i128(double %x) {
; GISEL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GISEL-NEXT: v_mov_b32_e32 v5, v1
; GISEL-NEXT: v_mov_b32_e32 v4, v0
; GISEL-NEXT: v_lshrrev_b32_e32 v0, 20, v5
; GISEL-NEXT: v_and_b32_e32 v6, 0x7ff, v0
; GISEL-NEXT: v_lshrrev_b32_e32 v2, 20, v5
; GISEL-NEXT: v_mov_b32_e32 v0, 0x3ff
; GISEL-NEXT: s_mov_b64 s[4:5], 0
; GISEL-NEXT: v_mov_b32_e32 v1, 0
; GISEL-NEXT: v_mov_b32_e32 v7, 0
; GISEL-NEXT: v_mov_b32_e32 v1, 0
; GISEL-NEXT: v_and_b32_e32 v6, 0x7ff, v2
; GISEL-NEXT: v_cmp_ge_u64_e32 vcc, v[6:7], v[0:1]
; GISEL-NEXT: s_mov_b64 s[6:7], s[4:5]
; GISEL-NEXT: v_mov_b32_e32 v0, s4
@@ -508,12 +508,12 @@ define i128 @fptoui_f64_to_i128(double %x) {
; GISEL-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GISEL-NEXT: v_mov_b32_e32 v5, v1
; GISEL-NEXT: v_mov_b32_e32 v4, v0
; GISEL-NEXT: v_lshrrev_b32_e32 v0, 20, v5
; GISEL-NEXT: v_and_b32_e32 v6, 0x7ff, v0
; GISEL-NEXT: v_lshrrev_b32_e32 v2, 20, v5
; GISEL-NEXT: v_mov_b32_e32 v0, 0x3ff
; GISEL-NEXT: s_mov_b64 s[4:5], 0
; GISEL-NEXT: v_mov_b32_e32 v1, 0
; GISEL-NEXT: v_mov_b32_e32 v7, 0
; GISEL-NEXT: v_mov_b32_e32 v1, 0
; GISEL-NEXT: v_and_b32_e32 v6, 0x7ff, v2
; GISEL-NEXT: v_cmp_ge_u64_e32 vcc, v[6:7], v[0:1]
; GISEL-NEXT: s_mov_b64 s[6:7], s[4:5]
; GISEL-NEXT: v_mov_b32_e32 v0, s4
166 changes: 82 additions & 84 deletions llvm/test/CodeGen/AMDGPU/itofp.i128.ll
@@ -673,38 +673,38 @@ define double @sitofp_i128_to_f64(i128 %x) {
; GISEL-NEXT: v_ashrrev_i32_e32 v6, 31, v3
; GISEL-NEXT: v_xor_b32_e32 v0, v6, v4
; GISEL-NEXT: v_xor_b32_e32 v1, v6, v5
; GISEL-NEXT: v_sub_co_u32_e32 v0, vcc, v0, v6
; GISEL-NEXT: v_xor_b32_e32 v2, v6, v2
; GISEL-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v6, vcc
; GISEL-NEXT: v_xor_b32_e32 v3, v6, v3
; GISEL-NEXT: v_subb_co_u32_e32 v2, vcc, v2, v6, vcc
; GISEL-NEXT: v_ffbh_u32_e32 v5, v0
; GISEL-NEXT: v_subb_co_u32_e32 v3, vcc, v3, v6, vcc
; GISEL-NEXT: v_ffbh_u32_e32 v4, v1
; GISEL-NEXT: v_add_u32_e32 v5, 32, v5
; GISEL-NEXT: v_ffbh_u32_e32 v7, v2
; GISEL-NEXT: v_min_u32_e32 v4, v4, v5
; GISEL-NEXT: v_ffbh_u32_e32 v5, v3
; GISEL-NEXT: v_xor_b32_e32 v4, v6, v2
; GISEL-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v6
; GISEL-NEXT: v_xor_b32_e32 v5, v6, v3
; GISEL-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v6, vcc
; GISEL-NEXT: v_subb_co_u32_e32 v4, vcc, v4, v6, vcc
; GISEL-NEXT: v_ffbh_u32_e32 v1, v2
; GISEL-NEXT: v_subb_co_u32_e32 v5, vcc, v5, v6, vcc
; GISEL-NEXT: v_ffbh_u32_e32 v0, v3
; GISEL-NEXT: v_add_u32_e32 v1, 32, v1
; GISEL-NEXT: v_ffbh_u32_e32 v7, v4
; GISEL-NEXT: v_min_u32_e32 v0, v0, v1
; GISEL-NEXT: v_ffbh_u32_e32 v1, v5
; GISEL-NEXT: v_add_u32_e32 v7, 32, v7
; GISEL-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[2:3]
; GISEL-NEXT: v_add_u32_e32 v4, 64, v4
; GISEL-NEXT: v_min_u32_e32 v5, v5, v7
; GISEL-NEXT: v_cndmask_b32_e32 v9, v5, v4, vcc
; GISEL-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[4:5]
; GISEL-NEXT: v_add_u32_e32 v0, 64, v0
; GISEL-NEXT: v_min_u32_e32 v1, v1, v7
; GISEL-NEXT: v_cndmask_b32_e32 v9, v1, v0, vcc
; GISEL-NEXT: v_sub_u32_e32 v8, 0x80, v9
; GISEL-NEXT: v_sub_u32_e32 v7, 0x7f, v9
; GISEL-NEXT: v_cmp_ge_i32_e32 vcc, 53, v8
; GISEL-NEXT: ; implicit-def: $vgpr10
; GISEL-NEXT: ; implicit-def: $vgpr4_vgpr5
; GISEL-NEXT: ; implicit-def: $vgpr0_vgpr1
; GISEL-NEXT: s_and_saveexec_b64 s[4:5], vcc
; GISEL-NEXT: s_xor_b64 s[4:5], exec, s[4:5]
; GISEL-NEXT: ; %bb.2: ; %itofp-if-else
; GISEL-NEXT: v_add_u32_e32 v2, 0xffffffb5, v9
; GISEL-NEXT: v_lshlrev_b64 v[0:1], v2, v[0:1]
; GISEL-NEXT: v_cmp_gt_u32_e32 vcc, 64, v2
; GISEL-NEXT: v_cndmask_b32_e32 v4, 0, v0, vcc
; GISEL-NEXT: v_add_u32_e32 v4, 0xffffffb5, v9
; GISEL-NEXT: v_lshlrev_b64 v[0:1], v4, v[2:3]
; GISEL-NEXT: v_cmp_gt_u32_e32 vcc, 64, v4
; GISEL-NEXT: v_cndmask_b32_e32 v0, 0, v0, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v10, 0, v1, vcc
; GISEL-NEXT: ; implicit-def: $vgpr8
; GISEL-NEXT: ; implicit-def: $vgpr0
; GISEL-NEXT: ; implicit-def: $vgpr2
; GISEL-NEXT: ; implicit-def: $vgpr9
; GISEL-NEXT: ; %bb.3: ; %Flow3
; GISEL-NEXT: s_andn2_saveexec_b64 s[8:9], s[4:5]
@@ -721,89 +721,88 @@ define double @sitofp_i128_to_f64(i128 %x) {
; GISEL-NEXT: ; %bb.6: ; %itofp-sw-default
; GISEL-NEXT: v_sub_u32_e32 v14, 0x49, v9
; GISEL-NEXT: v_sub_u32_e32 v10, 64, v14
; GISEL-NEXT: v_lshrrev_b64 v[4:5], v14, v[0:1]
; GISEL-NEXT: v_lshlrev_b64 v[10:11], v10, v[2:3]
; GISEL-NEXT: v_lshrrev_b64 v[0:1], v14, v[2:3]
; GISEL-NEXT: v_lshlrev_b64 v[10:11], v10, v[4:5]
; GISEL-NEXT: v_subrev_u32_e32 v15, 64, v14
; GISEL-NEXT: v_or_b32_e32 v10, v4, v10
; GISEL-NEXT: v_or_b32_e32 v11, v5, v11
; GISEL-NEXT: v_lshrrev_b64 v[4:5], v15, v[2:3]
; GISEL-NEXT: v_lshrrev_b64 v[12:13], v14, v[2:3]
; GISEL-NEXT: v_lshrrev_b64 v[12:13], v14, v[4:5]
; GISEL-NEXT: v_or_b32_e32 v10, v0, v10
; GISEL-NEXT: v_or_b32_e32 v11, v1, v11
; GISEL-NEXT: v_lshrrev_b64 v[0:1], v15, v[4:5]
; GISEL-NEXT: v_cmp_gt_u32_e32 vcc, 64, v14
; GISEL-NEXT: v_add_u32_e32 v9, 55, v9
; GISEL-NEXT: v_cndmask_b32_e32 v0, v0, v10, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v1, v1, v11, vcc
; GISEL-NEXT: v_cmp_eq_u32_e64 s[4:5], 0, v14
; GISEL-NEXT: v_add_u32_e32 v14, 55, v9
; GISEL-NEXT: v_cndmask_b32_e32 v4, v4, v10, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v5, v5, v11, vcc
; GISEL-NEXT: v_sub_u32_e32 v11, 64, v14
; GISEL-NEXT: v_cndmask_b32_e64 v13, v4, v0, s[4:5]
; GISEL-NEXT: v_cndmask_b32_e64 v4, v5, v1, s[4:5]
; GISEL-NEXT: v_cndmask_b32_e32 v5, 0, v12, vcc
; GISEL-NEXT: v_lshrrev_b64 v[9:10], v14, -1
; GISEL-NEXT: v_lshlrev_b64 v[11:12], v11, -1
; GISEL-NEXT: v_subrev_u32_e32 v15, 64, v14
; GISEL-NEXT: v_or_b32_e32 v16, v9, v11
; GISEL-NEXT: v_or_b32_e32 v17, v10, v12
; GISEL-NEXT: v_lshrrev_b64 v[11:12], v15, -1
; GISEL-NEXT: v_cmp_gt_u32_e32 vcc, 64, v14
; GISEL-NEXT: v_cndmask_b32_e32 v11, v11, v16, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v12, v12, v17, vcc
; GISEL-NEXT: v_cmp_eq_u32_e64 s[4:5], 0, v14
; GISEL-NEXT: v_cndmask_b32_e32 v9, 0, v9, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v10, 0, v10, vcc
; GISEL-NEXT: v_cndmask_b32_e64 v11, v11, -1, s[4:5]
; GISEL-NEXT: v_cndmask_b32_e64 v12, v12, -1, s[4:5]
; GISEL-NEXT: v_and_b32_e32 v2, v9, v2
; GISEL-NEXT: v_and_b32_e32 v3, v10, v3
; GISEL-NEXT: v_and_or_b32 v0, v11, v0, v2
; GISEL-NEXT: v_and_or_b32 v1, v12, v1, v3
; GISEL-NEXT: v_cndmask_b32_e32 v11, 0, v12, vcc
; GISEL-NEXT: v_sub_u32_e32 v12, 64, v9
; GISEL-NEXT: v_cndmask_b32_e64 v14, v0, v2, s[4:5]
; GISEL-NEXT: v_cndmask_b32_e64 v10, v1, v3, s[4:5]
; GISEL-NEXT: v_lshrrev_b64 v[0:1], v9, -1
; GISEL-NEXT: v_lshlrev_b64 v[12:13], v12, -1
; GISEL-NEXT: v_subrev_u32_e32 v15, 64, v9
; GISEL-NEXT: v_or_b32_e32 v16, v0, v12
; GISEL-NEXT: v_or_b32_e32 v17, v1, v13
; GISEL-NEXT: v_lshrrev_b64 v[12:13], v15, -1
; GISEL-NEXT: v_cmp_gt_u32_e32 vcc, 64, v9
; GISEL-NEXT: v_cndmask_b32_e32 v12, v12, v16, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v13, v13, v17, vcc
; GISEL-NEXT: v_cmp_eq_u32_e64 s[4:5], 0, v9
; GISEL-NEXT: v_cndmask_b32_e32 v0, 0, v0, vcc
; GISEL-NEXT: v_cndmask_b32_e32 v1, 0, v1, vcc
; GISEL-NEXT: v_cndmask_b32_e64 v9, v12, -1, s[4:5]
; GISEL-NEXT: v_cndmask_b32_e64 v12, v13, -1, s[4:5]
; GISEL-NEXT: v_and_b32_e32 v0, v0, v4
; GISEL-NEXT: v_and_b32_e32 v1, v1, v5
; GISEL-NEXT: v_and_or_b32 v0, v9, v2, v0
; GISEL-NEXT: v_and_or_b32 v1, v12, v3, v1
; GISEL-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[0:1]
; GISEL-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
; GISEL-NEXT: v_or_b32_e32 v3, v13, v0
; GISEL-NEXT: v_mov_b32_e32 v0, v3
; GISEL-NEXT: v_mov_b32_e32 v1, v4
; GISEL-NEXT: v_mov_b32_e32 v2, v5
; GISEL-NEXT: v_mov_b32_e32 v3, v6
; GISEL-NEXT: v_or_b32_e32 v9, v14, v0
; GISEL-NEXT: v_mov_b32_e32 v2, v9
; GISEL-NEXT: v_mov_b32_e32 v3, v10
; GISEL-NEXT: v_mov_b32_e32 v4, v11
; GISEL-NEXT: v_mov_b32_e32 v5, v12
; GISEL-NEXT: .LBB2_7: ; %Flow1
; GISEL-NEXT: s_or_b64 exec, exec, s[12:13]
; GISEL-NEXT: .LBB2_8: ; %Flow2
; GISEL-NEXT: s_andn2_saveexec_b64 s[4:5], s[10:11]
; GISEL-NEXT: s_cbranch_execz .LBB2_10
; GISEL-NEXT: ; %bb.9: ; %itofp-sw-bb
; GISEL-NEXT: v_lshlrev_b64 v[9:10], 1, v[0:1]
; GISEL-NEXT: v_lshlrev_b64 v[2:3], 1, v[2:3]
; GISEL-NEXT: v_lshrrev_b32_e32 v0, 31, v1
; GISEL-NEXT: v_or_b32_e32 v11, v2, v0
; GISEL-NEXT: v_mov_b32_e32 v0, v9
; GISEL-NEXT: v_mov_b32_e32 v1, v10
; GISEL-NEXT: v_mov_b32_e32 v2, v11
; GISEL-NEXT: v_mov_b32_e32 v3, v12
; GISEL-NEXT: v_lshlrev_b64 v[4:5], 1, v[4:5]
; GISEL-NEXT: v_lshlrev_b64 v[0:1], 1, v[2:3]
; GISEL-NEXT: v_lshrrev_b32_e32 v2, 31, v3
; GISEL-NEXT: v_or_b32_e32 v2, v4, v2
; GISEL-NEXT: v_mov_b32_e32 v5, v3
; GISEL-NEXT: v_mov_b32_e32 v4, v2
; GISEL-NEXT: v_mov_b32_e32 v3, v1
; GISEL-NEXT: v_mov_b32_e32 v2, v0
; GISEL-NEXT: .LBB2_10: ; %itofp-sw-epilog
; GISEL-NEXT: s_or_b64 exec, exec, s[4:5]
; GISEL-NEXT: v_bfe_u32 v3, v0, 2, 1
; GISEL-NEXT: v_or_b32_e32 v0, v0, v3
; GISEL-NEXT: v_add_co_u32_e32 v0, vcc, 1, v0
; GISEL-NEXT: v_addc_co_u32_e32 v1, vcc, 0, v1, vcc
; GISEL-NEXT: v_addc_co_u32_e32 v2, vcc, 0, v2, vcc
; GISEL-NEXT: v_lshrrev_b64 v[4:5], 2, v[0:1]
; GISEL-NEXT: v_bfe_u32 v0, v2, 2, 1
; GISEL-NEXT: v_or_b32_e32 v0, v2, v0
; GISEL-NEXT: v_add_co_u32_e32 v2, vcc, 1, v0
; GISEL-NEXT: v_addc_co_u32_e32 v3, vcc, 0, v3, vcc
; GISEL-NEXT: v_addc_co_u32_e32 v4, vcc, 0, v4, vcc
; GISEL-NEXT: v_lshrrev_b64 v[0:1], 2, v[2:3]
; GISEL-NEXT: v_mov_b32_e32 v9, 0
; GISEL-NEXT: v_and_b32_e32 v10, 0x800000, v1
; GISEL-NEXT: v_and_b32_e32 v10, 0x800000, v3
; GISEL-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[9:10]
; GISEL-NEXT: v_lshl_or_b32 v10, v2, 30, v5
; GISEL-NEXT: v_lshl_or_b32 v10, v4, 30, v1
; GISEL-NEXT: s_and_saveexec_b64 s[4:5], vcc
; GISEL-NEXT: ; %bb.11: ; %itofp-if-then20
; GISEL-NEXT: v_lshrrev_b64 v[4:5], 3, v[0:1]
; GISEL-NEXT: v_lshrrev_b64 v[0:1], 3, v[2:3]
; GISEL-NEXT: v_mov_b32_e32 v7, v8
; GISEL-NEXT: v_lshl_or_b32 v10, v2, 29, v5
; GISEL-NEXT: v_lshl_or_b32 v10, v4, 29, v1
; GISEL-NEXT: ; %bb.12: ; %Flow
; GISEL-NEXT: s_or_b64 exec, exec, s[4:5]
; GISEL-NEXT: .LBB2_13: ; %Flow4
; GISEL-NEXT: s_or_b64 exec, exec, s[8:9]
; GISEL-NEXT: v_and_b32_e32 v0, 0x80000000, v6
; GISEL-NEXT: v_mov_b32_e32 v1, 0x3ff00000
; GISEL-NEXT: v_mov_b32_e32 v2, 0xfffff
; GISEL-NEXT: v_lshl_add_u32 v1, v7, 20, v1
; GISEL-NEXT: v_and_or_b32 v2, v10, v2, v0
; GISEL-NEXT: v_and_or_b32 v0, v4, -1, 0
; GISEL-NEXT: v_or3_b32 v1, v2, v1, 0
; GISEL-NEXT: v_and_b32_e32 v1, 0x80000000, v6
; GISEL-NEXT: v_mov_b32_e32 v2, 0x3ff00000
; GISEL-NEXT: v_mov_b32_e32 v3, 0xfffff
; GISEL-NEXT: v_lshl_add_u32 v2, v7, 20, v2
; GISEL-NEXT: v_and_or_b32 v1, v10, v3, v1
; GISEL-NEXT: v_or3_b32 v1, v1, v2, 0
; GISEL-NEXT: .LBB2_14: ; %Flow5
; GISEL-NEXT: s_or_b64 exec, exec, s[6:7]
; GISEL-NEXT: s_setpc_b64 s[30:31]
@@ -1083,7 +1082,6 @@ define double @uitofp_i128_to_f64(i128 %x) {
; GISEL-NEXT: v_mov_b32_e32 v0, 0x3ff00000
; GISEL-NEXT: v_lshl_add_u32 v0, v6, 20, v0
; GISEL-NEXT: v_and_b32_e32 v1, 0xfffff, v9
; GISEL-NEXT: v_and_or_b32 v4, v4, -1, 0
; GISEL-NEXT: v_or3_b32 v5, v1, v0, 0
; GISEL-NEXT: .LBB3_14: ; %Flow5
; GISEL-NEXT: s_or_b64 exec, exec, s[6:7]