[AMDGPU] Implement llvm.lrint intrinsic lowering #98931
Conversation
This patch enables the target-independent lowering of llvm.lrint via GlobalISel. For SelectionDAG, the intrinsic is custom lowered.
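For reference, the change rewrites these integer-rounding intrinsics as a round-to-nearest-even followed by a signed float-to-integer conversion. A minimal IR sketch of the before/after shape (function names here are illustrative, not part of the patch):

; Before lowering: the intrinsic call.
define i32 @lrint_sketch(float %x) {
  %r = tail call i32 @llvm.lrint.i32.f32(float %x)
  ret i32 %r
}

; After lowering (conceptually): roundeven, then fptosi.
define i32 @lrint_sketch_lowered(float %x) {
  %rounded = tail call float @llvm.roundeven.f32(float %x)
  %r = fptosi float %rounded to i32
  ret i32 %r
}

declare i32 @llvm.lrint.i32.f32(float)
declare float @llvm.roundeven.f32(float)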
Thank you for submitting a Pull Request (PR) to the LLVM Project! This PR will be automatically labeled and the relevant teams will be notified. If you wish to, you can add reviewers by using the "Reviewers" section on this page. If this is not working for you, it is probably because you do not have write permissions for the repository. If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR in a comment. If you have further questions, they may be answered by the LLVM GitHub User Guide. You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.
@llvm/pr-subscribers-llvm-selectiondag @llvm/pr-subscribers-backend-amdgpu

Author: Sumanth Gundapaneni (sgundapa)

Changes: This patch enables the target-independent lowering of llvm.lrint via GlobalISel.

Patch is 44.77 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/98931.diff

6 Files Affected:
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index 3f1094e0ac703..c63b24caf6106 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -3818,6 +3818,17 @@ LegalizerHelper::lower(MachineInstr &MI, unsigned TypeIdx, LLT LowerHintTy) {
changeOpcode(MI, TargetOpcode::G_INTRINSIC_ROUNDEVEN);
return Legalized;
}
+ case TargetOpcode::G_INTRINSIC_LRINT:
+ case TargetOpcode::G_INTRINSIC_LLRINT: {
+ Register DstReg = MI.getOperand(0).getReg();
+ Register SrcReg = MI.getOperand(1).getReg();
+ LLT SrcTy = MRI.getType(SrcReg);
+ auto Round = MIRBuilder.buildIntrinsicRoundeven(SrcTy, SrcReg);
+
+ MIRBuilder.buildFPTOSI(DstReg, Round);
+ MI.eraseFromParent();
+ return Legalized;
+ }
case TargetOpcode::G_ATOMIC_CMPXCHG_WITH_SUCCESS: {
auto [OldValRes, SuccessRes, Addr, CmpVal, NewVal] = MI.getFirst5Regs();
Register NewOldValRes = MRI.cloneVirtualRegister(OldValRes);
@@ -4668,6 +4679,8 @@ LegalizerHelper::fewerElementsVector(MachineInstr &MI, unsigned TypeIdx,
case G_FCEIL:
case G_FFLOOR:
case G_FRINT:
+ case G_INTRINSIC_LRINT:
+ case G_INTRINSIC_LLRINT:
case G_INTRINSIC_ROUND:
case G_INTRINSIC_ROUNDEVEN:
case G_INTRINSIC_TRUNC:
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
index ef30bf6d993fa..ef3e74c9a622f 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.cpp
@@ -404,7 +404,8 @@ AMDGPUTargetLowering::AMDGPUTargetLowering(const TargetMachine &TM,
setOperationAction(ISD::FNEARBYINT, {MVT::f16, MVT::f32, MVT::f64}, Custom);
- setOperationAction(ISD::FRINT, {MVT::f16, MVT::f32, MVT::f64}, Custom);
+ setOperationAction({ISD::FRINT, ISD::LRINT, ISD::LLRINT},
+ {MVT::f16, MVT::f32, MVT::f64}, Custom);
setOperationAction(ISD::FREM, {MVT::f16, MVT::f32, MVT::f64}, Custom);
@@ -1388,7 +1389,11 @@ SDValue AMDGPUTargetLowering::LowerOperation(SDValue Op,
case ISD::FCEIL: return LowerFCEIL(Op, DAG);
case ISD::FTRUNC: return LowerFTRUNC(Op, DAG);
case ISD::FRINT: return LowerFRINT(Op, DAG);
- case ISD::FNEARBYINT: return LowerFNEARBYINT(Op, DAG);
+ case ISD::LRINT:
+ case ISD::LLRINT:
+ return LowerLRINT(Op, DAG);
+ case ISD::FNEARBYINT:
+ return LowerFNEARBYINT(Op, DAG);
case ISD::FROUNDEVEN:
return LowerFROUNDEVEN(Op, DAG);
case ISD::FROUND: return LowerFROUND(Op, DAG);
@@ -2496,6 +2501,14 @@ SDValue AMDGPUTargetLowering::LowerFRINT(SDValue Op, SelectionDAG &DAG) const {
return DAG.getNode(ISD::FROUNDEVEN, SDLoc(Op), VT, Arg);
}
+SDValue AMDGPUTargetLowering::LowerLRINT(SDValue Op, SelectionDAG &DAG) const {
+ auto ResVT = Op.getValueType();
+ auto Arg = Op.getOperand(0u);
+ auto ArgVT = Arg.getValueType();
+ SDValue RoundNode = DAG.getNode(ISD::FROUNDEVEN, SDLoc(Op), ArgVT, Arg);
+ return DAG.getNode(ISD::FP_TO_SINT, SDLoc(Op), ResVT, RoundNode);
+}
+
// XXX - May require not supporting f32 denormals?
// Don't handle v2f16. The extra instructions to scalarize and repack around the
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
index 37572af3897f2..2e8f857e95a2d 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
+++ b/llvm/lib/Target/AMDGPU/AMDGPUISelLowering.h
@@ -55,6 +55,7 @@ class AMDGPUTargetLowering : public TargetLowering {
SDValue LowerFCEIL(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFTRUNC(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFRINT(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerLRINT(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFNEARBYINT(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFROUNDEVEN(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
index 88e40da110555..0622690759c35 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
@@ -1141,6 +1141,11 @@ AMDGPULegalizerInfo::AMDGPULegalizerInfo(const GCNSubtarget &ST_,
.scalarize(0)
.lower();
+ getActionDefinitionsBuilder({G_INTRINSIC_LRINT, G_INTRINSIC_LLRINT})
+ .clampScalar(0, S16, S64)
+ .scalarize(0)
+ .lower();
+
if (ST.has16BitInsts()) {
getActionDefinitionsBuilder(
{G_INTRINSIC_TRUNC, G_FCEIL, G_INTRINSIC_ROUNDEVEN})
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/lrint.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/lrint.ll
new file mode 100644
index 0000000000000..c6ac0b2dd3334
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/lrint.ll
@@ -0,0 +1,493 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -global-isel -mtriple=amdgcn -mcpu=gfx900 < %s | FileCheck -check-prefixes=GCN,GFX9 %s
+; RUN: llc -global-isel -mtriple=amdgcn -mcpu=gfx1010 < %s | FileCheck -check-prefixes=GCN,GFX10 %s
+; RUN: llc -global-isel -mtriple=amdgcn -mcpu=gfx1100 < %s | FileCheck -check-prefixes=GCN,GFX11 %s
+
+declare float @llvm.rint.f32(float)
+declare i32 @llvm.lrint.i32.f32(float)
+declare i32 @llvm.lrint.i32.f64(double)
+declare i64 @llvm.lrint.i64.f32(float)
+declare i64 @llvm.lrint.i64.f64(double)
+declare i64 @llvm.llrint.i64.f32(float)
+declare half @llvm.rint.f16(half)
+declare i32 @llvm.lrint.i32.f16(half %arg)
+declare <2 x float> @llvm.rint.v2f32.v2f32(<2 x float> %arg)
+declare <2 x i32> @llvm.lrint.v2i32.v2f32(<2 x float> %arg)
+declare <2 x i64> @llvm.lrint.v2i64.v2f32(<2 x float> %arg)
+
+define float @intrinsic_frint(float %arg) {
+; GCN-LABEL: intrinsic_frint:
+; GCN: ; %bb.0: ; %entry
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT: v_rndne_f32_e32 v0, v0
+; GCN-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call float @llvm.rint.f32(float %arg)
+ ret float %0
+}
+
+define i32 @intrinsic_lrint_i32_f32(float %arg) {
+; GFX9-LABEL: intrinsic_lrint_i32_f32:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f32_e32 v0, v0
+; GFX9-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_i32_f32:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f32_e32 v0, v0
+; GFX10-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_i32_f32:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f32_e32 v0, v0
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX11-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i32 @llvm.lrint.i32.f32(float %arg)
+ ret i32 %0
+}
+
+define i32 @intrinsic_lrint_i32_f64(double %arg) {
+; GFX9-LABEL: intrinsic_lrint_i32_f64:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX9-NEXT: v_cvt_i32_f64_e32 v0, v[0:1]
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_i32_f64:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX10-NEXT: v_cvt_i32_f64_e32 v0, v[0:1]
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_i32_f64:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX11-NEXT: v_cvt_i32_f64_e32 v0, v[0:1]
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i32 @llvm.lrint.i32.f64(double %arg)
+ ret i32 %0
+}
+
+define i64 @intrinsic_lrint_i64_f32(float %arg) {
+; GFX9-LABEL: intrinsic_lrint_i64_f32:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f32_e32 v0, v0
+; GFX9-NEXT: v_trunc_f32_e32 v1, v0
+; GFX9-NEXT: v_mov_b32_e32 v2, 0x2f800000
+; GFX9-NEXT: v_mul_f32_e64 v2, |v1|, v2
+; GFX9-NEXT: v_floor_f32_e32 v2, v2
+; GFX9-NEXT: v_mov_b32_e32 v3, 0xcf800000
+; GFX9-NEXT: v_fma_f32 v1, v2, v3, |v1|
+; GFX9-NEXT: v_cvt_u32_f32_e32 v1, v1
+; GFX9-NEXT: v_cvt_u32_f32_e32 v2, v2
+; GFX9-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX9-NEXT: v_xor_b32_e32 v0, v1, v3
+; GFX9-NEXT: v_xor_b32_e32 v1, v2, v3
+; GFX9-NEXT: v_sub_co_u32_e32 v0, vcc, v0, v3
+; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v3, vcc
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_i64_f32:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f32_e32 v0, v0
+; GFX10-NEXT: v_trunc_f32_e32 v1, v0
+; GFX10-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX10-NEXT: v_mul_f32_e64 v2, 0x2f800000, |v1|
+; GFX10-NEXT: v_floor_f32_e32 v2, v2
+; GFX10-NEXT: v_fma_f32 v1, 0xcf800000, v2, |v1|
+; GFX10-NEXT: v_cvt_u32_f32_e32 v0, v1
+; GFX10-NEXT: v_cvt_u32_f32_e32 v1, v2
+; GFX10-NEXT: v_xor_b32_e32 v0, v0, v3
+; GFX10-NEXT: v_xor_b32_e32 v1, v1, v3
+; GFX10-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v3
+; GFX10-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_i64_f32:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f32_e32 v0, v0
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_trunc_f32_e32 v1, v0
+; GFX11-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX11-NEXT: v_mul_f32_e64 v2, 0x2f800000, |v1|
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_floor_f32_e32 v2, v2
+; GFX11-NEXT: v_fma_f32 v1, 0xcf800000, v2, |v1|
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_cvt_u32_f32_e32 v0, v1
+; GFX11-NEXT: v_cvt_u32_f32_e32 v1, v2
+; GFX11-NEXT: v_xor_b32_e32 v0, v0, v3
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_xor_b32_e32 v1, v1, v3
+; GFX11-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v3
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX11-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i64 @llvm.lrint.i64.f32(float %arg)
+ ret i64 %0
+}
+
+define i64 @intrinsic_lrint_i64_f64(double %arg) {
+; GFX9-LABEL: intrinsic_lrint_i64_f64:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX9-NEXT: v_mov_b32_e32 v2, 0
+; GFX9-NEXT: v_mov_b32_e32 v3, 0x3df00000
+; GFX9-NEXT: v_mov_b32_e32 v4, 0
+; GFX9-NEXT: v_mov_b32_e32 v5, 0xc1f00000
+; GFX9-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX9-NEXT: v_mul_f64 v[2:3], v[0:1], v[2:3]
+; GFX9-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX9-NEXT: v_fma_f64 v[0:1], v[2:3], v[4:5], v[0:1]
+; GFX9-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX9-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_i64_f64:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX10-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX10-NEXT: v_mul_f64 v[2:3], 0x3df00000, v[0:1]
+; GFX10-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX10-NEXT: v_fma_f64 v[0:1], 0xc1f00000, v[2:3], v[0:1]
+; GFX10-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX10-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_i64_f64:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX11-NEXT: v_mul_f64 v[2:3], 0x3df00000, v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX11-NEXT: v_fma_f64 v[0:1], 0xc1f00000, v[2:3], v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX11-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX11-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i64 @llvm.lrint.i64.f64(double %arg)
+ ret i64 %0
+}
+
+define i64 @intrinsic_llrint_i64_f32(float %arg) {
+; GFX9-LABEL: intrinsic_llrint_i64_f32:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f32_e32 v0, v0
+; GFX9-NEXT: v_trunc_f32_e32 v1, v0
+; GFX9-NEXT: v_mov_b32_e32 v2, 0x2f800000
+; GFX9-NEXT: v_mul_f32_e64 v2, |v1|, v2
+; GFX9-NEXT: v_floor_f32_e32 v2, v2
+; GFX9-NEXT: v_mov_b32_e32 v3, 0xcf800000
+; GFX9-NEXT: v_fma_f32 v1, v2, v3, |v1|
+; GFX9-NEXT: v_cvt_u32_f32_e32 v1, v1
+; GFX9-NEXT: v_cvt_u32_f32_e32 v2, v2
+; GFX9-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX9-NEXT: v_xor_b32_e32 v0, v1, v3
+; GFX9-NEXT: v_xor_b32_e32 v1, v2, v3
+; GFX9-NEXT: v_sub_co_u32_e32 v0, vcc, v0, v3
+; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v3, vcc
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_llrint_i64_f32:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f32_e32 v0, v0
+; GFX10-NEXT: v_trunc_f32_e32 v1, v0
+; GFX10-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX10-NEXT: v_mul_f32_e64 v2, 0x2f800000, |v1|
+; GFX10-NEXT: v_floor_f32_e32 v2, v2
+; GFX10-NEXT: v_fma_f32 v1, 0xcf800000, v2, |v1|
+; GFX10-NEXT: v_cvt_u32_f32_e32 v0, v1
+; GFX10-NEXT: v_cvt_u32_f32_e32 v1, v2
+; GFX10-NEXT: v_xor_b32_e32 v0, v0, v3
+; GFX10-NEXT: v_xor_b32_e32 v1, v1, v3
+; GFX10-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v3
+; GFX10-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_llrint_i64_f32:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f32_e32 v0, v0
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_trunc_f32_e32 v1, v0
+; GFX11-NEXT: v_ashrrev_i32_e32 v3, 31, v0
+; GFX11-NEXT: v_mul_f32_e64 v2, 0x2f800000, |v1|
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_floor_f32_e32 v2, v2
+; GFX11-NEXT: v_fma_f32 v1, 0xcf800000, v2, |v1|
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(SKIP_1) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_cvt_u32_f32_e32 v0, v1
+; GFX11-NEXT: v_cvt_u32_f32_e32 v1, v2
+; GFX11-NEXT: v_xor_b32_e32 v0, v0, v3
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_2)
+; GFX11-NEXT: v_xor_b32_e32 v1, v1, v3
+; GFX11-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v3
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2)
+; GFX11-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i64 @llvm.llrint.i64.f32(float %arg)
+ ret i64 %0
+}
+
+define i64 @intrinsic_llrint_i64_f64(double %arg) {
+; GFX9-LABEL: intrinsic_llrint_i64_f64:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX9-NEXT: v_mov_b32_e32 v2, 0
+; GFX9-NEXT: v_mov_b32_e32 v3, 0x3df00000
+; GFX9-NEXT: v_mov_b32_e32 v4, 0
+; GFX9-NEXT: v_mov_b32_e32 v5, 0xc1f00000
+; GFX9-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX9-NEXT: v_mul_f64 v[2:3], v[0:1], v[2:3]
+; GFX9-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX9-NEXT: v_fma_f64 v[0:1], v[2:3], v[4:5], v[0:1]
+; GFX9-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX9-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_llrint_i64_f64:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX10-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX10-NEXT: v_mul_f64 v[2:3], 0x3df00000, v[0:1]
+; GFX10-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX10-NEXT: v_fma_f64 v[0:1], 0xc1f00000, v[2:3], v[0:1]
+; GFX10-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX10-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_llrint_i64_f64:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f64_e32 v[0:1], v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_trunc_f64_e32 v[0:1], v[0:1]
+; GFX11-NEXT: v_mul_f64 v[2:3], 0x3df00000, v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_floor_f64_e32 v[2:3], v[2:3]
+; GFX11-NEXT: v_fma_f64 v[0:1], 0xc1f00000, v[2:3], v[0:1]
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1)
+; GFX11-NEXT: v_cvt_u32_f64_e32 v0, v[0:1]
+; GFX11-NEXT: v_cvt_i32_f64_e32 v1, v[2:3]
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i64 @llvm.llrint.i64.f64(double %arg)
+ ret i64 %0
+}
+
+define half @intrinsic_frint_half(half %arg) {
+; GCN-LABEL: intrinsic_frint_half:
+; GCN: ; %bb.0: ; %entry
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT: v_rndne_f16_e32 v0, v0
+; GCN-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call half @llvm.rint.f16(half %arg)
+ ret half %0
+}
+
+define i32 @intrinsic_lrint_i32_f16(half %arg) {
+; GFX9-LABEL: intrinsic_lrint_i32_f16:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f16_e32 v0, v0
+; GFX9-NEXT: v_cvt_f32_f16_e32 v0, v0
+; GFX9-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_i32_f16:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f16_e32 v0, v0
+; GFX10-NEXT: v_cvt_f32_f16_e32 v0, v0
+; GFX10-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_i32_f16:
+; GFX11: ; %bb.0: ; %entry
+; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX11-NEXT: v_rndne_f16_e32 v0, v0
+; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX11-NEXT: v_cvt_f32_f16_e32 v0, v0
+; GFX11-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX11-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call i32 @llvm.lrint.i32.f16(half %arg)
+ ret i32 %0
+}
+
+define <2 x float> @intrinsic_frint_v2f32_v2f32(<2 x float> %arg) {
+; GCN-LABEL: intrinsic_frint_v2f32_v2f32:
+; GCN: ; %bb.0: ; %entry
+; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GCN-NEXT: v_rndne_f32_e32 v0, v0
+; GCN-NEXT: v_rndne_f32_e32 v1, v1
+; GCN-NEXT: s_setpc_b64 s[30:31]
+entry:
+ %0 = tail call <2 x float> @llvm.rint.v2f32.v2f32(<2 x float> %arg)
+ ret <2 x float> %0
+}
+
+define <2 x i32> @intrinsic_lrint_v2i32_v2f32(<2 x float> %arg) {
+; GFX9-LABEL: intrinsic_lrint_v2i32_v2f32:
+; GFX9: ; %bb.0: ; %entry
+; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT: v_rndne_f32_e32 v0, v0
+; GFX9-NEXT: v_rndne_f32_e32 v1, v1
+; GFX9-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX9-NEXT: v_cvt_i32_f32_e32 v1, v1
+; GFX9-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX10-LABEL: intrinsic_lrint_v2i32_v2f32:
+; GFX10: ; %bb.0: ; %entry
+; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10-NEXT: v_rndne_f32_e32 v0, v0
+; GFX10-NEXT: v_rndne_f32_e32 v1, v1
+; GFX10-NEXT: v_cvt_i32_f32_e32 v0, v0
+; GFX10-NEXT: v_cvt_i32_f32_e32 v1, v1
+; GFX10-NEXT: s_setpc_b64 s[30:31]
+;
+; GFX11-LABEL: intrinsic_lrint_v2i32_v2f32:
+; GFX11: ; %bb.0...
[truncated]
Unless the target expands this node, the intrinsic is lowered to a library call by default.
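For comparison, a sketch of the default (non-custom) handling; the exact libcall chosen depends on the target's type legalization, so treat the specific runtime function named below as an assumption:

; With no custom lowering, a call such as this one...
define i64 @llrint_libcall_sketch(float %x) {
  %r = tail call i64 @llvm.llrint.i64.f32(float %x)
  ret i64 %r
}
declare i64 @llvm.llrint.i64.f32(float)
; ...is typically expanded by SelectionDAG to a call to the C runtime
; function (e.g. llrintf for f32) instead of inline round/convert
; instructions.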
✅ With the latest revision this PR passed the C/C++ code formatter.
@sgundapa Congratulations on having your first Pull Request (PR) merged into the LLVM Project! Your changes will be combined with recent changes from other authors, then tested by our build bots. Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. How to do this, and the rest of the post-merge process, is covered in detail here. If your change does cause a problem, it may be reverted, or you can revert it yourself. If you don't get any reports, no action is required from you. Your changes are working as expected, well done!
Summary: This patch enables the target-independent lowering of llvm.lrint via GlobalISel. For SelectionDAG, the intrinsic is custom lowered for AMDGPU. Test Plan: Reviewers: Subscribers: Tasks: Tags: Differential Revision: https://phabricator.intern.facebook.com/D60250609
This patch enables the target-independent lowering of llvm.lrint via GlobalISel. For SelectionDAG, the intrinsic is custom lowered for AMDGPU. (cherry picked from commit 0ee32c4) Change-Id: I97cb9c0a1846e4fa5ba90b2ced615656ecd03383
This patch enables the target-independent lowering of llvm.lrint via GlobalISel.
For SelectionDAG, the intrinsic is custom lowered for AMDGPU.
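As a usage note (a sketch based on the RUN lines in the added test; the file path is the new test, other details are assumed), the GlobalISel path is exercised with -global-isel, while dropping that flag exercises the SelectionDAG custom lowering:

# GlobalISel lowering
llc -global-isel -mtriple=amdgcn -mcpu=gfx900 < llvm/test/CodeGen/AMDGPU/GlobalISel/lrint.ll
# SelectionDAG custom lowering
llc -mtriple=amdgcn -mcpu=gfx900 < llvm/test/CodeGen/AMDGPU/GlobalISel/lrint.ll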