[AArch64] Codegen for new SCVTF/UCVTF variants (FEAT_FPRCVT) #123767

Merged: 3 commits merged into llvm:main on Feb 6, 2025

Conversation

virginia-cangelosi (Contributor)

Adds patterns for the new SCVTF/UCVTF instructions to TableGen, with an associated .ll test file.


Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository, in which case you can instead tag reviewers by name in a comment by using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by "ping"ing the PR by adding a comment “Ping”. The common courtesy "ping" rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot (Member) commented Jan 21, 2025

@llvm/pr-subscribers-backend-aarch64

Author: Virginia Cangelosi (virginia-cangelosi)

Changes

Adds patterns for the new SCVTF/UCVTF instructions to TableGen, with an associated .ll test file.


Full diff: https://github.com/llvm/llvm-project/pull/123767.diff

3 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64InstrFormats.td (+10-2)
  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+2-2)
  • (added) llvm/test/CodeGen/AArch64/fprcvt-cvtf.ll (+159)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrFormats.td b/llvm/lib/Target/AArch64/AArch64InstrFormats.td
index 6a3a9492e031c6..d2a1bcee00291b 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrFormats.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrFormats.td
@@ -5487,7 +5487,7 @@ multiclass IntegerToFP<bits<2> rmode, bits<3> opcode, string asm, SDPatternOpera
   }
 }
 
-multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPatternOperator node = null_frag> {
+multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPatternOperator op, SDPatternOperator node = null_frag> {
   // 32-bit to half-precision
   def HSr: BaseIntegerToFPUnscaled<rmode, opcode, FPR32, FPR16, f16, asm, node> {
     let Inst{31} = 0; // 32-bit FPR flag
@@ -5511,6 +5511,15 @@ multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPa
     let Inst{31} = 1; // 64-bit FPR flag
     let Inst{23-22} = 0b00; // 32-bit FPR flag
   }
+
+  def : Pat<(f16 (any_fpround (f32 (op (i32 FPR32:$Rn))))),
+          (!cast<Instruction>(NAME # HSr) $Rn)>;
+  def : Pat<(f64 (op (i32 (extractelt (v4i32 V128:$Rn), (i64 0))))),
+          (!cast<Instruction>(NAME # DSr) (EXTRACT_SUBREG $Rn, ssub))>;
+  def : Pat<(f16 (any_fpround (f32 (op (i64 FPR64:$Rn))))),
+          (!cast<Instruction>(NAME # HDr) $Rn)>;
+  def : Pat<(f32 (op (i64 (extractelt (v2i64 V128:$Rn), (i64 0))))),
+          (!cast<Instruction>(NAME # SDr) (EXTRACT_SUBREG $Rn, dsub))>;
 }
 
 //---
@@ -13270,4 +13279,3 @@ multiclass SIMDThreeSameVectorFP8MatrixMul<string asm>{
       let Predicates = [HasNEON, HasF8F32MM];
     }
 }
-
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index 8e575abf83d449..e9d2fd2916f5ba 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -5060,8 +5060,8 @@ defm SCVTF : IntegerToFP<0b00, 0b010, "scvtf", any_sint_to_fp>;
 defm UCVTF : IntegerToFP<0b00, 0b011, "ucvtf", any_uint_to_fp>;
 
 let Predicates = [HasNEON, HasFPRCVT] in {
-  defm SCVTF : IntegerToFPSIMDScalar<0b11, 0b100, "scvtf">;
-  defm UCVTF : IntegerToFPSIMDScalar<0b11, 0b101, "ucvtf">;
+  defm SCVTF : IntegerToFPSIMDScalar<0b11, 0b100, "scvtf", any_sint_to_fp>;
+  defm UCVTF : IntegerToFPSIMDScalar<0b11, 0b101, "ucvtf", any_uint_to_fp>;
 }
 
 def : Pat<(f16 (fdiv (f16 (any_sint_to_fp (i32 GPR32:$Rn))), fixedpoint_f16_i32:$scale)),
diff --git a/llvm/test/CodeGen/AArch64/fprcvt-cvtf.ll b/llvm/test/CodeGen/AArch64/fprcvt-cvtf.ll
new file mode 100644
index 00000000000000..75fc6b65f024d5
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/fprcvt-cvtf.ll
@@ -0,0 +1,159 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mattr=+neon,+fprcvt -verify-machineinstrs %s -o - | FileCheck %s
+; RUN: llc -mattr=+neon -verify-machineinstrs %s -o - | FileCheck %s --check-prefix=CHECK-NO-FPRCVT
+
+target triple = "aarch64-unknown-linux-gnu"
+
+
+; To demonstrate what we have implemented, we'll want a scalar integer value in a SIMD/FP register.
+; A common case for this setup is when using the result of an integer reduction intrinsic.
+
+; SCVTF
+
+define half @scvtf_f16i32(<4 x i32> %x) {
+; CHECK-LABEL: scvtf_f16i32:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addv s0, v0.4s
+; CHECK-NEXT:    scvtf h0, s0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: scvtf_f16i32:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addv s0, v0.4s
+; CHECK-NO-FPRCVT-NEXT:    scvtf s0, s0
+; CHECK-NO-FPRCVT-NEXT:    fcvt h0, s0
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addv = tail call i32 @llvm.aarch64.neon.saddv.i32.v4i32(<4 x i32> %x)
+ %conv = sitofp i32 %addv to half
+ ret half %conv
+}
+
+define double @scvtf_f64i32(<4 x i32> %x) {
+; CHECK-LABEL: scvtf_f64i32:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addv s0, v0.4s
+; CHECK-NEXT:    scvtf d0, s0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: scvtf_f64i32:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addv s0, v0.4s
+; CHECK-NO-FPRCVT-NEXT:    fmov w8, s0
+; CHECK-NO-FPRCVT-NEXT:    scvtf d0, w8
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addv = tail call i32 @llvm.aarch64.neon.saddv.i32.v4i32(<4 x i32> %x)
+ %conv = sitofp i32 %addv to double
+ ret double %conv
+}
+
+define half @scvtf_f16i64(<2 x i64> %x) {
+; CHECK-LABEL: scvtf_f16i64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addp d0, v0.2d
+; CHECK-NEXT:    scvtf h0, d0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: scvtf_f16i64:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addp d0, v0.2d
+; CHECK-NO-FPRCVT-NEXT:    fmov x8, d0
+; CHECK-NO-FPRCVT-NEXT:    scvtf s0, x8
+; CHECK-NO-FPRCVT-NEXT:    fcvt h0, s0
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addp = tail call i64 @llvm.aarch64.neon.saddv.i64.v2i64(<2 x i64> %x)
+ %conv = sitofp i64 %addp to half
+ ret half %conv
+}
+
+define float @scvtf_f32i64(<2 x i64> %x) {
+; CHECK-LABEL: scvtf_f32i64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addp d0, v0.2d
+; CHECK-NEXT:    scvtf s0, d0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: scvtf_f32i64:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addp d0, v0.2d
+; CHECK-NO-FPRCVT-NEXT:    fmov x8, d0
+; CHECK-NO-FPRCVT-NEXT:    scvtf s0, x8
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addp = tail call i64 @llvm.aarch64.neon.saddv.i64.v2i64(<2 x i64> %x)
+ %conv = sitofp i64 %addp to float
+ ret float %conv
+}
+
+; UCVTF
+
+define half @ucvtf_f16i32(<4 x i32> %x) {
+; CHECK-LABEL: ucvtf_f16i32:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addv s0, v0.4s
+; CHECK-NEXT:    ucvtf h0, s0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: ucvtf_f16i32:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addv s0, v0.4s
+; CHECK-NO-FPRCVT-NEXT:    ucvtf s0, s0
+; CHECK-NO-FPRCVT-NEXT:    fcvt h0, s0
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addv = tail call i32 @llvm.aarch64.neon.uaddv.i32.v4i32(<4 x i32> %x)
+ %conv = uitofp i32 %addv to half
+ ret half %conv
+}
+
+define double @ucvtf_f64i32(<4 x i32> %x) {
+; CHECK-LABEL: ucvtf_f64i32:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addv s0, v0.4s
+; CHECK-NEXT:    ucvtf d0, s0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: ucvtf_f64i32:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addv s0, v0.4s
+; CHECK-NO-FPRCVT-NEXT:    fmov w8, s0
+; CHECK-NO-FPRCVT-NEXT:    ucvtf d0, w8
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addv = tail call i32 @llvm.aarch64.neon.uaddv.i32.v4i32(<4 x i32> %x)
+ %conv = uitofp i32 %addv to double
+ ret double %conv
+}
+
+define half @ucvtf_f16i64(<2 x i64> %x) {
+; CHECK-LABEL: ucvtf_f16i64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addp d0, v0.2d
+; CHECK-NEXT:    ucvtf h0, d0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: ucvtf_f16i64:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addp d0, v0.2d
+; CHECK-NO-FPRCVT-NEXT:    fmov x8, d0
+; CHECK-NO-FPRCVT-NEXT:    ucvtf s0, x8
+; CHECK-NO-FPRCVT-NEXT:    fcvt h0, s0
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addp = tail call i64 @llvm.aarch64.neon.uaddv.i64.v2i64(<2 x i64> %x)
+ %conv = uitofp i64 %addp to half
+ ret half %conv
+}
+
+define float @ucvtf_f32i64(<2 x i64> %x) {
+; CHECK-LABEL: ucvtf_f32i64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    addp d0, v0.2d
+; CHECK-NEXT:    ucvtf s0, d0
+; CHECK-NEXT:    ret
+;
+; CHECK-NO-FPRCVT-LABEL: ucvtf_f32i64:
+; CHECK-NO-FPRCVT:       // %bb.0:
+; CHECK-NO-FPRCVT-NEXT:    addp d0, v0.2d
+; CHECK-NO-FPRCVT-NEXT:    fmov x8, d0
+; CHECK-NO-FPRCVT-NEXT:    ucvtf s0, x8
+; CHECK-NO-FPRCVT-NEXT:    ret
+ %addp = tail call i64 @llvm.aarch64.neon.uaddv.i64.v2i64(<2 x i64> %x)
+ %conv = uitofp i64 %addp to float
+ ret float %conv
+}

@jthackray self-requested a review January 21, 2025 18:00
@jthackray (Contributor) left a comment

LGTM, tests added for when +fprcvt is both present and absent.

@CarolineConcatto (Contributor) left a comment

Hi Virginia,
Thank you for the patch, I left some comments.
It would be interesting to add some more tests as well.

; CHECK-NO-FPRCVT: // %bb.0:
; CHECK-NO-FPRCVT-NEXT: addv s0, v0.4s
; CHECK-NO-FPRCVT-NEXT: scvtf s0, s0
; CHECK-NO-FPRCVT-NEXT: fcvt h0, s0
Contributor

It would be nice to have tests using extractelement directly, without the .neon.saddv intrinsic.
Something like:

define half @scvtf_f16i32_v(<4 x i32> %x) {
  %extract = extractelement <4 x i32> %x, i64 0
  %conv = sitofp i32 %extract to half
  ret half %conv
}

I believe at the moment we cannot do anything for this:

define half @scvtf_f16i32_s(i32 %x) {
  %conv = sitofp i32 %x to half
  ret half %conv
}

Is it possible to add a pattern for that too?

Contributor Author

Ok, I will replace the .neon.saddv lines in the tests; I agree that it will make them clearer.

When I add these patterns it still uses the old versions of scvtf, as it prioritises them (I think the priority is just the order they appear in the file for now):

def : Pat<(f16 (op (i32 FPR32:$Rn))),
          (!cast<Instruction>(NAME # HSr) $Rn)>;

note: op resolves to any_{u,s}int_to_fp


def : Pat<(f16 (any_fpround (f32 (op (i32 FPR32:$Rn))))),
(!cast<Instruction>(NAME # HSr) $Rn)>;
def : Pat<(f64 (op (i32 (extractelt (v4i32 V128:$Rn), (i64 0))))),
Contributor

We need to add a test for when the lane is not zero.

Contributor Author

I have tried to change the pattern to accept any lane index but haven't been successful. I also haven't found any examples of patterns which use this; most patterns I've seen with extractelt use (i64 0).
When I try adding a test with %extract = extractelement <4 x i32> %x, i64 1, I get this:

mov w8, v0.s[1]
scvtf h0, w8
ret

So it uses the old version, as it has loaded lane 1 into a GPR.

@SpencerAbson (Contributor) Jan 23, 2025

Hi, I think you are right to restrict this to lane zero.

The benefit of the codegen here is that we do not actually have to perform an explicit vector extraction (the use of UMOV above) because we know that the result of extractelt can be referred to by the scalar FPR subregister, which happens to be a valid operand for these new instructions. If we performed the extraction, we may as well use the SCVTF/UCVTF that operate on GPRs (as above). It would be invalid to apply this reasoning when extracting anything but the least-significant/lowest element.

I think what Carol might be trying to say is that, once the tests have been changed to use extractelement rather than the reduction intrinsics, we should add negative tests that show the pattern does not apply when the index argument to extractelt is anything other than zero. @CarolineConcatto please correct me if I've misunderstood.

This pattern applies well to reduction intrinsics because they are actually modeled as returning a vector, then immediately extracting the bottom element! (See getReductionSDNode).

Thanks again for all your work.
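
For concreteness, here is a minimal positive/negative pair along the lines described above. This is an illustrative sketch, not code from the patch: the function names are invented, and the expected assembly assumes +fprcvt plus the extractelt patterns added in this PR.

; Lane 0: the i64 already sits in the low 64 bits of the SIMD/FP register,
; so the FPR-to-FPR form can be selected directly, e.g. scvtf s0, d0.
define float @scvtf_f32i64_lane0(<2 x i64> %x) {
  %extract = extractelement <2 x i64> %x, i64 0
  %conv = sitofp i64 %extract to float
  ret float %conv
}

; Lane 1: the element must first be moved out of the vector, so the new
; pattern should not fire; expect something like mov x8, v0.d[1] followed
; by scvtf s0, x8.
define float @scvtf_f32i64_lane1(<2 x i64> %x) {
  %extract = extractelement <2 x i64> %x, i64 1
  %conv = sitofp i64 %extract to float
  ret float %conv
}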

Contributor

Yes, you are correct, Spencer.

I did not mean for you to change the pattern; I meant for you to add tests in LLVM IR with a lane different from zero, so we can see that the pattern does not apply.

Sorry @virginia-cangelosi for not being clear on what I meant.

Contributor Author

No problem, thank you for your comments. I have implemented the changes; hopefully this is what you were aiming for. I agree that it's a good idea to show the negative tests, as you both said.

@@ -0,0 +1,159 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc -mattr=+neon,+fprcvt -verify-machineinstrs %s -o - | FileCheck %s
Collaborator

Can you add +fullfp16 to the +fprcvt run line? I think it might be implied by the architecture revision, or at least always be present in practice, but it isn't modelled explicitly. That will hopefully help regularize the patterns.

Contributor Author

Adding this removes the fp_round, so making all the patterns follow the same style works. I will push the changes once I have added Caroline's changes. Thank you for spotting that.
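
For illustration, here is a sketch of what the regularized f16 patterns might look like once +fullfp16 removes the intermediate fp_round, assuming they adopt the same extractelt shape as the f64/f32 patterns in the diff above. This is a guess at the final form, not the merged code:

  // Hypothetical regularized f16 patterns (assumed shape, not verbatim):
  def : Pat<(f16 (op (i32 (extractelt (v4i32 V128:$Rn), (i64 0))))),
          (!cast<Instruction>(NAME # HSr) (EXTRACT_SUBREG $Rn, ssub))>;
  def : Pat<(f16 (op (i64 (extractelt (v2i64 V128:$Rn), (i64 0))))),
          (!cast<Instruction>(NAME # HDr) (EXTRACT_SUBREG $Rn, dsub))>;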

@Lukacma self-requested a review January 23, 2025 12:09
@@ -5487,7 +5487,7 @@ multiclass IntegerToFP<bits<2> rmode, bits<3> opcode, string asm, SDPatternOpera
}
}

-multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPatternOperator node = null_frag> {
+multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPatternOperator op, SDPatternOperator node = null_frag> {
Contributor

I believe we can use node and we don't need to add SDPatternOperator op.
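
In other words, a minimal sketch of this suggestion would keep the original signature and thread the existing default parameter into the new patterns. Only one pattern is shown; the others would change the same way. This is a sketch of the reviewer's idea, not the merged code:

multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm,
                                 SDPatternOperator node = null_frag> {
  // ... HSr/DSr/HDr/SDr definitions unchanged ...

  def : Pat<(f16 (any_fpround (f32 (node (i32 FPR32:$Rn))))),
          (!cast<Instruction>(NAME # HSr) $Rn)>;
}

// The instantiations would then pass the operator as node:
defm SCVTF : IntegerToFPSIMDScalar<0b11, 0b100, "scvtf", any_sint_to_fp>;
defm UCVTF : IntegerToFPSIMDScalar<0b11, 0b101, "ucvtf", any_uint_to_fp>;

Note that node also feeds the BaseIntegerToFPUnscaled definitions inside the multiclass, so with a non-null node those instructions would carry selection patterns of their own as well.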

@@ -5511,6 +5511,18 @@ multiclass IntegerToFPSIMDScalar<bits<2> rmode, bits<3> opcode, string asm, SDPa
let Inst{31} = 1; // 64-bit FPR flag
let Inst{23-22} = 0b00; // 32-bit FPR flag
}

def : Pat<(f16 (op (i32 FPR32:$Rn))),
Contributor

Why do we need only this size? Is there no need for f16-i64, f32-i64 or f64-i32?

Contributor Author

Sorry, this was supposed to be removed in the commit as it is unused.

@CarolineConcatto (Contributor) left a comment

Hi Virginia,
The patch is almost there.
I just think that we should add more tests for the cvt.

%extract = extractelement <2 x i64> %x, i64 1
%conv = uitofp i64 %extract to float
ret float %conv
}
Contributor

Can we also add tests without the extract, like this one:

define <1 x half> @scvtf_f16i32_1(<1 x i32> %x) {
 %conv = sitofp <1 x i32> %x to <1 x half>
 ret <1 x half> %conv
}

I think this will force the compiler to use the FPR register, and then it can test
[(set (dvt dstType:$Rd), (node srcType:$Rn))]>, from line 5387, which could now also be triggered when using any_sint_to_fp.

Just in case:
I did create these tests locally and noticed that converting between 32 bits and 64 bits does not use the new SCVTF as expected. But I don't think you need to fix that in this patch.
For instance:

define <1 x float> @scvtf_f32i64(<1 x i64> %x) {
 %conv = sitofp <1 x i64> %x to <1 x float>
 ret <1 x float> %conv
}

It is using:

 scvtf v0.2d, v0.2d
fcvtn v0.2s, v0.2d

but it should be:

 scvtf s0, d0

@CarolineConcatto (Contributor) left a comment

Thank you @virginia-cangelosi,

LGTM!

@CarolineConcatto merged commit 875e014 into llvm:main Feb 6, 2025
8 checks passed

github-actions bot commented Feb 6, 2025

@virginia-cangelosi Congratulations on having your first Pull Request (PR) merged into the LLVM Project!

Your changes will be combined with recent changes from other authors, then tested by our build bots. If there is a problem with a build, you may receive a report in an email or a comment on this PR.

Please check whether problems have been caused by your change specifically, as the builds can include changes from many authors. It is not uncommon for your change to be included in a build that fails due to someone else's changes, or infrastructure issues.

How to do this, and the rest of the post-merge process, is covered in detail here.

If your change does cause a problem, it may be reverted, or you can revert it yourself. This is a normal part of LLVM development. You can fix your changes and open a new PR to merge them again.

If you don't get any reports, no action is required from you. Your changes are working as expected, well done!

Icohedron pushed a commit to Icohedron/llvm-project that referenced this pull request Feb 11, 2025
[AArch64] Codegen for new SCVTF/UCVTF variants (FEAT_FPRCVT) (#123767)

Adds patterns of new SCVTF/UCVTF instructions to tablegen, with
associated test .ll file.