[LLVM][CodeGen][SVE] Prefer NEON instructions when zeroing Z registers. #133929

Merged
merged 1 commit into llvm:main on Apr 3, 2025

Conversation

paulwalker-arm
Collaborator

Several implementations have zero-latency instructions for zeroing registers. To date, no implementation has a dedicated SVE instruction, but we can use the NEON equivalent because it is defined to zero bits 128..VL regardless of the immediate used.

NOTE: The relevant instruction is not available in streaming mode, where the original SVE DUP instruction remains in use.
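
For illustration, a minimal before/after sketch in the spirit of the updated sve-zeroinit.ll test (the function name is invented for this example, an SVE-enabled non-streaming target is assumed, and the assembly lines mirror the pattern of changes in the diff below):

; Returning a zeroed scalable vector now selects the NEON zeroing idiom.
define <vscale x 2 x i64> @zero_z_register() {
  ret <vscale x 2 x i64> zeroinitializer
}
; Before this patch:               mov  z0.d, #0 // =0x0
; After this patch (non-streaming): movi v0.2d, #0000000000000000
; In streaming mode the SVE DUP form is retained, per the note above.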

Comment on lines +7717 to 7735
let Predicates = [HasNEON] in {
def : Pat<(v2i64 immAllZerosV), (MOVIv2d_ns (i32 0))>;
Member

Presumably, these other fixed-vector patterns were already not reachable in streaming mode?

Collaborator Author

Yes, the fixed-length splat/build vector would have been legalised so that it cannot get this far. I figured that since I was protecting the code for scalable vector types, I may as well protect the whole related block.
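
For reference, an abridged sketch of the block under discussion (condensed from the diff below, with most patterns elided), showing that the single HasNEON guard now covers both the pre-existing fixed-vector patterns and the new scalable-vector ones:

let Predicates = [HasNEON] in {
  // Pre-existing fixed-vector zeroing patterns; in streaming mode these are
  // already unreachable because fixed-length vectors are legalised earlier.
  def : Pat<(v2i64 immAllZerosV), (MOVIv2d_ns (i32 0))>;
  // ...

  // New: prefer the NEON zeroing idiom for scalable vectors as well.
  let AddedComplexity = 5 in {
    def : Pat<(nxv2i64 (splat_vector (i64 0))),
              (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
    // ...
  }

  // Pre-existing fixed-vector all-ones patterns.
  def : Pat<(v2i64 immAllOnesV), (MOVIv2d_ns (i32 255))>;
  // ...
} // Predicates = [HasNEON]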

Contributor

@david-arm left a comment

LGTM!

Several implementations have zero-latency instructions to zero
registers. To-date no implementation has a dedicated SVE instruction
but we can use the NEON equivalent because it is defined to zero bits
128..VL regardless of the immediate used.

NOTE: The relevant instruction is not available in streaming mode,
where the original SVE DUP instruction remains in use.

llvmbot commented Apr 3, 2025

@llvm/pr-subscribers-backend-aarch64

Author: Paul Walker (paulwalker-arm)

Changes

Several implementations have zero-latency instructions for zeroing registers. To date, no implementation has a dedicated SVE instruction, but we can use the NEON equivalent because it is defined to zero bits 128..VL regardless of the immediate used.

NOTE: The relevant instruction is not available in streaming mode, where the original SVE DUP instruction remains in use.


Patch is 283.97 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/133929.diff

56 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+19)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-contract.ll (+13-13)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-fast.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-f16-mul-scalable.ll (+7-7)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-f32-mul-scalable.ll (+7-7)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-f64-mul-scalable.ll (+7-7)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-i16-mul-scalable.ll (+7-7)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-i32-mul-scalable.ll (+7-7)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-i64-mul-scalable.ll (+11-11)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-reductions-predicated-scalable.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-reductions-scalable.ll (+9-9)
  • (modified) llvm/test/CodeGen/AArch64/complex-deinterleaving-splat-scalable.ll (+19-18)
  • (modified) llvm/test/CodeGen/AArch64/dag-combine-concat-vectors.ll (+23-24)
  • (modified) llvm/test/CodeGen/AArch64/load-insert-zero.ll (+12-12)
  • (modified) llvm/test/CodeGen/AArch64/sinksplat.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-bf16-int-converts.ll (+68-32)
  • (modified) llvm/test/CodeGen/AArch64/sve-fcmp.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-fcvt.ll (+16-16)
  • (modified) llvm/test/CodeGen/AArch64/sve-fixed-length-shuffles.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/sve-fp-combine.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/sve-implicit-zero-filling.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/sve-int-log.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-int-reduce.ll (+11-11)
  • (modified) llvm/test/CodeGen/AArch64/sve-intrinsics-int-arith-imm.ll (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/sve-intrinsics-int-arith-merging.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/sve-intrinsics-scalar-to-vec.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-intrinsics-shifts-merging.ll (+9-9)
  • (modified) llvm/test/CodeGen/AArch64/sve-knownbits.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-ld1r.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/sve-masked-scatter.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-partial-reduce-dot-product.ll (+14-14)
  • (modified) llvm/test/CodeGen/AArch64/sve-pr92779.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/sve-split-fcvt.ll (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/sve-vector-splat.ll (+11-11)
  • (modified) llvm/test/CodeGen/AArch64/sve-vselect-imm.ll (+18-18)
  • (modified) llvm/test/CodeGen/AArch64/sve-zeroinit.ll (+90-40)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfadd.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmax.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmaxnm.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmin.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfminnm.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmla.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmls.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfmul.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve2p1-intrinsics-bfsub.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-abs-neg.ll (+156-72)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-counts-not.ll (+266-122)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-ext.ll (+134-62)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-fcvt-bfcvt.ll (+79-37)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-fcvtlt-fcvtx.ll (+35-17)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-fcvtzsu.ll (+156-72)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-flogb.ll (+35-17)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-frint-frecpx-fsqrt.ll (+596-272)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-rev.ll (+200-92)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-urecpe-ursqrte-sqabs-sqneg.ll (+112-52)
  • (modified) llvm/test/CodeGen/AArch64/zeroing-forms-uscvtf.ll (+156-72)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index f291589e04c6b..a3b1ae55df028 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -7731,6 +7731,7 @@ def MOVIv2d_ns   : SIMDModifiedImmVectorNoShift<1, 1, 0, 0b1110, V128,
                                                 "movi", ".2d",
                    [(set (v2i64 V128:$Rd), (AArch64movi_edit imm0_255:$imm8))]>;
 
+let Predicates = [HasNEON] in {
 def : Pat<(v2i64 immAllZerosV), (MOVIv2d_ns (i32 0))>;
 def : Pat<(v4i32 immAllZerosV), (MOVIv2d_ns (i32 0))>;
 def : Pat<(v8i16 immAllZerosV), (MOVIv2d_ns (i32 0))>;
@@ -7740,6 +7741,23 @@ def : Pat<(v4f32 immAllZerosV), (MOVIv2d_ns (i32 0))>;
 def : Pat<(v8f16 immAllZerosV), (MOVIv2d_ns (i32 0))>;
 def : Pat<(v8bf16 immAllZerosV), (MOVIv2d_ns (i32 0))>;
 
+// Prefer NEON instructions when zeroing ZPRs because they are potentially zero-latency.
+let AddedComplexity = 5 in {
+def : Pat<(nxv2i64 (splat_vector (i64 0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv4i32 (splat_vector (i32 0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv8i16 (splat_vector (i32 0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv16i8 (splat_vector (i32 0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv2f64 (splat_vector (f64 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv2f32 (splat_vector (f32 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv4f32 (splat_vector (f32 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv2f16 (splat_vector (f16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv4f16 (splat_vector (f16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv8f16 (splat_vector (f16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv2bf16 (splat_vector (bf16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv4bf16 (splat_vector (bf16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+def : Pat<(nxv8bf16 (splat_vector (bf16 fpimm0))), (SUBREG_TO_REG (i32 0), (MOVIv2d_ns (i32 0)), zsub)>;
+}
+
 def : Pat<(v2i64 immAllOnesV), (MOVIv2d_ns (i32 255))>;
 def : Pat<(v4i32 immAllOnesV), (MOVIv2d_ns (i32 255))>;
 def : Pat<(v8i16 immAllOnesV), (MOVIv2d_ns (i32 255))>;
@@ -7760,6 +7778,7 @@ def : Pat<(v1i64 immAllOnesV), (EXTRACT_SUBREG (MOVIv2d_ns (i32 255)), dsub)>;
 def : Pat<(v2i32 immAllOnesV), (EXTRACT_SUBREG (MOVIv2d_ns (i32 255)), dsub)>;
 def : Pat<(v4i16 immAllOnesV), (EXTRACT_SUBREG (MOVIv2d_ns (i32 255)), dsub)>;
 def : Pat<(v8i8  immAllOnesV), (EXTRACT_SUBREG (MOVIv2d_ns (i32 255)), dsub)>;
+}
 
 // EDIT per word & halfword: 2s, 4h, 4s, & 8h
 let isReMaterializable = 1, isAsCheapAsAMove = 1 in
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-contract.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-contract.ll
index 98f5b4c19a9b9..533e831de0df8 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-contract.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-contract.ll
@@ -50,10 +50,10 @@ entry:
 define <vscale x 4 x double> @mul_add_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_add_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z2.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z3.d, z1.d, #0
@@ -101,10 +101,10 @@ entry:
 define <vscale x 4 x double> @mul_sub_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_sub_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z2.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z3.d, z1.d, #0
@@ -152,10 +152,10 @@ entry:
 define <vscale x 4 x double> @mul_conj_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_conj_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z2.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z3.d, z1.d, #0
@@ -204,7 +204,7 @@ define <vscale x 4 x double> @mul_add_rot_mull(<vscale x 4 x double> %a, <vscale
 ; CHECK-LABEL: mul_add_rot_mull:
 ; CHECK:       // %bb.0: // %entry
 ; CHECK-NEXT:    uzp2 z24.d, z4.d, z5.d
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
 ; CHECK-NEXT:    uzp1 z4.d, z4.d, z5.d
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    mov z26.d, z24.d
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-fast.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-fast.ll
index 2fc91125bc0ac..1eed9722f57be 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-fast.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-add-mull-scalable-fast.ll
@@ -41,8 +41,8 @@ entry:
 define <vscale x 4 x double> @mul_add_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_add_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z6.d, z4.d, #0
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z7.d, z5.d, #0
@@ -90,8 +90,8 @@ entry:
 define <vscale x 4 x double> @mul_sub_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_sub_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z6.d, z4.d, #270
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z7.d, z5.d, #270
@@ -139,8 +139,8 @@ entry:
 define <vscale x 4 x double> @mul_conj_mull(<vscale x 4 x double> %a, <vscale x 4 x double> %b, <vscale x 4 x double> %c, <vscale x 4 x double> %d) {
 ; CHECK-LABEL: mul_conj_mull:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z0.d, z2.d, #0
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z1.d, z3.d, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f16-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f16-mul-scalable.ll
index 80934d2cb98c2..a7442cae84c2d 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f16-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f16-mul-scalable.ll
@@ -46,7 +46,7 @@ entry:
 define <vscale x 8 x half> @complex_mul_v8f16(<vscale x 8 x half> %a, <vscale x 8 x half> %b) {
 ; CHECK-LABEL: complex_mul_v8f16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.h, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.h
 ; CHECK-NEXT:    fcmla z2.h, p0/m, z1.h, z0.h, #0
 ; CHECK-NEXT:    fcmla z2.h, p0/m, z1.h, z0.h, #90
@@ -72,8 +72,8 @@ entry:
 define <vscale x 16 x half> @complex_mul_v16f16(<vscale x 16 x half> %a, <vscale x 16 x half> %b) {
 ; CHECK-LABEL: complex_mul_v16f16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.h, #0 // =0x0
-; CHECK-NEXT:    mov z5.h, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.h
 ; CHECK-NEXT:    fcmla z5.h, p0/m, z2.h, z0.h, #0
 ; CHECK-NEXT:    fcmla z4.h, p0/m, z3.h, z1.h, #0
@@ -103,10 +103,10 @@ entry:
 define <vscale x 32 x half> @complex_mul_v32f16(<vscale x 32 x half> %a, <vscale x 32 x half> %b) {
 ; CHECK-LABEL: complex_mul_v32f16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.h, #0 // =0x0
-; CHECK-NEXT:    mov z25.h, #0 // =0x0
-; CHECK-NEXT:    mov z26.h, #0 // =0x0
-; CHECK-NEXT:    mov z27.h, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.h
 ; CHECK-NEXT:    fcmla z24.h, p0/m, z4.h, z0.h, #0
 ; CHECK-NEXT:    fcmla z25.h, p0/m, z5.h, z1.h, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f32-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f32-mul-scalable.ll
index 874b5b538f1fd..3cad74b7f5fc6 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f32-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f32-mul-scalable.ll
@@ -7,7 +7,7 @@ target triple = "aarch64"
 define <vscale x 4 x float> @complex_mul_v4f32(<vscale x 4 x float> %a, <vscale x 4 x float> %b) {
 ; CHECK-LABEL: complex_mul_v4f32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.s, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.s
 ; CHECK-NEXT:    fcmla z2.s, p0/m, z1.s, z0.s, #0
 ; CHECK-NEXT:    fcmla z2.s, p0/m, z1.s, z0.s, #90
@@ -34,8 +34,8 @@ entry:
 define <vscale x 8 x float> @complex_mul_v8f32(<vscale x 8 x float> %a, <vscale x 8 x float> %b) {
 ; CHECK-LABEL: complex_mul_v8f32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.s, #0 // =0x0
-; CHECK-NEXT:    mov z5.s, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.s
 ; CHECK-NEXT:    fcmla z5.s, p0/m, z2.s, z0.s, #0
 ; CHECK-NEXT:    fcmla z4.s, p0/m, z3.s, z1.s, #0
@@ -65,10 +65,10 @@ entry:
 define <vscale x 16 x float> @complex_mul_v16f32(<vscale x 16 x float> %a, <vscale x 16 x float> %b) {
 ; CHECK-LABEL: complex_mul_v16f32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.s, #0 // =0x0
-; CHECK-NEXT:    mov z25.s, #0 // =0x0
-; CHECK-NEXT:    mov z26.s, #0 // =0x0
-; CHECK-NEXT:    mov z27.s, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.s
 ; CHECK-NEXT:    fcmla z24.s, p0/m, z4.s, z0.s, #0
 ; CHECK-NEXT:    fcmla z25.s, p0/m, z5.s, z1.s, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f64-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f64-mul-scalable.ll
index c9a092f52f159..e3d99fa457bbc 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-f64-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-f64-mul-scalable.ll
@@ -7,7 +7,7 @@ target triple = "aarch64"
 define <vscale x 2 x double> @complex_mul_v2f64(<vscale x 2 x double> %a, <vscale x 2 x double> %b) {
 ; CHECK-LABEL: complex_mul_v2f64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.d, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z2.d, p0/m, z1.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z2.d, p0/m, z1.d, z0.d, #90
@@ -34,8 +34,8 @@ entry:
 define <vscale x 4 x double> @complex_mul_v4f64(<vscale x 4 x double> %a, <vscale x 4 x double> %b) {
 ; CHECK-LABEL: complex_mul_v4f64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.d, #0 // =0x0
-; CHECK-NEXT:    mov z5.d, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z5.d, p0/m, z2.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z4.d, p0/m, z3.d, z1.d, #0
@@ -65,10 +65,10 @@ entry:
 define <vscale x 8 x double> @complex_mul_v8f64(<vscale x 8 x double> %a, <vscale x 8 x double> %b) {
 ; CHECK-LABEL: complex_mul_v8f64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    ptrue p0.d
 ; CHECK-NEXT:    fcmla z24.d, p0/m, z4.d, z0.d, #0
 ; CHECK-NEXT:    fcmla z25.d, p0/m, z5.d, z1.d, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i16-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i16-mul-scalable.ll
index 58a0809ee093f..061fd07489284 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i16-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i16-mul-scalable.ll
@@ -46,7 +46,7 @@ entry:
 define <vscale x 8 x i16> @complex_mul_v8i16(<vscale x 8 x i16> %a, <vscale x 8 x i16> %b) {
 ; CHECK-LABEL: complex_mul_v8i16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.h, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z2.h, z1.h, z0.h, #0
 ; CHECK-NEXT:    cmla z2.h, z1.h, z0.h, #90
 ; CHECK-NEXT:    mov z0.d, z2.d
@@ -71,8 +71,8 @@ entry:
 define <vscale x 16 x i16> @complex_mul_v16i16(<vscale x 16 x i16> %a, <vscale x 16 x i16> %b) {
 ; CHECK-LABEL: complex_mul_v16i16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.h, #0 // =0x0
-; CHECK-NEXT:    mov z5.h, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z5.h, z2.h, z0.h, #0
 ; CHECK-NEXT:    cmla z4.h, z3.h, z1.h, #0
 ; CHECK-NEXT:    cmla z5.h, z2.h, z0.h, #90
@@ -101,10 +101,10 @@ entry:
 define <vscale x 32 x i16> @complex_mul_v32i16(<vscale x 32 x i16> %a, <vscale x 32 x i16> %b) {
 ; CHECK-LABEL: complex_mul_v32i16:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.h, #0 // =0x0
-; CHECK-NEXT:    mov z25.h, #0 // =0x0
-; CHECK-NEXT:    mov z26.h, #0 // =0x0
-; CHECK-NEXT:    mov z27.h, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z24.h, z4.h, z0.h, #0
 ; CHECK-NEXT:    cmla z25.h, z5.h, z1.h, #0
 ; CHECK-NEXT:    cmla z27.h, z6.h, z2.h, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i32-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i32-mul-scalable.ll
index 0958c60ed7cb0..52caa3279b927 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i32-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i32-mul-scalable.ll
@@ -7,7 +7,7 @@ target triple = "aarch64"
 define <vscale x 4 x i32> @complex_mul_v4i32(<vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
 ; CHECK-LABEL: complex_mul_v4i32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.s, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z2.s, z1.s, z0.s, #0
 ; CHECK-NEXT:    cmla z2.s, z1.s, z0.s, #90
 ; CHECK-NEXT:    mov z0.d, z2.d
@@ -33,8 +33,8 @@ entry:
 define <vscale x 8 x i32> @complex_mul_v8i32(<vscale x 8 x i32> %a, <vscale x 8 x i32> %b) {
 ; CHECK-LABEL: complex_mul_v8i32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.s, #0 // =0x0
-; CHECK-NEXT:    mov z5.s, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z5.s, z2.s, z0.s, #0
 ; CHECK-NEXT:    cmla z4.s, z3.s, z1.s, #0
 ; CHECK-NEXT:    cmla z5.s, z2.s, z0.s, #90
@@ -63,10 +63,10 @@ entry:
 define <vscale x 16 x i32> @complex_mul_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i32> %b) {
 ; CHECK-LABEL: complex_mul_v16i32:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.s, #0 // =0x0
-; CHECK-NEXT:    mov z25.s, #0 // =0x0
-; CHECK-NEXT:    mov z26.s, #0 // =0x0
-; CHECK-NEXT:    mov z27.s, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z24.s, z4.s, z0.s, #0
 ; CHECK-NEXT:    cmla z25.s, z5.s, z1.s, #0
 ; CHECK-NEXT:    cmla z27.s, z6.s, z2.s, #0
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i64-mul-scalable.ll b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i64-mul-scalable.ll
index 30c06838c81bc..bdc21e7828277 100644
--- a/llvm/test/CodeGen/AArch64/complex-deinterleaving-i64-mul-scalable.ll
+++ b/llvm/test/CodeGen/AArch64/complex-deinterleaving-i64-mul-scalable.ll
@@ -7,7 +7,7 @@ target triple = "aarch64"
 define <vscale x 2 x i64> @complex_mul_v2i64(<vscale x 2 x i64> %a, <vscale x 2 x i64> %b) {
 ; CHECK-LABEL: complex_mul_v2i64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z2.d, #0 // =0x0
+; CHECK-NEXT:    movi v2.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z2.d, z1.d, z0.d, #0
 ; CHECK-NEXT:    cmla z2.d, z1.d, z0.d, #90
 ; CHECK-NEXT:    mov z0.d, z2.d
@@ -33,8 +33,8 @@ entry:
 define <vscale x 4 x i64> @complex_mul_v4i64(<vscale x 4 x i64> %a, <vscale x 4 x i64> %b) {
 ; CHECK-LABEL: complex_mul_v4i64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z4.d, #0 // =0x0
-; CHECK-NEXT:    mov z5.d, #0 // =0x0
+; CHECK-NEXT:    movi v4.2d, #0000000000000000
+; CHECK-NEXT:    movi v5.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z5.d, z2.d, z0.d, #0
 ; CHECK-NEXT:    cmla z4.d, z3.d, z1.d, #0
 ; CHECK-NEXT:    cmla z5.d, z2.d, z0.d, #90
@@ -63,10 +63,10 @@ entry:
 define <vscale x 8 x i64> @complex_mul_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64> %b) {
 ; CHECK-LABEL: complex_mul_v8i64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z24.d, z4.d, z0.d, #0
 ; CHECK-NEXT:    cmla z25.d, z5.d, z1.d, #0
 ; CHECK-NEXT:    cmla z27.d, z6.d, z2.d, #0
@@ -101,10 +101,10 @@ entry:
 define <vscale x 8 x i64> @complex_minus_mul_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64> %b) {
 ; CHECK-LABEL: complex_minus_mul_v8i64:
 ; CHECK:       // %bb.0: // %entry
-; CHECK-NEXT:    mov z24.d, #0 // =0x0
-; CHECK-NEXT:    mov z25.d, #0 // =0x0
-; CHECK-NEXT:    mov z26.d, #0 // =0x0
-; CHECK-NEXT:    mov z27.d, #0 // =0x0
+; CHECK-NEXT:    movi v24.2d, #0000000000000000
+; CHECK-NEXT:    movi v25.2d, #0000000000000000
+; CHECK-NEXT:    movi v26.2d, #0000000000000000
+; CHECK-NEXT:    movi v27.2d, #0000000000000000
 ; CHECK-NEXT:    cmla z24.d, z4.d, z0.d, #270
 ; CHECK-NEXT:    cmla z25.d, z5.d, z1.d, #270
 ; CHECK-NEXT:    cmla z27.d, z6.d, z2.d, #270
diff --git a/llvm/test/CodeGen/AArch64/complex-deinterleaving-reductions-p...
[truncated]

@paulwalker-arm merged commit 41a6bb4 into llvm:main on Apr 3, 2025
10 of 12 checks passed