[AArch64][SVE2] Add pattern for BCAX #77159


Merged: 2 commits into llvm:main on Jan 8, 2024

Conversation

UsmanNadeem
Contributor

Bitwise clear and exclusive or
Add pattern for:
xor x, (and y, not(z)) -> bcax x, y, z

Change-Id: I42cb5e53eff59a12aef00b211400e59fd2112b54

@llvmbot
Member

llvmbot commented Jan 5, 2024

@llvm/pr-subscribers-backend-aarch64

Author: Usman Nadeem (UsmanNadeem)

Changes

Bitwise clear and exclusive or
Add pattern for:
xor x, (and y, not(z)) -> bcax x, y, z

Change-Id: I42cb5e53eff59a12aef00b211400e59fd2112b54
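The pattern relies on `x ^ (y & ~z)` being exactly what BCAX computes, which is also what the two-instruction SVE fallback (BIC followed by EOR) produces. A minimal scalar Python sketch (hypothetical helper names, not part of the patch) checking that the single-instruction form matches the fallback:

```python
import random

MASK = (1 << 64) - 1  # model one 64-bit lane

def bcax(x, y, z):
    # SVE2 BCAX: bitwise clear (y AND NOT z), then exclusive-or with x.
    return (x ^ (y & (~z & MASK))) & MASK

def bic_eor(x, y, z):
    # SVE fallback: BIC computes y & ~z, then EOR folds in x.
    bic = y & (~z & MASK)
    return (bic ^ x) & MASK

random.seed(0)
for _ in range(1000):
    x, y, z = (random.getrandbits(64) for _ in range(3))
    assert bcax(x, y, z) == bic_eor(x, y, z)
```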


Full diff: https://github.com/llvm/llvm-project/pull/77159.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td (+4-1)
  • (added) llvm/test/CodeGen/AArch64/sve2-bcax.ll (+143)
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index 344a153890631e..ee10a7d1c706fc 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -453,6 +453,9 @@ def AArch64msb_m1 : PatFrags<(ops node:$pred, node:$op1, node:$op2, node:$op3),
 def AArch64eor3 : PatFrags<(ops node:$op1, node:$op2, node:$op3),
                            [(int_aarch64_sve_eor3 node:$op1, node:$op2, node:$op3),
                             (xor node:$op1, (xor node:$op2, node:$op3))]>;
+def AArch64bcax : PatFrags<(ops node:$op1, node:$op2, node:$op3),
+                           [(int_aarch64_sve_bcax node:$op1, node:$op2, node:$op3),
+                            (xor node:$op1, (and node:$op2, (vnot node:$op3)))]>;
 
 def AArch64fmla_m1 : PatFrags<(ops node:$pg, node:$za, node:$zn, node:$zm),
                               [(int_aarch64_sve_fmla node:$pg, node:$za, node:$zn, node:$zm),
@@ -3714,7 +3717,7 @@ let Predicates = [HasSVE2orSME] in {
 
   // SVE2 bitwise ternary operations
   defm EOR3_ZZZZ  : sve2_int_bitwise_ternary_op<0b000, "eor3",  AArch64eor3>;
-  defm BCAX_ZZZZ  : sve2_int_bitwise_ternary_op<0b010, "bcax",  int_aarch64_sve_bcax>;
+  defm BCAX_ZZZZ  : sve2_int_bitwise_ternary_op<0b010, "bcax",  AArch64bcax>;
   defm BSL_ZZZZ   : sve2_int_bitwise_ternary_op<0b001, "bsl",   int_aarch64_sve_bsl, AArch64bsp>;
   defm BSL1N_ZZZZ : sve2_int_bitwise_ternary_op<0b011, "bsl1n", int_aarch64_sve_bsl1n>;
   defm BSL2N_ZZZZ : sve2_int_bitwise_ternary_op<0b101, "bsl2n", int_aarch64_sve_bsl2n>;
diff --git a/llvm/test/CodeGen/AArch64/sve2-bcax.ll b/llvm/test/CodeGen/AArch64/sve2-bcax.ll
new file mode 100644
index 00000000000000..cadba288e8e7d9
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/sve2-bcax.ll
@@ -0,0 +1,143 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 4
+; RUN: llc -mtriple=aarch64 -mattr=+sve < %s -o - | FileCheck --check-prefix=SVE %s
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 < %s -o - | FileCheck --check-prefix=SVE2 %s
+
+define <vscale x 2 x i64> @bcax_nxv2i64_1(<vscale x 2 x i64> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2) #0 {
+; SVE-LABEL: bcax_nxv2i64_1:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z1.d, z2.d, z1.d
+; SVE-NEXT:    eor z0.d, z1.d, z0.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv2i64_1:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z0.d, z0.d, z2.d, z1.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 2 x i64> %1, shufflevector (<vscale x 2 x i64> insertelement (<vscale x 2 x i64> poison, i64 -1, i64 0), <vscale x 2 x i64> poison, <vscale x 2 x i32> zeroinitializer)
+  %5 = and <vscale x 2 x i64> %4, %2
+  %6 = xor <vscale x 2 x i64> %5, %0
+  ret <vscale x 2 x i64> %6
+}
+
+define <vscale x 2 x i64> @bcax_nxv2i64_2(<vscale x 2 x i64> %0, <vscale x 2 x i64> %1, <vscale x 2 x i64> %2) #0 {
+; SVE-LABEL: bcax_nxv2i64_2:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z0.d, z0.d, z1.d
+; SVE-NEXT:    eor z0.d, z0.d, z2.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv2i64_2:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z2.d, z2.d, z0.d, z1.d
+; SVE2-NEXT:    mov z0.d, z2.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 2 x i64> %1, shufflevector (<vscale x 2 x i64> insertelement (<vscale x 2 x i64> poison, i64 -1, i64 0), <vscale x 2 x i64> poison, <vscale x 2 x i32> zeroinitializer)
+  %5 = and <vscale x 2 x i64> %4, %0
+  %6 = xor <vscale x 2 x i64> %5, %2
+  ret <vscale x 2 x i64> %6
+}
+
+define <vscale x 4 x i32> @bcax_nxv4i32_1(<vscale x 4 x i32> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2) #0 {
+; SVE-LABEL: bcax_nxv4i32_1:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z1.d, z2.d, z1.d
+; SVE-NEXT:    eor z0.d, z1.d, z0.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv4i32_1:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z0.d, z0.d, z2.d, z1.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 4 x i32> %1, shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 -1, i64 0), <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
+  %5 = and <vscale x 4 x i32> %4, %2
+  %6 = xor <vscale x 4 x i32> %5, %0
+  ret <vscale x 4 x i32> %6
+}
+
+define <vscale x 4 x i32> @bcax_nxv4i32_2(<vscale x 4 x i32> %0, <vscale x 4 x i32> %1, <vscale x 4 x i32> %2) #0 {
+; SVE-LABEL: bcax_nxv4i32_2:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z0.d, z0.d, z1.d
+; SVE-NEXT:    eor z0.d, z0.d, z2.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv4i32_2:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z2.d, z2.d, z0.d, z1.d
+; SVE2-NEXT:    mov z0.d, z2.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 4 x i32> %1, shufflevector (<vscale x 4 x i32> insertelement (<vscale x 4 x i32> poison, i32 -1, i64 0), <vscale x 4 x i32> poison, <vscale x 4 x i32> zeroinitializer)
+  %5 = and <vscale x 4 x i32> %4, %0
+  %6 = xor <vscale x 4 x i32> %5, %2
+  ret <vscale x 4 x i32> %6
+}
+
+define <vscale x 8 x i16> @bcax_nxv8i16_1(<vscale x 8 x i16> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2) #0 {
+; SVE-LABEL: bcax_nxv8i16_1:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z1.d, z2.d, z1.d
+; SVE-NEXT:    eor z0.d, z1.d, z0.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv8i16_1:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z0.d, z0.d, z2.d, z1.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 8 x i16> %1, shufflevector (<vscale x 8 x i16> insertelement (<vscale x 8 x i16> poison, i16 -1, i64 0), <vscale x 8 x i16> poison, <vscale x 8 x i32> zeroinitializer)
+  %5 = and <vscale x 8 x i16> %4, %2
+  %6 = xor <vscale x 8 x i16> %5, %0
+  ret <vscale x 8 x i16> %6
+}
+
+define <vscale x 8 x i16> @bcax_nxv8i16_2(<vscale x 8 x i16> %0, <vscale x 8 x i16> %1, <vscale x 8 x i16> %2) #0 {
+; SVE-LABEL: bcax_nxv8i16_2:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z0.d, z0.d, z1.d
+; SVE-NEXT:    eor z0.d, z0.d, z2.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv8i16_2:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z2.d, z2.d, z0.d, z1.d
+; SVE2-NEXT:    mov z0.d, z2.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 8 x i16> %1, shufflevector (<vscale x 8 x i16> insertelement (<vscale x 8 x i16> poison, i16 -1, i64 0), <vscale x 8 x i16> poison, <vscale x 8 x i32> zeroinitializer)
+  %5 = and <vscale x 8 x i16> %4, %0
+  %6 = xor <vscale x 8 x i16> %5, %2
+  ret <vscale x 8 x i16> %6
+}
+
+define <vscale x 16 x i8> @bcax_nxv16i8_1(<vscale x 16 x i8> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2) #0 {
+; SVE-LABEL: bcax_nxv16i8_1:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z1.d, z2.d, z1.d
+; SVE-NEXT:    eor z0.d, z1.d, z0.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv16i8_1:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z0.d, z0.d, z2.d, z1.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 16 x i8> %1, shufflevector (<vscale x 16 x i8> insertelement (<vscale x 16 x i8> poison, i8 -1, i64 0), <vscale x 16 x i8> poison, <vscale x 16 x i32> zeroinitializer)
+  %5 = and <vscale x 16 x i8> %4, %2
+  %6 = xor <vscale x 16 x i8> %5, %0
+  ret <vscale x 16 x i8> %6
+}
+
+define <vscale x 16 x i8> @bcax_nxv16i8_2(<vscale x 16 x i8> %0, <vscale x 16 x i8> %1, <vscale x 16 x i8> %2) #0 {
+; SVE-LABEL: bcax_nxv16i8_2:
+; SVE:       // %bb.0:
+; SVE-NEXT:    bic z0.d, z0.d, z1.d
+; SVE-NEXT:    eor z0.d, z0.d, z2.d
+; SVE-NEXT:    ret
+;
+; SVE2-LABEL: bcax_nxv16i8_2:
+; SVE2:       // %bb.0:
+; SVE2-NEXT:    bcax z2.d, z2.d, z0.d, z1.d
+; SVE2-NEXT:    mov z0.d, z2.d
+; SVE2-NEXT:    ret
+  %4 = xor <vscale x 16 x i8> %1, shufflevector (<vscale x 16 x i8> insertelement (<vscale x 16 x i8> poison, i8 -1, i64 0), <vscale x 16 x i8> poison, <vscale x 16 x i32> zeroinitializer)
+  %5 = and <vscale x 16 x i8> %4, %0
+  %6 = xor <vscale x 16 x i8> %5, %2
+  ret <vscale x 16 x i8> %6
+}
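The tests above hand the compiler the IR in a different operand order than the PatFrags pattern, e.g. `bcax_nxv2i64_1` computes `((%1 ^ -1) & %2) ^ %0`, which the pattern normalizes to `%0 ^ (%2 & ~%1)` and lowers to `bcax z0.d, z0.d, z2.d, z1.d`. A scalar Python sketch (illustrative only, one 64-bit lane) of that operand mapping:

```python
import random

MASK = (1 << 64) - 1

def bcax(op1, op2, op3):
    # Matches the PatFrags node order: xor op1, (and op2, (vnot op3)).
    return (op1 ^ (op2 & (~op3 & MASK))) & MASK

def ir_bcax_nxv2i64_1(a0, a1, a2):
    # The IR from the first test, lane-wise:
    v4 = (a1 ^ MASK) & MASK   # xor %1, splat(-1)
    v5 = v4 & a2              # and %4, %2
    return (v5 ^ a0) & MASK   # xor %5, %0

# The IR matches bcax with operands (%0, %2, %1), i.e. z0, z2, z1.
random.seed(1)
for _ in range(1000):
    a0, a1, a2 = (random.getrandbits(64) for _ in range(3))
    assert ir_bcax_nxv2i64_1(a0, a1, a2) == bcax(a0, a2, a1)
```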

Collaborator
We're migrating away from ConstantExpr, so would you mind using the new splat shorthand for these tests (i.e. `splat (i64 -1)`)? The created IR will be the same, but the tests will at least be ready for the migration, and they will be easier to read.

@UsmanNadeem UsmanNadeem merged commit ac8b4f8 into llvm:main Jan 8, 2024
justinfargnoli pushed a commit to justinfargnoli/llvm-project that referenced this pull request Jan 28, 2024
Bitwise clear and exclusive or
Add pattern for:
    xor x, (and y, not(z)) -> bcax x, y, z
3 participants