
DAG: Avoid introducing stack usage in vector->int bitcast int op promotion #125636


Merged

Conversation

arsenm (Contributor) commented Feb 4, 2025

Avoids stack usage in the v5i32 to i160 case for AMDGPU, which appears
in fat pointer lowering.

arsenm added the llvm:SelectionDAG label Feb 4, 2025 — with Graphite App
arsenm changed the title from "AMDGPU: Add baseline tests for some bitcast lowering" to "DAG: Avoid introducing stack usage in vector->int bitcast int op promotion" Feb 4, 2025
arsenm marked this pull request as ready for review February 4, 2025 05:52
llvmbot (Member) commented Feb 4, 2025

@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-llvm-selectiondag

Author: Matt Arsenault (arsenm)

Changes

Avoids stack usage in the v5i32 to i160 case for AMDGPU, which appears
in fat pointer lowering.
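The widening check the patch adds to PromoteIntRes_BITCAST can be modeled with a short Python sketch (the 256-bit promoted width used for i160 below is an assumption for illustration; the actual promoted type is target-dependent):

```python
def widened_elt_count(elt_bits, promoted_bits):
    """Mirror of the patch's check: the promoted integer width must be an
    exact multiple of the vector element width (hasKnownScalarFactor);
    otherwise legalization falls back to the stack store/load path."""
    if promoted_bits % elt_bits != 0:
        return None                       # no exact factor -> stack path
    return promoted_bits // elt_bits      # NumEltsWithPadding

# v5i32 bitcast to i160: assuming i160 promotes to a 256-bit integer,
# the vector is padded from 5 to 8 elements with undef (INSERT_SUBVECTOR
# into an undef v8i32 at index 0) and bitcast directly -- no stack slot.
assert widened_elt_count(32, 256) == 8
assert widened_elt_count(32, 160) == 5      # already an exact fit
assert widened_elt_count(48, 256) is None   # 256 is not a multiple of 48
```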


Patch is 79.39 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125636.diff

5 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp (+21)
  • (added) llvm/test/CodeGen/AMDGPU/bitcast_vector_bigint.ll (+511)
  • (modified) llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll (-9)
  • (modified) llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-lastuse-metadata.ll (+104-260)
  • (modified) llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-nontemporal-metadata.ll (+230-437)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 625052be657ca0..95fb8b406e51bf 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -566,6 +566,27 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BITCAST(SDNode *N) {
     }
   }
 
+  if (!NOutVT.isVector() && InOp.getValueType().isVector()) {
+    // Pad the vector operand with undef and cast to a wider integer.
+    EVT EltVT = InOp.getValueType().getVectorElementType();
+    TypeSize EltSize = EltVT.getSizeInBits();
+    TypeSize OutSize = NOutVT.getSizeInBits();
+
+    if (OutSize.hasKnownScalarFactor(EltSize)) {
+      unsigned NumEltsWithPadding = OutSize.getKnownScalarFactor(EltSize);
+      EVT WideVecVT =
+          EVT::getVectorVT(*DAG.getContext(), EltVT, NumEltsWithPadding);
+
+      if (isTypeLegal(WideVecVT)) {
+        SDValue Inserted = DAG.getNode(ISD::INSERT_SUBVECTOR, dl, WideVecVT,
+                                       DAG.getUNDEF(WideVecVT), InOp,
+                                       DAG.getVectorIdxConstant(0, dl));
+
+        return DAG.getNode(ISD::BITCAST, dl, NOutVT, Inserted);
+      }
+    }
+  }
+
   return DAG.getNode(ISD::ANY_EXTEND, dl, NOutVT,
                      CreateStackStoreLoad(InOp, OutVT));
 }
diff --git a/llvm/test/CodeGen/AMDGPU/bitcast_vector_bigint.ll b/llvm/test/CodeGen/AMDGPU/bitcast_vector_bigint.ll
new file mode 100644
index 00000000000000..ab89bb293f6e6e
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/bitcast_vector_bigint.ll
@@ -0,0 +1,511 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 < %s | FileCheck -check-prefix=GFX9 %s
+; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1200 < %s | FileCheck -check-prefix=GFX12 %s
+
+; Make sure stack use isn't introduced for these bitcasts.
+
+define i160 @bitcast_v5i32_to_i160(<5 x i32> %vec) {
+; GFX9-LABEL: bitcast_v5i32_to_i160:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v5i32_to_i160:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <5 x i32> %vec to i160
+  ret i160 %bitcast
+}
+
+define i192 @bitcast_v6i32_to_i192(<6 x i32> %vec) {
+; GFX9-LABEL: bitcast_v6i32_to_i192:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v6i32_to_i192:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <6 x i32> %vec to i192
+  ret i192 %bitcast
+}
+
+define i224 @bitcast_v7i32_to_i224(<7 x i32> %vec) {
+; GFX9-LABEL: bitcast_v7i32_to_i224:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v7i32_to_i224:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <7 x i32> %vec to i224
+  ret i224 %bitcast
+}
+
+define i256 @bitcast_v8i32_to_i256(<8 x i32> %vec) {
+; GFX9-LABEL: bitcast_v8i32_to_i256:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v8i32_to_i256:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <8 x i32> %vec to i256
+  ret i256 %bitcast
+}
+
+define <5 x i32> @bitcast_i160_to_v5i32(i160 %int) {
+; GFX9-LABEL: bitcast_i160_to_v5i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i160_to_v5i32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i160 %int to <5 x i32>
+  ret <5 x i32> %bitcast
+}
+
+define <6 x i32> @bitcast_i192_to_v6i32(i192 %int) {
+; GFX9-LABEL: bitcast_i192_to_v6i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i192_to_v6i32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i192 %int to <6 x i32>
+  ret <6 x i32> %bitcast
+}
+
+define <7 x i32> @bitcast_i224_to_v7i32(i224 %int) {
+; GFX9-LABEL: bitcast_i224_to_v7i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i224_to_v7i32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b32 off, v6, s33 offset:24
+; GFX12-NEXT:    scratch_store_b64 off, v[4:5], s33 offset:16
+; GFX12-NEXT:    scratch_load_b96 v[4:6], off, s33 offset:16
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i224 %int to <7 x i32>
+  ret <7 x i32> %bitcast
+}
+
+define <8 x i32> @bitcast_i256_to_v8i32(i256 %int) {
+; GFX9-LABEL: bitcast_i256_to_v8i32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i256_to_v8i32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i256 %int to <8 x i32>
+  ret <8 x i32> %bitcast
+}
+
+define i192 @bitcast_v3i64_to_i192(<3 x i64> %vec) {
+; GFX9-LABEL: bitcast_v3i64_to_i192:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v3i64_to_i192:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <3 x i64> %vec to i192
+  ret i192 %bitcast
+}
+
+define <3 x i64> @bitcast_i192_to_v3i64(i192 %int) {
+; GFX9-LABEL: bitcast_i192_to_v3i64:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i192_to_v3i64:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i192 %int to <3 x i64>
+  ret <3 x i64> %bitcast
+}
+
+define <10 x i16> @bitcast_i160_to_v10i16(i160 %int) {
+; GFX9-LABEL: bitcast_i160_to_v10i16:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, 0xffff
+; GFX9-NEXT:    v_bfi_b32 v0, s4, v0, v0
+; GFX9-NEXT:    v_bfi_b32 v2, s4, v2, v2
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i160_to_v10i16:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    v_bfi_b32 v0, 0xffff, v0, v0
+; GFX12-NEXT:    v_bfi_b32 v2, 0xffff, v2, v2
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i160 %int to <10 x i16>
+  ret <10 x i16> %bitcast
+}
+
+define i160 @bitcast_v10i16_to_i160(<10 x i16> %vec) {
+; GFX9-LABEL: bitcast_v10i16_to_i160:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v10i16_to_i160:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <10 x i16> %vec to i160
+  ret i160 %bitcast
+}
+
+define i12 @bitcast_v2i6_to_i12(<2 x i6> %vec) {
+; GFX9-LABEL: bitcast_v2i6_to_i12:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    v_lshlrev_b16_e32 v1, 6, v1
+; GFX9-NEXT:    v_and_b32_e32 v0, 63, v0
+; GFX9-NEXT:    v_or_b32_e32 v0, v0, v1
+; GFX9-NEXT:    v_and_b32_e32 v0, 0xfff, v0
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v2i6_to_i12:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    v_lshlrev_b16 v1, 6, v1
+; GFX12-NEXT:    v_and_b32_e32 v0, 63, v0
+; GFX12-NEXT:    s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1)
+; GFX12-NEXT:    v_or_b32_e32 v0, v0, v1
+; GFX12-NEXT:    v_and_b32_e32 v0, 0xfff, v0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <2 x i6> %vec to i12
+  ret i12 %bitcast
+}
+
+define <2 x i6> @bitcast_i12_to_v2i6(i12 %int) {
+; GFX9-LABEL: bitcast_i12_to_v2i6:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    v_and_b32_e32 v2, 63, v0
+; GFX9-NEXT:    v_lshrrev_b16_e32 v0, 6, v0
+; GFX9-NEXT:    v_and_b32_e32 v1, 63, v0
+; GFX9-NEXT:    v_mov_b32_e32 v0, v2
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i12_to_v2i6:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    v_lshrrev_b16 v1, 6, v0
+; GFX12-NEXT:    v_and_b32_e32 v0, 63, v0
+; GFX12-NEXT:    s_delay_alu instid0(VALU_DEP_2)
+; GFX12-NEXT:    v_and_b32_e32 v1, 63, v1
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i12 %int to <2 x i6>
+  ret <2 x i6> %bitcast
+}
+
+define i160 @bitcast_v5f32_to_i160(<5 x float> %vec) {
+; GFX9-LABEL: bitcast_v5f32_to_i160:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v5f32_to_i160:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <5 x float> %vec to i160
+  ret i160 %bitcast
+}
+
+define <5 x float> @bitcast_i160_to_v5f32(i160 %int) {
+; GFX9-LABEL: bitcast_i160_to_v5f32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i160_to_v5f32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i160 %int to <5 x float>
+  ret <5 x float> %bitcast
+}
+
+define <6 x float> @bitcast_i192_to_v6f32(i192 %int) {
+; GFX9-LABEL: bitcast_i192_to_v6f32:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_mov_b32 s4, s33
+; GFX9-NEXT:    s_add_i32 s33, s32, 0x7c0
+; GFX9-NEXT:    s_and_b32 s33, s33, 0xfffff800
+; GFX9-NEXT:    s_mov_b32 s5, s34
+; GFX9-NEXT:    s_mov_b32 s34, s32
+; GFX9-NEXT:    s_addk_i32 s32, 0x1000
+; GFX9-NEXT:    s_mov_b32 s32, s34
+; GFX9-NEXT:    s_mov_b32 s34, s5
+; GFX9-NEXT:    s_mov_b32 s33, s4
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_i192_to_v6f32:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_mov_b32 s0, s33
+; GFX12-NEXT:    s_add_co_i32 s33, s32, 31
+; GFX12-NEXT:    s_mov_b32 s1, s34
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_and_not1_b32 s33, s33, 31
+; GFX12-NEXT:    s_clause 0x1
+; GFX12-NEXT:    scratch_store_b64 off, v[2:3], s33 offset:8
+; GFX12-NEXT:    scratch_store_b64 off, v[0:1], s33
+; GFX12-NEXT:    scratch_load_b128 v[0:3], off, s33
+; GFX12-NEXT:    s_mov_b32 s34, s32
+; GFX12-NEXT:    s_add_co_i32 s32, s32, 64
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_mov_b32 s32, s34
+; GFX12-NEXT:    s_mov_b32 s34, s1
+; GFX12-NEXT:    s_mov_b32 s33, s0
+; GFX12-NEXT:    s_wait_loadcnt 0x0
+; GFX12-NEXT:    s_wait_alu 0xfffe
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast i192 %int to <6 x float>
+  ret <6 x float> %bitcast
+}
+
+define i192 @bitcast_v6f32_to_i192(<6 x float> %vec) {
+; GFX9-LABEL: bitcast_v6f32_to_i192:
+; GFX9:       ; %bb.0:
+; GFX9-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX9-NEXT:    s_setpc_b64 s[30:31]
+;
+; GFX12-LABEL: bitcast_v6f32_to_i192:
+; GFX12:       ; %bb.0:
+; GFX12-NEXT:    s_wait_loadcnt_dscnt 0x0
+; GFX12-NEXT:    s_wait_expcnt 0x0
+; GFX12-NEXT:    s_wait_samplecnt 0x0
+; GFX12-NEXT:    s_wait_bvhcnt 0x0
+; GFX12-NEXT:    s_wait_kmcnt 0x0
+; GFX12-NEXT:    s_setpc_b64 s[30:31]
+  %bitcast = bitcast <6 x float> %vec to i192
+  ret i192 %bitcast
+}
diff --git a/llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll b/llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll
index 7eaa52d89b9b68..5f49e69a58ed87 100644
--- a/llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll
+++ b/llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll
@@ -3091,15 +3091,6 @@ define i160 @load_i160(ptr addrspace(8) inreg %buf) {
 ; SDAG-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
 ; SDAG-NEXT:    buffer_load_dwordx4 v[0:3], off, s[16:19], 0
 ; SDAG-NEXT:    buffer_load_dword v4, off, s[16:19], 0 offset:16
-; SDAG...
[truncated]

github-actions bot commented Feb 4, 2025

⚠️ undef deprecator found issues in your code. ⚠️

You can test this locally with the following command:
git diff -U0 --pickaxe-regex -S '([^a-zA-Z0-9#_-]undef[^a-zA-Z0-9_-]|UndefValue::get)' 077e0c134a31cc16c432ce685458b1de80bfbf84 5b18109cbd776cee017aa1b4e57bee98e3c407cc llvm/test/CodeGen/AMDGPU/bitcast_vector_bigint.ll llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp llvm/test/CodeGen/AMDGPU/buffer-fat-pointers-contents-legalization.ll llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-lastuse-metadata.ll llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-nontemporal-metadata.ll

The following files introduce new uses of undef:

  • llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp

Undef is now deprecated and should only be used in the rare cases where no replacement is possible. For example, a load of uninitialized memory yields undef. You should use poison values for placeholders instead.

In tests, avoid using undef and having tests that trigger undefined behavior. If you need an operand with some unimportant value, you can add a new argument to the function and use that instead.

For example, this is considered a bad practice:

define void @fn() {
  ...
  br i1 undef, ...
}

Please use the following instead:

define void @fn(i1 %cond) {
  ...
  br i1 %cond, ...
}

Please refer to the Undefined Behavior Manual for more information.

DAG.getUNDEF(WideVecVT), InOp,
DAG.getVectorIdxConstant(0, dl));

return DAG.getNode(ISD::BITCAST, dl, NOutVT, Inserted);
A Collaborator commented on the inline diff:
Do we need a shift after this for Big Endian like the TypeWidenVector case on line 540?
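A numeric sketch of the big-endian concern, in plain Python bit arithmetic (not actual LLVM behavior; the 256-bit widened width is an assumption for illustration):

```python
# When v5i32 is padded to v8i32 and bitcast to a 256-bit integer, the
# 160 payload bits land in the low bits on a little-endian target. On a
# big-endian target the payload would occupy the high bits instead, so
# a logical right shift by the padding amount (256 - 160 = 96) would be
# needed to recover the same value -- analogous to the shift in the
# TypeWidenVector case referenced above.
payload = (1 << 160) - 1              # 160 payload bits, all ones
wide_be = payload << (256 - 160)      # BE layout: payload in high bits
assert wide_be >> 96 == payload       # shift right by padding to fix up
```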

topperc (Collaborator) left a comment

LGTM

arsenm (author) commented Feb 4, 2025

Merge activity

  • Feb 4, 4:28 AM EST: A user started a stack merge that includes this pull request via Graphite.
  • Feb 4, 4:30 AM EST: Graphite rebased this pull request as part of a merge.
  • Feb 4, 4:32 AM EST: A user merged this pull request with Graphite.

These introduce stack lowering. Somehow it manages to get cleaned
up in the gfx9 cases (leaving a dead object behind), but gfx12
still has leftover memory instructions.
…otion

Avoids stack usage in the v5i32 to i160 case for AMDGPU, which appears
in fat pointer lowering.
arsenm force-pushed the users/arsenm/dag/promoteintres-bitcast-avoid-stack-vector-to-int branch from 5b18109 to 4b47549 on February 4, 2025 09:30
arsenm merged commit cdca049 into main Feb 4, 2025
4 of 7 checks passed
arsenm deleted the users/arsenm/dag/promoteintres-bitcast-avoid-stack-vector-to-int branch February 4, 2025 09:32
Icohedron pushed a commit to Icohedron/llvm-project that referenced this pull request Feb 11, 2025
…otion

 (llvm#125636)

Avoids stack usage in the v5i32 to i160 case for AMDGPU, which appears
in fat pointer lowering.
Labels
backend:AMDGPU, llvm:SelectionDAG
3 participants