
[X86][CodeGen] Prefer KMOVkk_EVEX over KMOVkk when EGPR is supported #74048

Merged (2 commits) on Dec 2, 2023

23 changes: 14 additions & 9 deletions llvm/lib/Target/X86/X86DomainReassignment.cpp
@@ -625,8 +625,10 @@ void X86DomainReassignment::initConverters() {
createReplacerDstCOPY(X86::MOVZX64rm16,
HasEGPR ? X86::KMOVWkm_EVEX : X86::KMOVWkm);

- createReplacerDstCOPY(X86::MOVZX32rr16, X86::KMOVWkk);
- createReplacerDstCOPY(X86::MOVZX64rr16, X86::KMOVWkk);
+ createReplacerDstCOPY(X86::MOVZX32rr16,
+                       HasEGPR ? X86::KMOVWkk_EVEX : X86::KMOVWkk);
+ createReplacerDstCOPY(X86::MOVZX64rr16,
+                       HasEGPR ? X86::KMOVWkk_EVEX : X86::KMOVWkk);

Collaborator: Can we access and repurpose the EVEX->VEX tables to do this for us?

Contributor Author: Are you suggesting usage like

  createReplacerDstCOPY(X86::MOVZX64rr16,
                        getEVEXVariant(X86::KMOVWkk, /*UseEVEXVariant=*/HasEGPR));
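
For illustration, a helper along those lines might look roughly like the sketch below. getEVEXVariant and its table are hypothetical, not part of this patch, and a real mapping would presumably be TableGen-generated rather than hand-written:

  // Hypothetical sketch only: map a legacy/VEX kmov opcode to its EVEX
  // variant. Table contents are illustrative, not generated.
  static unsigned getEVEXVariant(unsigned Opc, bool UseEVEXVariant) {
    if (!UseEVEXVariant)
      return Opc;
    struct Entry {
      unsigned LegacyOpc, EVEXOpc;
    };
    // Must stay sorted by LegacyOpc so it can be binary-searched.
    static const Entry Table[] = {
        {X86::KMOVBkk, X86::KMOVBkk_EVEX},
        {X86::KMOVDkk, X86::KMOVDkk_EVEX},
        {X86::KMOVQkk, X86::KMOVQkk_EVEX},
        {X86::KMOVWkk, X86::KMOVWkk_EVEX},
    };
    const Entry *I = llvm::lower_bound(
        Table, Opc,
        [](const Entry &E, unsigned O) { return E.LegacyOpc < O; });
    return (I != std::end(Table) && I->LegacyOpc == Opc) ? I->EVEXOpc : Opc;
  }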

Contributor Author: I think it's probably not worth it. First, KMOV may be the only instruction that needs this promotion in C++ code; the other instructions are handled in the .td files via the HasEGPR/NoEGPR predicates. Second, a table lookup would change the complexity from O(1) to O(log N) and increase compile time.

Collaborator: OK, no need to do it. I have found during debugging that tracking down all the places where we switch instruction encodings is not always easy, though. Maybe we should just improve the debug output; the X86DomainReassignment pass, for instance, doesn't do a good job of listing the changes it makes.
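
As a rough sketch of the kind of logging that might help, assuming the pass's existing LLVM_DEBUG/DEBUG_TYPE machinery (the exact message wording here is made up), the createReplacer lambda could report each opcode switch it registers:

  // Hypothetical sketch: make the pass report every opcode conversion,
  // so encoding changes are easy to track in -debug output.
  LLVM_DEBUG(dbgs() << "x86-domain-reassignment: " << TII->getName(From)
                    << " -> " << TII->getName(To) << '\n');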


if (STI->hasDQI()) {
createReplacerDstCOPY(X86::MOVZX16rm8,
@@ -636,9 +638,12 @@ void X86DomainReassignment::initConverters() {
createReplacerDstCOPY(X86::MOVZX64rm8,
HasEGPR ? X86::KMOVBkm_EVEX : X86::KMOVBkm);

- createReplacerDstCOPY(X86::MOVZX16rr8, X86::KMOVBkk);
- createReplacerDstCOPY(X86::MOVZX32rr8, X86::KMOVBkk);
- createReplacerDstCOPY(X86::MOVZX64rr8, X86::KMOVBkk);
+ createReplacerDstCOPY(X86::MOVZX16rr8,
+                       HasEGPR ? X86::KMOVBkk_EVEX : X86::KMOVBkk);
+ createReplacerDstCOPY(X86::MOVZX32rr8,
+                       HasEGPR ? X86::KMOVBkk_EVEX : X86::KMOVBkk);
+ createReplacerDstCOPY(X86::MOVZX64rr8,
+                       HasEGPR ? X86::KMOVBkk_EVEX : X86::KMOVBkk);
}

auto createReplacer = [&](unsigned From, unsigned To) {
@@ -647,7 +652,7 @@

createReplacer(X86::MOV16rm, HasEGPR ? X86::KMOVWkm_EVEX : X86::KMOVWkm);
createReplacer(X86::MOV16mr, HasEGPR ? X86::KMOVWmk_EVEX : X86::KMOVWmk);
- createReplacer(X86::MOV16rr, X86::KMOVWkk);
+ createReplacer(X86::MOV16rr, HasEGPR ? X86::KMOVWkk_EVEX : X86::KMOVWkk);
createReplacer(X86::SHR16ri, X86::KSHIFTRWri);
createReplacer(X86::SHL16ri, X86::KSHIFTLWri);
createReplacer(X86::NOT16r, X86::KNOTWrr);
@@ -662,8 +667,8 @@ void X86DomainReassignment::initConverters() {
createReplacer(X86::MOV32mr, HasEGPR ? X86::KMOVDmk_EVEX : X86::KMOVDmk);
createReplacer(X86::MOV64mr, HasEGPR ? X86::KMOVQmk_EVEX : X86::KMOVQmk);

- createReplacer(X86::MOV32rr, X86::KMOVDkk);
- createReplacer(X86::MOV64rr, X86::KMOVQkk);
+ createReplacer(X86::MOV32rr, HasEGPR ? X86::KMOVDkk_EVEX : X86::KMOVDkk);
+ createReplacer(X86::MOV64rr, HasEGPR ? X86::KMOVQkk_EVEX : X86::KMOVQkk);

createReplacer(X86::SHR32ri, X86::KSHIFTRDri);
createReplacer(X86::SHR64ri, X86::KSHIFTRQri);
@@ -703,7 +708,7 @@ void X86DomainReassignment::initConverters() {

createReplacer(X86::MOV8rm, HasEGPR ? X86::KMOVBkm_EVEX : X86::KMOVBkm);
createReplacer(X86::MOV8mr, HasEGPR ? X86::KMOVBmk_EVEX : X86::KMOVBmk);
- createReplacer(X86::MOV8rr, X86::KMOVBkk);
+ createReplacer(X86::MOV8rr, HasEGPR ? X86::KMOVBkk_EVEX : X86::KMOVBkk);

createReplacer(X86::NOT8r, X86::KNOTBrr);

4 changes: 3 additions & 1 deletion llvm/lib/Target/X86/X86InstrInfo.cpp
@@ -4070,6 +4070,7 @@ void X86InstrInfo::copyPhysReg(MachineBasicBlock &MBB,
// First deal with the normal symmetric copies.
bool HasAVX = Subtarget.hasAVX();
bool HasVLX = Subtarget.hasVLX();
+ bool HasEGPR = Subtarget.hasEGPR();
unsigned Opc = 0;
if (X86::GR64RegClass.contains(DestReg, SrcReg))
Opc = X86::MOV64rr;
@@ -4124,7 +4125,8 @@ void X86InstrInfo::copyPhysReg(MachineBasicBlock &MBB,
// All KMASK RegClasses hold the same k registers, can be tested against
// anyone.
else if (X86::VK16RegClass.contains(DestReg, SrcReg))
- Opc = Subtarget.hasBWI() ? X86::KMOVQkk : X86::KMOVWkk;
+ Opc = Subtarget.hasBWI() ? (HasEGPR ? X86::KMOVQkk_EVEX : X86::KMOVQkk)
+                          : (HasEGPR ? X86::KMOVWkk_EVEX : X86::KMOVWkk);
if (!Opc)
Opc = CopyToFromAsymmetricReg(DestReg, SrcReg, Subtarget);

14 changes: 14 additions & 0 deletions llvm/test/CodeGen/X86/apx/kmov-kk.ll
@@ -0,0 +1,14 @@
; RUN: llc < %s -mtriple=x86_64-unknown -mattr=+avx512f,+egpr -show-mc-encoding | FileCheck --check-prefix=EGPR %s

define <16 x i32> @kmovkk(ptr %base, <16 x i32> %ind, i16 %mask) {
; EGPR: kmovq %k1, %k2 # EVEX TO VEX Compression encoding: [0xc4,0xe1,0xf8,0x90,0xd1]

Contributor: Is it possible to add a test to cover the memory fold case?

Contributor Author: It's not straightforward to me, and I think the memory fold is already tested by ./llvm/test/TableGen/x86-fold-tables.inc.
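
For reference, the generated fold tables pair each register form with its memory form; an entry covering this case would have roughly the following shape (illustrative only, not copied from the generated x86-fold-tables.inc):

  // Illustrative shape of a fold-table entry (register form -> memory form);
  // the flags field is an approximation, not the generated output.
  {X86::KMOVWkk_EVEX, X86::KMOVWkm_EVEX, 0},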

%broadcast.splatinsert = insertelement <16 x ptr> undef, ptr %base, i32 0
%broadcast.splat = shufflevector <16 x ptr> %broadcast.splatinsert, <16 x ptr> undef, <16 x i32> zeroinitializer
%gep.random = getelementptr i32, <16 x ptr> %broadcast.splat, <16 x i32> %ind
%imask = bitcast i16 %mask to <16 x i1>
%gt1 = call <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr> %gep.random, i32 4, <16 x i1> %imask, <16 x i32> undef)
%gt2 = call <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr> %gep.random, i32 4, <16 x i1> %imask, <16 x i32> %gt1)
%res = add <16 x i32> %gt1, %gt2
ret <16 x i32> %res
}
declare <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr>, i32, <16 x i1>, <16 x i32>)