[InstCombine] Do not simplify lshr/shl arg if it is part of fshl rotate pattern. #73441
Conversation
The matchFunnelShift function performed the pattern matching and then created the fshl/fshr instruction if needed. This patch moves the pattern-matching code into a new function, convertShlOrLShrToFShlOrFShr, so it can be reused by other optimizations.
@llvm/pr-subscribers-llvm-transforms

Author: None (quic-eikansh)

Changes

An fshl/fshr whose first two arguments are the same gets lowered to a target-specific rotate. But depending on the uses, one of the arguments can get simplified, resulting in different arguments that perform an equivalent operation. This patch prevents the simplification of the arguments of lshr/shl if they are part of an fshl/fshr pattern.

Full diff: https://github.com/llvm/llvm-project/pull/73441.diff

3 Files Affected:
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
index 02881109f17d29f..fd4b416ec87922f 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
@@ -2706,9 +2706,8 @@ Instruction *InstCombinerImpl::matchBSwapOrBitReverse(Instruction &I,
return LastInst;
}
-/// Match UB-safe variants of the funnel shift intrinsic.
-static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
- const DominatorTree &DT) {
+std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
+InstCombinerImpl::convertShlOrLShrToFShlOrFShr(Instruction &Or) {
// TODO: Can we reduce the code duplication between this and the related
// rotate matching code under visitSelect and visitTrunc?
unsigned Width = Or.getType()->getScalarSizeInBits();
@@ -2716,7 +2715,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
Instruction *Or0, *Or1;
if (!match(Or.getOperand(0), m_Instruction(Or0)) ||
!match(Or.getOperand(1), m_Instruction(Or1)))
- return nullptr;
+ return std::nullopt;
bool IsFshl = true; // Sub on LSHR.
SmallVector<Value *, 3> FShiftArgs;
@@ -2730,7 +2729,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
!match(Or1,
m_OneUse(m_LogicalShift(m_Value(ShVal1), m_Value(ShAmt1)))) ||
Or0->getOpcode() == Or1->getOpcode())
- return nullptr;
+ return std::nullopt;
// Canonicalize to or(shl(ShVal0, ShAmt0), lshr(ShVal1, ShAmt1)).
if (Or0->getOpcode() == BinaryOperator::LShr) {
@@ -2766,7 +2765,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
// might remove it after this fold). This still doesn't guarantee that the
// final codegen will match this original pattern.
if (match(R, m_OneUse(m_Sub(m_SpecificInt(Width), m_Specific(L))))) {
- KnownBits KnownL = IC.computeKnownBits(L, /*Depth*/ 0, &Or);
+ KnownBits KnownL = computeKnownBits(L, /*Depth*/ 0, &Or);
return KnownL.getMaxValue().ult(Width) ? L : nullptr;
}
@@ -2810,7 +2809,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
IsFshl = false; // Sub on SHL.
}
if (!ShAmt)
- return nullptr;
+ return std::nullopt;
FShiftArgs = {ShVal0, ShVal1, ShAmt};
} else if (isa<ZExtInst>(Or0) || isa<ZExtInst>(Or1)) {
@@ -2832,18 +2831,18 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
const APInt *ZextHighShlAmt;
if (!match(Or0,
m_OneUse(m_Shl(m_Value(ZextHigh), m_APInt(ZextHighShlAmt)))))
- return nullptr;
+ return std::nullopt;
if (!match(Or1, m_ZExt(m_Value(Low))) ||
!match(ZextHigh, m_ZExt(m_Value(High))))
- return nullptr;
+ return std::nullopt;
unsigned HighSize = High->getType()->getScalarSizeInBits();
unsigned LowSize = Low->getType()->getScalarSizeInBits();
// Make sure High does not overlap with Low and most significant bits of
// High aren't shifted out.
if (ZextHighShlAmt->ult(LowSize) || ZextHighShlAmt->ugt(Width - HighSize))
- return nullptr;
+ return std::nullopt;
for (User *U : ZextHigh->users()) {
Value *X, *Y;
@@ -2874,11 +2873,22 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
}
if (FShiftArgs.empty())
- return nullptr;
+ return std::nullopt;
Intrinsic::ID IID = IsFshl ? Intrinsic::fshl : Intrinsic::fshr;
- Function *F = Intrinsic::getDeclaration(Or.getModule(), IID, Or.getType());
- return CallInst::Create(F, FShiftArgs);
+ return std::make_tuple(IID, FShiftArgs);
+}
+
+/// Match UB-safe variants of the funnel shift intrinsic.
+static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
+ const DominatorTree &DT) {
+ if (auto Opt = IC.convertShlOrLShrToFShlOrFShr(Or)) {
+ auto [IID, FShiftArgs] = *Opt;
+ Function *F = Intrinsic::getDeclaration(Or.getModule(), IID, Or.getType());
+ return CallInst::Create(F, FShiftArgs);
+ }
+
+ return nullptr;
}
/// Attempt to combine or(zext(x),shl(zext(y),bw/2) concat packing patterns.
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
index 0bbb22be71569f6..303d02cc24fc9d3 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
+++ b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
@@ -236,6 +236,9 @@ class LLVM_LIBRARY_VISIBILITY InstCombinerImpl final
return getLosslessTrunc(C, TruncTy, Instruction::SExt);
}
+ std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
+ convertShlOrLShrToFShlOrFShr(Instruction &Or);
+
private:
bool annotateAnyAllocSite(CallBase &Call, const TargetLibraryInfo *TLI);
bool isDesirableIntType(unsigned BitWidth) const;
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
index fa076098d63cde5..518fc84a6cca013 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
@@ -610,6 +610,19 @@ Value *InstCombinerImpl::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
DemandedMask, Known))
return R;
+ // Do not simplify if shl is part of fshl rotate pattern
+ if (I->hasOneUse()) {
+ auto *Inst = dyn_cast<Instruction>(I->user_back());
+ if (Inst && Inst->getOpcode() == BinaryOperator::Or) {
+ if (auto Opt = convertShlOrLShrToFShlOrFShr(*Inst)) {
+ auto [IID, FShiftArgs] = *Opt;
+ if ((IID == Intrinsic::fshl || IID == Intrinsic::fshr) &&
+ FShiftArgs[0] == FShiftArgs[1])
+ return nullptr;
+ }
+ }
+ }
+
// TODO: If we only want bits that already match the signbit then we don't
// need to shift.
@@ -670,6 +683,19 @@ Value *InstCombinerImpl::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
if (match(I->getOperand(1), m_APInt(SA))) {
uint64_t ShiftAmt = SA->getLimitedValue(BitWidth-1);
+ // Do not simplify if lshr is part of fshl rotate pattern
+ if (I->hasOneUse()) {
+ auto *Inst = dyn_cast<Instruction>(I->user_back());
+ if (Inst && Inst->getOpcode() == BinaryOperator::Or) {
+ if (auto Opt = convertShlOrLShrToFShlOrFShr(*Inst)) {
+ auto [IID, FShiftArgs] = *Opt;
+ if ((IID == Intrinsic::fshl || IID == Intrinsic::fshr) &&
+ FShiftArgs[0] == FShiftArgs[1])
+ return nullptr;
+ }
+ }
+ }
+
// If we are just demanding the shifted sign bit and below, then this can
// be treated as an ASHR in disguise.
if (DemandedMask.countl_zero() >= ShiftAmt) {
This pull request was created to address the reviews in #66115.
I would expect to see test differences in the commit adding the new restriction.
I didn't get what you meant by this. Should I have the code changes and tests in the same commit?
Yes, otherwise the commit with the impl will not be passing (and it makes review more difficult, as it's hard to see the proper side effects of a change).
I have squashed the changes and tests into one commit. @goldsteinn
That's not really ideal either... then it's difficult to review the test changes.

Commit 1: tests (w/ checks) before the implementation.

That way, in commit 2 we can see what actual changes the implementation is causing.
[InstCombine] Do not simplify lshr/shl arg if it is part of fshl rotate pattern.

An fshl/fshr whose first two arguments are the same gets lowered to a target-specific rotate. But depending on the uses, one of the arguments can get simplified, resulting in different arguments that perform an equivalent operation. This patch prevents the simplification of the arguments of lshr/shl if they are part of the fshl pattern.
@goldsteinn I have made the changes as you suggested.
@goldsteinn Did you get a chance to look at it?
/// Match UB-safe variants of the funnel shift intrinsic.
static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
                                     const DominatorTree &DT) {
std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
Use std::pair instead?
Does std::pair have an advantage over std::tuple? I see std::tuple used in the codebase even for 2 elements. I have addressed the other 2 reviews.
I agree that we should use std::pair instead of std::tuple if there are only two elements.
This patch emits `ROTL(Cond, BitWidth - Shift)` directly in `ReduceSwitchRange`. This should give better codegen because `SimplifyDemandedBits` will break the rotation patterns in the original form. See also #73441 and the IR diff https://github.com/dtcxzyw/llvm-opt-benchmark/pull/115/files. This patch should cover most of cases handled by #73441.
LGTM. Thanks!
Please wait for additional approval from other reviewers.
Thanks
Ping. |
Thanks @nikic @efriedma-quic