[InstCombine] Do not simplify lshr/shl arg if it is part of fshl rotate pattern. #73441

Closed
wants to merge 5 commits

Conversation

quic-eikansh
Contributor

An fshl/fshr whose first two arguments are the same value gets lowered to a
target-specific rotate. But based on the uses, one of the arguments can get
simplified, leaving two different arguments that perform an equivalent operation, so the rotate pattern is no longer matched.

This patch prevents the simplification of the lshr/shl arguments if they are
part of an fshl rotate pattern.
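
As an illustration (a minimal sketch, not taken from the patch), this is the source-level idiom involved: both shifts read the same value, so InstCombine folds the or of shifts into `fshl(x, x, n)`, which lowers to a rotate.

```c++
#include <cstdint>

// or(shl(x, n), lshr(x, 32 - n)) with equal value operands is matched as
// fshl(x, x, n), i.e. a left rotate by n.
uint32_t rotl32(uint32_t x, uint32_t n) {
  return (x << n) | (x >> (32 - n)); // assumes n is in [1, 31] to avoid UB
}
```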

The matchFunnelShift function was doing both the pattern matching and the creation
of the fshl/fshr instruction. This patch moves the pattern-matching code into a new function, convertShlOrLShrToFShlOrFShr, so it can be reused by other optimizations.
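
A sketch of how the refactored helper is consumed, mirroring the use the patch adds in SimplifyDemandedBits (the variable names here are illustrative):

```c++
// The helper only reports the match: it returns the intrinsic ID plus the
// would-be fshl/fshr arguments without creating the call, so a caller can
// inspect the result before deciding what to do.
if (auto Opt = IC.convertShlOrLShrToFShlOrFShr(OrInst)) {
  auto [IID, FShiftArgs] = *Opt;
  if (FShiftArgs[0] == FShiftArgs[1]) {
    // Equal value operands: this fshl/fshr is a rotate, so leave the
    // shift operands of the or untouched.
  }
}
```
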
@llvmbot
Member

llvmbot commented Nov 26, 2023

@llvm/pr-subscribers-llvm-transforms

Author: None (quic-eikansh)

Changes

An fshl/fshr whose first two arguments are the same value gets lowered to a
target-specific rotate. But based on the uses, one of the arguments can get
simplified, leaving two different arguments that perform an equivalent operation, so the rotate pattern is no longer matched.

This patch prevents the simplification of the lshr/shl arguments if they are
part of an fshl rotate pattern.


Full diff: https://github.com/llvm/llvm-project/pull/73441.diff

3 Files Affected:

  • (modified) llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp (+23-13)
  • (modified) llvm/lib/Transforms/InstCombine/InstCombineInternal.h (+3)
  • (modified) llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp (+26)
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
index 02881109f17d29f..fd4b416ec87922f 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
@@ -2706,9 +2706,8 @@ Instruction *InstCombinerImpl::matchBSwapOrBitReverse(Instruction &I,
   return LastInst;
 }
 
-/// Match UB-safe variants of the funnel shift intrinsic.
-static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
-                                     const DominatorTree &DT) {
+std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
+InstCombinerImpl::convertShlOrLShrToFShlOrFShr(Instruction &Or) {
   // TODO: Can we reduce the code duplication between this and the related
   // rotate matching code under visitSelect and visitTrunc?
   unsigned Width = Or.getType()->getScalarSizeInBits();
@@ -2716,7 +2715,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
   Instruction *Or0, *Or1;
   if (!match(Or.getOperand(0), m_Instruction(Or0)) ||
       !match(Or.getOperand(1), m_Instruction(Or1)))
-    return nullptr;
+    return std::nullopt;
 
   bool IsFshl = true; // Sub on LSHR.
   SmallVector<Value *, 3> FShiftArgs;
@@ -2730,7 +2729,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
         !match(Or1,
                m_OneUse(m_LogicalShift(m_Value(ShVal1), m_Value(ShAmt1)))) ||
         Or0->getOpcode() == Or1->getOpcode())
-      return nullptr;
+      return std::nullopt;
 
     // Canonicalize to or(shl(ShVal0, ShAmt0), lshr(ShVal1, ShAmt1)).
     if (Or0->getOpcode() == BinaryOperator::LShr) {
@@ -2766,7 +2765,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
       // might remove it after this fold). This still doesn't guarantee that the
       // final codegen will match this original pattern.
       if (match(R, m_OneUse(m_Sub(m_SpecificInt(Width), m_Specific(L))))) {
-        KnownBits KnownL = IC.computeKnownBits(L, /*Depth*/ 0, &Or);
+        KnownBits KnownL = computeKnownBits(L, /*Depth*/ 0, &Or);
         return KnownL.getMaxValue().ult(Width) ? L : nullptr;
       }
 
@@ -2810,7 +2809,7 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
       IsFshl = false; // Sub on SHL.
     }
     if (!ShAmt)
-      return nullptr;
+      return std::nullopt;
 
     FShiftArgs = {ShVal0, ShVal1, ShAmt};
   } else if (isa<ZExtInst>(Or0) || isa<ZExtInst>(Or1)) {
@@ -2832,18 +2831,18 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
     const APInt *ZextHighShlAmt;
     if (!match(Or0,
                m_OneUse(m_Shl(m_Value(ZextHigh), m_APInt(ZextHighShlAmt)))))
-      return nullptr;
+      return std::nullopt;
 
     if (!match(Or1, m_ZExt(m_Value(Low))) ||
         !match(ZextHigh, m_ZExt(m_Value(High))))
-      return nullptr;
+      return std::nullopt;
 
     unsigned HighSize = High->getType()->getScalarSizeInBits();
     unsigned LowSize = Low->getType()->getScalarSizeInBits();
     // Make sure High does not overlap with Low and most significant bits of
     // High aren't shifted out.
     if (ZextHighShlAmt->ult(LowSize) || ZextHighShlAmt->ugt(Width - HighSize))
-      return nullptr;
+      return std::nullopt;
 
     for (User *U : ZextHigh->users()) {
       Value *X, *Y;
@@ -2874,11 +2873,22 @@ static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
   }
 
   if (FShiftArgs.empty())
-    return nullptr;
+    return std::nullopt;
 
   Intrinsic::ID IID = IsFshl ? Intrinsic::fshl : Intrinsic::fshr;
-  Function *F = Intrinsic::getDeclaration(Or.getModule(), IID, Or.getType());
-  return CallInst::Create(F, FShiftArgs);
+  return std::make_tuple(IID, FShiftArgs);
+}
+
+/// Match UB-safe variants of the funnel shift intrinsic.
+static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
+                                     const DominatorTree &DT) {
+  if (auto Opt = IC.convertShlOrLShrToFShlOrFShr(Or)) {
+    auto [IID, FShiftArgs] = *Opt;
+    Function *F = Intrinsic::getDeclaration(Or.getModule(), IID, Or.getType());
+    return CallInst::Create(F, FShiftArgs);
+  }
+
+  return nullptr;
 }
 
 /// Attempt to combine or(zext(x),shl(zext(y),bw/2) concat packing patterns.
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
index 0bbb22be71569f6..303d02cc24fc9d3 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
+++ b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h
@@ -236,6 +236,9 @@ class LLVM_LIBRARY_VISIBILITY InstCombinerImpl final
     return getLosslessTrunc(C, TruncTy, Instruction::SExt);
   }
 
+  std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
+  convertShlOrLShrToFShlOrFShr(Instruction &Or);
+
 private:
   bool annotateAnyAllocSite(CallBase &Call, const TargetLibraryInfo *TLI);
   bool isDesirableIntType(unsigned BitWidth) const;
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
index fa076098d63cde5..518fc84a6cca013 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp
@@ -610,6 +610,19 @@ Value *InstCombinerImpl::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
                                                     DemandedMask, Known))
             return R;
 
+      // Do not simplify if shl is part of fshl rotate pattern
+      if (I->hasOneUse()) {
+        auto *Inst = dyn_cast<Instruction>(I->user_back());
+        if (Inst && Inst->getOpcode() == BinaryOperator::Or) {
+          if (auto Opt = convertShlOrLShrToFShlOrFShr(*Inst)) {
+            auto [IID, FShiftArgs] = *Opt;
+            if ((IID == Intrinsic::fshl || IID == Intrinsic::fshr) &&
+                FShiftArgs[0] == FShiftArgs[1])
+              return nullptr;
+          }
+        }
+      }
+
       // TODO: If we only want bits that already match the signbit then we don't
       // need to shift.
 
@@ -670,6 +683,19 @@ Value *InstCombinerImpl::SimplifyDemandedUseBits(Value *V, APInt DemandedMask,
     if (match(I->getOperand(1), m_APInt(SA))) {
       uint64_t ShiftAmt = SA->getLimitedValue(BitWidth-1);
 
+      // Do not simplify if lshr is part of fshl rotate pattern
+      if (I->hasOneUse()) {
+        auto *Inst = dyn_cast<Instruction>(I->user_back());
+        if (Inst && Inst->getOpcode() == BinaryOperator::Or) {
+          if (auto Opt = convertShlOrLShrToFShlOrFShr(*Inst)) {
+            auto [IID, FShiftArgs] = *Opt;
+            if ((IID == Intrinsic::fshl || IID == Intrinsic::fshr) &&
+                FShiftArgs[0] == FShiftArgs[1])
+              return nullptr;
+          }
+        }
+      }
+
       // If we are just demanding the shifted sign bit and below, then this can
       // be treated as an ASHR in disguise.
       if (DemandedMask.countl_zero() >= ShiftAmt) {

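To make the guarded case concrete, here is an editorial sketch (not from the patch) of the kind of code affected: when a user demands only some bits of a rotate, SimplifyDemandedBits could previously rewrite one of the shift operands, so the two sides of the or no longer read the same value and the `fshl(x, x, n)` rotate match fails.

```c++
#include <cstdint>

// Only the low 8 bits of the rotate are demanded. Without the guard,
// demanded-bits simplification can rewrite the operand of one shift,
// turning or(shl(x, 5), lshr(x, 27)) into an or of shifts of two
// different values, which no longer lowers to a rotate instruction.
uint8_t low_byte_of_rotl5(uint32_t x) {
  uint32_t rot = (x << 5) | (x >> 27); // rotl(x, 5)
  return static_cast<uint8_t>(rot);
}
```
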
@quic-eikansh
Contributor Author

This pull request was created to address the reviews in #66115

@goldsteinn
Contributor

I would expect to see test differences in the commit adding the new restriction.

@quic-eikansh
Contributor Author

I would expect to see test differences in the commit adding the new restriction.

I didn't get what you meant by this. Should I have the code changes and tests in the same commit?

@goldsteinn
Contributor

I would expect to see test differences in the commit adding the new restriction.

I didn't get what you meant by this. Should I have the code changes and tests in the same commit?

Yes, otherwise the commit with the impl will not be passing (and it makes review more difficult, as it's hard to see the proper side effects of a change).

@quic-eikansh
Contributor Author

I have squashed the changes and tests into one commit. @goldsteinn

@goldsteinn
Contributor

I have squashed the changes and tests into one commit. @goldsteinn

That's not really ideal either... Then it's difficult to review the test changes.
Typically the way we do it is two separate commits, i.e.

commit 1: Tests (w/ checks before the implementation)
commit 2: Implementation (re-update test checks)

That way in commit 2 we can see what actual changes the implementation is causing.

[InstCombine] Do not simplify lshr/shl arg if it is part of fshl rotate pattern.

An fshl/fshr whose first two arguments are the same value gets lowered to a
target-specific rotate. But based on the uses, one of the arguments can get
simplified, leaving two different arguments that perform an equivalent
operation, so the rotate pattern is no longer matched.

This patch prevents the simplification of the lshr/shl arguments if
they are part of an fshl rotate pattern.
@quic-eikansh
Contributor Author

@goldsteinn I have made the changes as you suggested.

@quic-eikansh
Contributor Author

@goldsteinn Did you get a chance to look at it?

@dtcxzyw dtcxzyw requested a review from goldsteinn January 8, 2024 18:19
dtcxzyw added a commit to dtcxzyw/llvm-opt-benchmark that referenced this pull request Jan 8, 2024
/// Match UB-safe variants of the funnel shift intrinsic.
static Instruction *matchFunnelShift(Instruction &Or, InstCombinerImpl &IC,
                                     const DominatorTree &DT) {
std::optional<std::tuple<Intrinsic::ID, SmallVector<Value *, 3>>>
Member


use std::pair instead?

Contributor Author


Does std::pair have an advantage over std::tuple? I see std::tuple used in the codebase even for 2 elements. I have addressed the other 2 reviews.

Contributor


I agree that we should use std::pair instead of std::tuple if there are only two elements.
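
For reference, a sketch of what the suggested signature would look like with std::pair (hypothetical; the PR as posted keeps std::tuple):

```c++
// Hypothetical std::pair variant of the declaration in InstCombineInternal.h.
std::optional<std::pair<Intrinsic::ID, SmallVector<Value *, 3>>>
convertShlOrLShrToFShlOrFShr(Instruction &Or);
```

Structured bindings, as used at the call sites in the patch, work identically for std::pair and std::tuple, so the change would be purely a style choice.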

dtcxzyw added a commit that referenced this pull request Jan 10, 2024
This patch emits `ROTL(Cond, BitWidth - Shift)` directly in
`ReduceSwitchRange`. This should give better codegen because
`SimplifyDemandedBits` will break the rotation patterns in the original
form.

See also #73441 and the IR diff
https://github.com/dtcxzyw/llvm-opt-benchmark/pull/115/files.
This patch should cover most of the cases handled by #73441.
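
A minimal sketch of emitting that rotate directly with IRBuilder, assuming `Value *Cond` and `*Shift` of an integer type of width `BitWidth` (illustrative, not the actual ReduceSwitchRange code):

```c++
// fshl(x, x, s) is a left rotate by s, so ROTL(Cond, BitWidth - Shift)
// can be emitted as a funnel shift with equal value operands.
Value *NegShift = Builder.CreateSub(
    ConstantInt::get(Cond->getType(), BitWidth), Shift);
Value *Rot = Builder.CreateIntrinsic(Intrinsic::fshl, {Cond->getType()},
                                     {Cond, Cond, NegShift});
```
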
dtcxzyw added a commit to dtcxzyw/llvm-opt-benchmark that referenced this pull request Jan 10, 2024
justinfargnoli pushed a commit to justinfargnoli/llvm-project that referenced this pull request Jan 28, 2024
@quic-eikansh quic-eikansh requested a review from dtcxzyw February 1, 2024 18:02
dtcxzyw added a commit to dtcxzyw/llvm-opt-benchmark that referenced this pull request Feb 2, 2024
Member

@dtcxzyw dtcxzyw left a comment


LGTM. Thanks!
Please wait for additional approval from other reviewers.

@quic-eikansh
Contributor Author

@goldsteinn @nikic

@quic-eikansh
Contributor Author

LGTM. Thanks! Please wait for additional approval from other reviewers.

Thanks

@dtcxzyw
Member

dtcxzyw commented Feb 5, 2024

Ping.

Collaborator

@efriedma-quic efriedma-quic left a comment


Changes LGTM... but I'd like to land this as a series of three commits: one for the refactor (6bbc776 and the fixup), one for the new tests (b896ff5), then one for the actual behavior change.

Please push separate PRs for the refactor and the test, then I'll merge everything for you.

@nikic nikic closed this in 3363d23 Feb 16, 2024
@quic-eikansh
Contributor Author

Thanks @nikic @efriedma-quic
