KnownBits: refine high-bits of mul in signed case #113051

Closed
artagnon wants to merge 4 commits from the knownbits-mul branch

Conversation

artagnon
Contributor

KnownBits::mul suffers from the deficiency that it doesn't account for known-negative inputs. Fix it by refining the known leading zeros when both inputs are known-negative, and setting known leading ones when exactly one input is known-negative. The strategy is to keep using umul_ov after adjusting for negative inputs, and to set the known leading ones from the negation of the result when the result is known to be negative, noting that a possibly-zero result is a special case.

-- 8< --
Based on #113050.
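
To make the effect concrete, here is a small standalone sketch that mirrors the mul_high_bits_know2 test from the diff below; the main() harness and the expected outcome noted in the comments are my illustration of the patched behaviour, not code from the patch itself.

#include "llvm/ADT/APInt.h"
#include "llvm/Support/KnownBits.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

int main() {
  // LHS models `or i8 %xx, -2`: bits 1..7 are known one, so the value is -2 or -1.
  KnownBits LHS(8);
  LHS.One = APInt(8, 0xFE);
  // RHS models `(and i8 %yy, 4) | 1`: the value is 1 or 5, known non-negative
  // and non-zero.
  KnownBits RHS(8);
  RHS.One = APInt(8, 0x01);
  RHS.Zero = APInt(8, 0xFA);
  // With this patch: MaxLHS = |-2| = 2, MaxRHS = 5, umul_ov(2, 5) = 10 with no
  // overflow, the product is known negative and non-zero, and -10 = 0b11110110
  // has four leading ones, so the top four bits of the product become known
  // ones (the product lies in [-10, -1]).
  KnownBits Res = KnownBits::mul(LHS, RHS);
  Res.print(errs());
  return 0;
}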

@llvmbot added the llvm:support and llvm:analysis (includes value tracking, cost tables and constant folding) labels on Oct 19, 2024
@llvmbot
Member

llvmbot commented Oct 19, 2024

@llvm/pr-subscribers-llvm-support

Author: Ramkumar Ramachandra (artagnon)

Changes

KnownBits::mul suffers from the deficiency that it doesn't account for known-negative inputs. Fix it by refining the known leading zeros when both inputs are known-negative, and setting known leading ones when exactly one input is known-negative. The strategy is to keep using umul_ov after adjusting for negative inputs, and to set the known leading ones from the negation of the result when the result is known to be negative, noting that a possibly-zero result is a special case.

-- 8< --
Based on #113050.


Full diff: https://github.com/llvm/llvm-project/pull/113051.diff

2 Files Affected:

  • (modified) llvm/lib/Support/KnownBits.cpp (+20-12)
  • (added) llvm/test/Analysis/ValueTracking/knownbits-mul.ll (+143)
diff --git a/llvm/lib/Support/KnownBits.cpp b/llvm/lib/Support/KnownBits.cpp
index 89668af378070b..b63945f202a34d 100644
--- a/llvm/lib/Support/KnownBits.cpp
+++ b/llvm/lib/Support/KnownBits.cpp
@@ -796,19 +796,26 @@ KnownBits KnownBits::mul(const KnownBits &LHS, const KnownBits &RHS,
   assert((!NoUndefSelfMultiply || LHS == RHS) &&
          "Self multiplication knownbits mismatch");
 
-  // Compute the high known-0 bits by multiplying the unsigned max of each side.
-  // Conservatively, M active bits * N active bits results in M + N bits in the
-  // result. But if we know a value is a power-of-2 for example, then this
-  // computes one more leading zero.
-  // TODO: This could be generalized to number of sign bits (negative numbers).
-  APInt UMaxLHS = LHS.getMaxValue();
-  APInt UMaxRHS = RHS.getMaxValue();
-
-  // For leading zeros in the result to be valid, the unsigned max product must
+  // Compute the high known-0 or known-1 bits by multiplying the max of each
+  // side. Conservatively, M active bits * N active bits results in M + N bits
+  // in the result. But if we know a value is a power-of-2 for example, then
+  // this computes one more leading zero or one.
+  APInt MaxLHS = LHS.isNegative() ? LHS.getMinValue().abs() : LHS.getMaxValue(),
+        MaxRHS = RHS.isNegative() ? RHS.getMinValue().abs() : RHS.getMaxValue();
+
+  // For leading zeros or ones in the result to be valid, the max product must
   // fit in the bitwidth (it must not overflow).
   bool HasOverflow;
-  APInt UMaxResult = UMaxLHS.umul_ov(UMaxRHS, HasOverflow);
-  unsigned LeadZ = HasOverflow ? 0 : UMaxResult.countl_zero();
+  APInt Result = MaxLHS.umul_ov(MaxRHS, HasOverflow);
+  bool NegResult = LHS.isNegative() ^ RHS.isNegative();
+  unsigned LeadZ = 0, LeadO = 0;
+  if (!HasOverflow) {
+    // Do not set leading ones unless the result is known to be non-zero.
+    if (NegResult && LHS.isNonZero() && RHS.isNonZero())
+      LeadO = (-Result).countLeadingOnes();
+    else if (!NegResult)
+      LeadZ = Result.countLeadingZeros();
+  }
 
   // The result of the bottom bits of an integer multiply can be
   // inferred by looking at the bottom bits of both operands and
@@ -873,8 +880,9 @@ KnownBits KnownBits::mul(const KnownBits &LHS, const KnownBits &RHS,
 
   KnownBits Res(BitWidth);
   Res.Zero.setHighBits(LeadZ);
+  Res.One.setHighBits(LeadO);
   Res.Zero |= (~BottomKnown).getLoBits(ResultBitsKnown);
-  Res.One = BottomKnown.getLoBits(ResultBitsKnown);
+  Res.One |= BottomKnown.getLoBits(ResultBitsKnown);
 
   // If we're self-multiplying then bit[1] is guaranteed to be zero.
   if (NoUndefSelfMultiply && BitWidth > 1) {
diff --git a/llvm/test/Analysis/ValueTracking/knownbits-mul.ll b/llvm/test/Analysis/ValueTracking/knownbits-mul.ll
new file mode 100644
index 00000000000000..37526c67f0d9e1
--- /dev/null
+++ b/llvm/test/Analysis/ValueTracking/knownbits-mul.ll
@@ -0,0 +1,143 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+; RUN: opt < %s -passes=instcombine -S | FileCheck %s
+
+define i8 @mul_low_bits_know(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_low_bits_know(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    ret i8 0
+;
+  %x = and i8 %xx, 2
+  %y = and i8 %yy, 4
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 6
+  ret i8 %r
+}
+
+define i8 @mul_low_bits_know2(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_low_bits_know2(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    ret i8 0
+;
+  %x = or i8 %xx, -2
+  %y = and i8 %yy, 4
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 2
+  ret i8 %r
+}
+
+define i8 @mul_low_bits_partially_known(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_low_bits_partially_known(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    [[Y:%.*]] = or i8 [[YY]], 2
+; CHECK-NEXT:    [[MUL:%.*]] = sub nsw i8 0, [[Y]]
+; CHECK-NEXT:    [[R:%.*]] = and i8 [[MUL]], 2
+; CHECK-NEXT:    ret i8 [[R]]
+;
+  %x = or i8 %xx, -4
+  %x.notsmin = or i8 %x, 3
+  %y = or i8 %yy, -2
+  %mul = mul i8 %x.notsmin, %y
+  %r = and i8 %mul, 6
+  ret i8 %r
+}
+
+define i8 @mul_low_bits_unknown(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_low_bits_unknown(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    [[X:%.*]] = or i8 [[XX]], 4
+; CHECK-NEXT:    [[Y:%.*]] = or i8 [[YY]], 6
+; CHECK-NEXT:    [[MUL:%.*]] = mul i8 [[X]], [[Y]]
+; CHECK-NEXT:    [[R:%.*]] = and i8 [[MUL]], 6
+; CHECK-NEXT:    ret i8 [[R]]
+;
+  %x = or i8 %xx, -4
+  %y = or i8 %yy, -2
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 6
+  ret i8 %r
+}
+
+define i8 @mul_high_bits_know(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_know(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    ret i8 0
+;
+  %x = and i8 %xx, 2
+  %y = and i8 %yy, 4
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 16
+  ret i8 %r
+}
+
+define i8 @mul_high_bits_know2(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_know2(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    ret i8 -16
+;
+  %x = or i8 %xx, -2
+  %y = and i8 %yy, 4
+  %y.nonzero = or i8 %y, 1
+  %mul = mul i8 %x, %y.nonzero
+  %r = and i8 %mul, -16
+  ret i8 %r
+}
+
+define i8 @mul_high_bits_know3(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_know3(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    ret i8 0
+;
+  %x = or i8 %xx, -4
+  %y = or i8 %yy, -2
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, -16
+  ret i8 %r
+}
+
+define i8 @mul_high_bits_unknown(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_unknown(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    [[X:%.*]] = and i8 [[XX]], 2
+; CHECK-NEXT:    [[Y:%.*]] = and i8 [[YY]], 4
+; CHECK-NEXT:    [[MUL:%.*]] = mul nuw nsw i8 [[X]], [[Y]]
+; CHECK-NEXT:    ret i8 [[MUL]]
+;
+  %x = and i8 %xx, 2
+  %y = and i8 %yy, 4
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 8
+  ret i8 %r
+}
+
+define i8 @mul_high_bits_unknown2(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_unknown2(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    [[X:%.*]] = or i8 [[XX]], -2
+; CHECK-NEXT:    [[Y:%.*]] = and i8 [[YY]], 4
+; CHECK-NEXT:    [[MUL:%.*]] = mul nsw i8 [[X]], [[Y]]
+; CHECK-NEXT:    [[R:%.*]] = and i8 [[MUL]], -16
+; CHECK-NEXT:    ret i8 [[R]]
+;
+  %x = or i8 %xx, -2
+  %y = and i8 %yy, 4
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, -16
+  ret i8 %r
+}
+
+; TODO: This can be reduced to zero.
+define i8 @mul_high_bits_unknown3(i8 %xx, i8 %yy) {
+; CHECK-LABEL: define i8 @mul_high_bits_unknown3(
+; CHECK-SAME: i8 [[XX:%.*]], i8 [[YY:%.*]]) {
+; CHECK-NEXT:    [[X:%.*]] = or i8 [[XX]], 28
+; CHECK-NEXT:    [[Y:%.*]] = or i8 [[YY]], 30
+; CHECK-NEXT:    [[MUL:%.*]] = mul i8 [[X]], [[Y]]
+; CHECK-NEXT:    [[R:%.*]] = and i8 [[MUL]], 16
+; CHECK-NEXT:    ret i8 [[R]]
+;
+  %x = or i8 %xx, -4
+  %y = or i8 %yy, -2
+  %mul = mul i8 %x, %y
+  %r = and i8 %mul, 16
+  ret i8 %r
+}

@llvmbot
Member

llvmbot commented Oct 19, 2024

@llvm/pr-subscribers-llvm-analysis

@artagnon artagnon requested a review from dtcxzyw October 19, 2024 17:03
@goldsteinn
Contributor

How does this compare to Jay's impl #86671 (comment)?

@artagnon
Contributor Author

How does this compare to Jay's impl #86671 (comment)?

My patch makes KnownBits::mul strictly better for the high bits: I don't touch the low bits. Jay's implementation attempts to rewrite mul completely, at the cost of compile time.

@goldsteinn
Contributor

goldsteinn commented Oct 22, 2024

How does this compare to Jay's impl #86671 (comment)?

My patch makes KnownBits::mul strictly better for the high bits: I don't touch the low bits. Jay's implementation attempts to rewrite mul completely, at the cost of compile time.

Strictly better than current or Jay's impl?

edit: I'm generally in favor of this, but I'm also in favor of Jay's impl getting in unless the compile-time data is really bad. So my feeling is that if we are going to update it, we might as well do it with the best impl we can. We already loop through nbits for shifts, so why not for muls.

Contributor

@jayfoad jayfoad left a comment


The implementation looks fine to me. As you say it's strictly better than the status quo.

I'd prefer to put the ad hoc tests in unittests/Support/KnownBitsTest.cpp since it's a more direct way of testing the KnownBits implementation than going via ValueTracking.

For future work: I suspect that what you have implemented here is not quite optimal for the high bits. We could fix that and extend #113316 to check optimality of high bits as well as low bits.

@artagnon
Contributor Author

I'd prefer to put the ad hoc tests in unittests/Support/KnownBitsTest.cpp since it's a more direct way of testing the KnownBits implementation than going via ValueTracking.

I'm not sure we'd want to pollute the unit tests with ad-hoc tests: to me, KnownBitsTest.cpp is a collection of disciplined tests that should never fail. We already have several knownbits-*.ll files under test/Analysis/ValueTracking, and I think adding knownbits-mul.ll works in practice, as long as we pre-commit a reduced test, show that it doesn't fold before the patch, and show the fold after the patch.

For future work: I suspect that what you have implemented here is not quite optimal for the high bits. We could fix that and extend #113316 to check optimality of high bits as well as low bits.

I also suspect that it's not optimal for high bits: I think we need some logic similar to the low-bits logic later in the function. Will think about a follow-up when I have some free time, if you don't beat me to it first.

@artagnon
Contributor Author

I'm generally in favor of this, but I'm also in favor of Jay's impl getting in unless the compile-time data is really bad.

I think looping through bits is unnecessary, as the low bits are already optimal; if we need something like that for the high bits, we can work on it in a follow-up.

@artagnon
Contributor Author

For future work: I suspect that what you have implemented here is not quite optimal for the high bits. We could fix that and extend #113316 to check optimality of high bits as well as low bits.

I also suspect that it's not optimal for high bits: I think we need some logic similar to the low-bits logic later in the function.

Quick remark: I think this is sub-optimal due to the mul_ov overflowing in many cases.

@jayfoad
Contributor

jayfoad commented Oct 23, 2024

I'd prefer to put the ad hoc tests in unittests/Support/KnownBitsTest.cpp since it's a more direct way of testing the KnownBits implementation than going via ValueTracking.

I'm not sure we'd want to pollute the unit tests with ad-hoc tests: to me, KnownBitsTest.cpp is a collection of disciplined tests that should never fail.

There's no question of "pollution". It's the correct place for KnownBits tests. Yes we have tried to make most of the existing tests "disciplined" and exhaustive, but ad hoc tests can go in there too. They also have the advantage that it's much more obvious what cases are being tested - you don't have to craft IR with ands and ors to persuade ValueTracking to create the KnownBits objects you want to test.

@jayfoad
Contributor

jayfoad commented Oct 23, 2024

For future work: I suspect that what you have implemented here is not quite optimal for the high bits. We could fix that and extend #113316 to check optimality of high bits as well as low bits.

I also suspect that it's not optimal for high bits: I think we need some logic similar to the low-bits logic later in the function.

Quick remark: I think this is sub-optimal due to the mul_ov overflowing in many cases.

Yeah, I have thought about this a bit more. I suspect that making high bits optimal in all cases is just as hard as making "middle" bits optimal. So we might have to give up on that goal.

KnownBits::mul suffers from the deficiency that it doesn't account for known-negative inputs. Fix it by refining the known leading zeros when both inputs are known-negative, and setting known leading ones when exactly one input is known-negative. The strategy is to keep using umul_ov after adjusting for negative inputs, and to set the known leading ones from the negation of the result when the result is known to be negative, noting that a possibly-zero result is a special case.
@artagnon
Contributor Author

For future work: I suspect that what you have implemented here is not quite optimal for the high bits. We could fix that and extend #113316 to check optimality of high bits as well as low bits.

I also suspect that it's not optimal for high bits: I think we need some logic similar to the low-bits logic later in the function.

Quick remark: I think this is sub-optimal due to the mul_ov overflowing in many cases.

Yeah, I have thought about this a bit more. I suspect that making high bits optimal in all cases is just as hard as making "middle" bits optimal. So we might have to give up on that goal.

I have a question: can we simply do the mul_ov with twice the bitwidth to get the optimal result?

@jayfoad
Contributor

jayfoad commented Oct 23, 2024

If you multiply with twice the bit width, it will never overflow.
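
A rough sketch of that idea, reusing the MaxLHS/MaxRHS names from the patch above (my illustration, not code from either PR): widening both magnitude bounds to twice the bit width makes the overflow check unnecessary, and leading zeros can always be read off the wide product.

unsigned BitWidth = MaxLHS.getBitWidth();
// The 2*BitWidth-wide product of two BitWidth-wide values can never overflow.
APInt WideResult = MaxLHS.zext(2 * BitWidth) * MaxRHS.zext(2 * BitWidth);
// Any leading zeros beyond the first BitWidth of them carry over to the
// truncated, BitWidth-wide product.
unsigned WideLeadZ = WideResult.countl_zero();
unsigned LeadZ = WideLeadZ > BitWidth ? WideLeadZ - BitWidth : 0;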

@artagnon
Contributor Author

I'd prefer to put the ad hoc tests in unittests/Support/KnownBitsTest.cpp since it's a more direct way of testing the KnownBits implementation than going via ValueTracking.

I'm not sure we'd want to pollute the unit tests with ad-hoc tests: to me, KnownBitsTest.cpp is a collection of disciplined tests that should never fail.

There's no question of "pollution". It's the correct place for KnownBits tests. Yes we have tried to make most of the existing tests "disciplined" and exhaustive, but ad hoc tests can go in there too. They also have the advantage that it's much more obvious what cases are being tested - you don't have to craft IR with ands and ors to persuade ValueTracking to create the KnownBits objects you want to test.

Did you mean something like this? If you can check that it is okay, I will push it.

TEST(KnownBitsTest, MulHighBits) {
  unsigned Bits = 8;
  // One pair per sign combination of the two operands. A positive K encodes a
  // known non-negative, non-zero operand (bit 0 known one, only the bits of K
  // possibly set on top of that); a negative K encodes a known-negative
  // operand in the range [K, -1] (the high bits of K are known one).
  SmallVector<std::pair<int, int>, 4> TestPairs = {
      {2, 4}, {-2, -4}, {2, -4}, {-2, 4}};
  for (auto [K1, K2] : TestPairs) {
    KnownBits Known1(Bits), Known2(Bits);
    if (K1 > 0) {
      Known1.Zero |= ~(K1 | 1);
      Known1.One |= 1;
    } else {
      Known1.One |= K1;
    }
    if (K2 > 0) {
      Known2.Zero |= ~(K2 | 1);
      Known2.One |= 1;
    } else {
      Known2.One |= K2;
    }
    KnownBits Computed = KnownBits::mul(Known1, Known2);

    // Compute the exact known bits by enumerating every value consistent with
    // Known1 and Known2 and intersecting the known bits of their products.
    KnownBits Exact(Bits);
    Exact.Zero.setAllBits();
    Exact.One.setAllBits();

    ForeachNumInKnownBits(Known1, [&](const APInt &N1) {
      ForeachNumInKnownBits(Known2, [&](const APInt &N2) {
        APInt Res = N1 * N2;
        Exact.One &= Res;
        Exact.Zero &= ~Res;
      });
    });

    // Restrict the comparison to the contiguous run of known high bits, since
    // this test only checks optimality of the leading bits of the product.
    APInt Mask = APInt::getHighBitsSet(
        Bits, (Exact.Zero | Exact.One).countLeadingOnes());
    Exact.Zero &= Mask;
    Exact.One &= Mask;
    Computed.Zero &= Mask;
    Computed.One &= Mask;
    EXPECT_TRUE(checkResult("mul", Exact, Computed, {Known1, Known2},
                            /*CheckOptimality=*/true));
  }
}

@jayfoad
Contributor

jayfoad commented Oct 23, 2024

Sure that looks good, with a couple of comments to explain what is going on. It is much more general than I was expecting.

@artagnon
Contributor Author

If you multiply with twice the bit width, it will never overflow.

Right. I think I can work on a follow-up doing the mul_ov with twice the bitwidth, and make the high bits exhaustive. Will investigate after this patch lands.

@artagnon
Contributor Author

artagnon commented Oct 23, 2024

Right. I think I can work on a follow-up doing the mul_ov with twice the bitwidth, and make the high bits exhaustive. Will investigate after this patch lands.

I thought about it some more, and have come to the same conclusion as you: making high bits optimal is as hard as making the entire mul optimal.

@jayfoad
Contributor

jayfoad commented Oct 24, 2024

Right. I think I can work on a follow-up doing the mul_ov with twice the bitwidth, and make the high bits exhaustive. Will investigate after this patch lands.

I thought about it some more, and have come to the same conclusion as you: making high bits optimal is as hard as making the entire mul optimal.

Right. To explain my thinking, it is easy to make the high bits optimal for a (signed or unsigned) extending multiply where the result is twice the width of the inputs, just based on the range of the inputs. But the high bits of a normal (non-extending) multiply come from somewhere in the middle of an extending multiply's result.
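
For concreteness, here is a sketch of the easy extending-multiply case, restricted to the unsigned variant; the helper name and structure are my own, not an existing LLVM API.

#include "llvm/ADT/APInt.h"
#include "llvm/Support/KnownBits.h"
using namespace llvm;

// High known bits of an unsigned N x N -> 2N extending multiply, derived
// purely from the operand ranges (hypothetical helper, not part of this PR).
static KnownBits knownHighBitsOfUMulExt(const KnownBits &LHS,
                                        const KnownBits &RHS) {
  unsigned N = LHS.getBitWidth();
  // Unsigned multiplication is monotone, so every product lies in [Lo, Hi]
  // once the operands are widened enough to avoid wrapping.
  APInt Lo = LHS.getMinValue().zext(2 * N) * RHS.getMinValue().zext(2 * N);
  APInt Hi = LHS.getMaxValue().zext(2 * N) * RHS.getMaxValue().zext(2 * N);
  // All values in [Lo, Hi] agree on the bits above the most significant bit
  // where Lo and Hi differ.
  unsigned AgreedBits = (Lo ^ Hi).countl_zero();
  APInt Mask = APInt::getHighBitsSet(2 * N, AgreedBits);
  KnownBits Res(2 * N);
  Res.One = Lo & Mask;
  Res.Zero = ~Lo & Mask;
  return Res;
}

The non-extending case is harder precisely because a plain mul returns the low half of this 2N-bit product, so the bits pinned down here are not the ones the normal multiply produces.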

@artagnon
Contributor Author

artagnon commented Nov 5, 2024

Gentle ping. An alternative is to directly review #114211.

@nikic
Contributor

nikic commented Nov 5, 2024

For the future, can you please not create separate PRs for test additions and instead include them as a separate commit? Otherwise we cannot test the impact on llvm-opt-benchmark.

@artagnon
Contributor Author

artagnon commented Nov 5, 2024

For the future, can you please not create separate PRs for test additions and instead include them as a separate commit? Otherwise we cannot test the impact on llvm-opt-benchmark.

The test addition is included as the first commit in this PR, no?

@nikic
Contributor

nikic commented Nov 5, 2024

For the future, can you please not create separate PRs for test additions and instead include them as a separate commit? Otherwise we cannot test the impact on llvm-opt-benchmark.

The test addition is included as the first commit in this PR, no?

Oh right, I got confused here. And I see now that your follow-up PR was already tested in dtcxzyw/llvm-opt-benchmark#1591, and does not seem to provide benefit on real code.

@artagnon
Contributor Author

artagnon commented Nov 5, 2024

And I see now that your follow-up PR was already tested in dtcxzyw/llvm-opt-benchmark#1591, and does not seem to provide benefit on real code.

True, and that came as a very unfortunate surprise, as I spent quite a lot of time on these PRs :(

APInt UMaxRHS = RHS.getMaxValue();

// For leading zeros in the result to be valid, the unsigned max product must
// Compute the high known-0 or known-1 bits by multiplying the max of each
Contributor


I think you can wrap the whole of this section in if (!LHS.isSignUnknown() && !RHS.isSignUnknown()). If either sign was unknown then we would not get any useful high-zeros or high-ones info from this calculation. That would resolve my confusion about your use of isNegative below by making it clear that there is no "unknown sign" case to worry about - both operands are known negative or known non-negative.
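
For reference, a sketch of how that guard would compose with the hunk above; this paraphrases the patch, it is not the committed code.

unsigned LeadZ = 0, LeadO = 0;
if (!LHS.isSignUnknown() && !RHS.isSignUnknown()) {
  // Both signs are known, so these really are magnitude bounds and the sign
  // of the product is determined.
  APInt MaxLHS = LHS.isNegative() ? LHS.getMinValue().abs() : LHS.getMaxValue();
  APInt MaxRHS = RHS.isNegative() ? RHS.getMinValue().abs() : RHS.getMaxValue();
  bool NegResult = LHS.isNegative() ^ RHS.isNegative();
  bool HasOverflow;
  APInt Result = MaxLHS.umul_ov(MaxRHS, HasOverflow);
  if (!HasOverflow) {
    // Do not set leading ones unless the result is known to be non-zero.
    if (NegResult && LHS.isNonZero() && RHS.isNonZero())
      LeadO = (-Result).countLeadingOnes();
    else if (!NegResult)
      LeadZ = Result.countLeadingZeros();
  }
}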

Comment on lines +803 to +804
APInt MaxLHS = LHS.isNegative() ? LHS.getMinValue().abs() : LHS.getMaxValue(),
MaxRHS = RHS.isNegative() ? RHS.getMinValue().abs() : RHS.getMaxValue();
Contributor


Can you remove the .abs() calls here and use smul_ov below instead?

@artagnon
Contributor Author

artagnon commented Nov 6, 2024

@jayfoad Thanks for the review. However, after some reflection and offline discussion with @nikic, we have decided to close both PRs, since the patches don't have any benefit on real-world code, and just add compile-time. A completely optimal mul might be beneficial from a testing point of view but, as the real-world benchmarks indicate, a purely academic exercise.

@artagnon artagnon closed this Nov 6, 2024
@artagnon artagnon deleted the knownbits-mul branch November 6, 2024 15:49
Labels
llvm:analysis, llvm:support

5 participants