
Commit 51e9f33

[BasicAA] Use saturating multiply on range if nsw
If we know that the var * scale multiplication is nsw, we can use a saturating multiplication on the range (as a good approximation of an nsw multiply). This recovers some cases where the fix from D112611 is unnecessarily strict. (This can be further strengthened by using a saturating add, but we currently don't track all the necessary information for that.)

This exposes an issue in our NSW tracking for multiplies. The code was assuming that (X +nsw Y) *nsw Z results in (X *nsw Z) +nsw (Y *nsw Z) -- however, it is possible that the distributed multiplications overflow, even if the non-distributed one does not. We should discard the nsw flag if the offset is non-zero. If we just have (X *nsw Y) *nsw Z, then concluding X *nsw (Y *nsw Z) is fine.

Differential Revision: https://reviews.llvm.org/D112848
1 parent f1d32a5 commit 51e9f33

File tree

2 files changed (+11, -7 lines)


llvm/lib/Analysis/BasicAliasAnalysis.cpp

Lines changed: 9 additions & 5 deletions

@@ -360,8 +360,10 @@ struct LinearExpression {
   }
 
   LinearExpression mul(const APInt &Other, bool MulIsNSW) const {
-    return LinearExpression(Val, Scale * Other, Offset * Other,
-                            IsNSW && (Other.isOne() || MulIsNSW));
+    // The check for zero offset is necessary, because generally
+    // (X +nsw Y) *nsw Z does not imply (X *nsw Z) +nsw (Y *nsw Z).
+    bool NSW = IsNSW && (Other.isOne() || (MulIsNSW && Offset.isZero()));
+    return LinearExpression(Val, Scale * Other, Offset * Other, NSW);
   }
 };
 }
@@ -1249,12 +1251,14 @@ AliasResult BasicAAResult::aliasGEP(
     CR = CR.intersectWith(
         ConstantRange::fromKnownBits(Known, /* Signed */ true),
         ConstantRange::Signed);
+    CR = Index.Val.evaluateWith(CR).sextOrTrunc(OffsetRange.getBitWidth());
 
     assert(OffsetRange.getBitWidth() == Scale.getBitWidth() &&
            "Bit widths are normalized to MaxPointerSize");
-    OffsetRange = OffsetRange.add(
-        Index.Val.evaluateWith(CR).sextOrTrunc(OffsetRange.getBitWidth())
-            .smul_fast(ConstantRange(Scale)));
+    if (Index.IsNSW)
+      OffsetRange = OffsetRange.add(CR.smul_sat(ConstantRange(Scale)));
+    else
+      OffsetRange = OffsetRange.add(CR.smul_fast(ConstantRange(Scale)));
   }
 
   // We now have accesses at two offsets from the same base:

llvm/test/Analysis/BasicAA/assume-index-positive.ll

Lines changed: 2 additions & 2 deletions

@@ -145,12 +145,12 @@ define void @shl_of_non_negative(i8* %ptr, i64 %a) {
   ret void
 }
 
-; TODO: Unlike the previous case, %ptr.neg and %ptr.shl can't alias, because
+; Unlike the previous case, %ptr.neg and %ptr.shl can't alias, because
 ; shl nsw of non-negative is non-negative.
 define void @shl_nsw_of_non_negative(i8* %ptr, i64 %a) {
 ; CHECK-LABEL: Function: shl_nsw_of_non_negative
 ; CHECK: NoAlias: i8* %ptr.a, i8* %ptr.neg
-; CHECK: MayAlias: i8* %ptr.neg, i8* %ptr.shl
+; CHECK: NoAlias: i8* %ptr.neg, i8* %ptr.shl
   %a.cmp = icmp sge i64 %a, 0
   call void @llvm.assume(i1 %a.cmp)
   %ptr.neg = getelementptr i8, i8* %ptr, i64 -2
