[SCEV] Support larger than 64-bit types in ashr(add(shl(x, n), c), m) #71600
Conversation
In commit 5a9a02f, scalar evolution gained support for computing SCEVs for ashr(add(shl(x, n), c), m) constructs. The code, however, used APInt::getZExtValue without first checking that the APInt would fit in a uint64_t. With 128-bit types, for example, this led to assertion failures (or possibly miscompiles in non-assert builds). This patch avoids converting from APInt to uint64_t when creating the truncated constant; we can simply truncate the APInt instead.
@llvm/pr-subscribers-llvm-analysis
Author: Björn Pettersson (bjope)
Full diff: https://github.com/llvm/llvm-project/pull/71600.diff (2 files affected)
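To make the failure mode concrete, here is a minimal standalone sketch (not part of the patch; the 128-bit value, the shift amount, and the surrounding main() are made up for illustration) showing why getZExtValue() is unsafe for wide APInts and why truncating the APInt directly, as the patch does, avoids the problem:

#include "llvm/ADT/APInt.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  using llvm::APInt;
  // Illustrative value: a 128-bit constant whose active bits exceed 64.
  APInt Wide = APInt::getSignedMinValue(128); // -2^127 as a 128-bit APInt

  // Wide.getZExtValue() would trip the "Too many bits for uint64_t"
  // assertion in an assert build, and silently drop the high bits in a
  // release build.

  // Staying in APInt arithmetic avoids the narrowing round-trip entirely:
  // arithmetic-shift right, then truncate to the narrower width.
  unsigned AShrAmt = 4; // illustrative shift amount
  APInt Shifted = Wide.ashr(AShrAmt);
  APInt Trunc = Shifted.trunc(Wide.getBitWidth() - AShrAmt); // 124-bit result
  llvm::outs() << "truncated width: " << Trunc.getBitWidth() << "\n";
  return 0;
}

This mirrors the patched code path, where AddOperand.trunc(BitWidth - AShrAmt) replaces the getZExtValue() conversion.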
diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp b/llvm/lib/Analysis/ScalarEvolution.cpp
index de24aa986688a1e..a57a16600ca1e69 100644
--- a/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -7888,8 +7888,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
// same type, so create a new Constant with type same as TruncTy.
// Also, the Add constant should be shifted right by AShr amount.
APInt AddOperand = AddOperandCI->getValue().ashr(AShrAmt);
- AddConstant = getConstant(TruncTy, AddOperand.getZExtValue(),
- AddOperand.isSignBitSet());
+ AddConstant = getConstant(AddOperand.trunc(BitWidth - AShrAmt));
// we model the expression as sext(add(trunc(A), c << n)), since the
// sext(trunc) part is already handled below, we create a
// AddExpr(TruncExp) which will be used later.
diff --git a/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll b/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
index 8aac64ea4f6dc4b..1a51172edf0f3e3 100644
--- a/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
+++ b/llvm/test/Analysis/ScalarEvolution/sext-add-inreg.ll
@@ -1,19 +1,36 @@
; NOTE: Assertions have been autogenerated by utils/update_analyze_test_checks.py UTC_ARGS: --version 2
; RUN: opt < %s -disable-output "-passes=print<scalar-evolution>" 2>&1 | FileCheck %s
-define i64 @test(i64 %a) {
-; CHECK-LABEL: 'test'
-; CHECK-NEXT: Classifying expressions for: @test
+define i64 @test64(i64 %a) {
+; CHECK-LABEL: 'test64'
+; CHECK-NEXT: Classifying expressions for: @test64
; CHECK-NEXT: %add = shl i64 %a, 8
; CHECK-NEXT: --> (256 * %a) U: [0,-255) S: [-9223372036854775808,9223372036854775553)
; CHECK-NEXT: %shl = add i64 %add, 256
; CHECK-NEXT: --> (256 + (256 * %a)) U: [0,-255) S: [-9223372036854775808,9223372036854775553)
; CHECK-NEXT: %ashr = ashr exact i64 %shl, 8
; CHECK-NEXT: --> (sext i56 (1 + (trunc i64 %a to i56)) to i64) U: [-36028797018963968,36028797018963968) S: [-36028797018963968,36028797018963968)
-; CHECK-NEXT: Determining loop execution counts for: @test
+; CHECK-NEXT: Determining loop execution counts for: @test64
;
%add = shl i64 %a, 8
%shl = add i64 %add, 256
%ashr = ashr exact i64 %shl, 8
ret i64 %ashr
}
+
+define i128 @test128(i128 %a) {
+; CHECK-LABEL: 'test128'
+; CHECK-NEXT: Classifying expressions for: @test128
+; CHECK-NEXT: %shl = shl i128 %a, 4
+; CHECK-NEXT: --> (16 * %a) U: [0,-15) S: [-170141183460469231731687303715884105728,170141183460469231731687303715884105713)
+; CHECK-NEXT: %add = add i128 %shl, -55
+; CHECK-NEXT: --> (-55 + (16 * %a)) U: [-55,-70) S: [170141183460469231731687303715884105673,170141183460469231731687303715884105658)
+; CHECK-NEXT: %ashr = ashr i128 %add, 4
+; CHECK-NEXT: --> (sext i124 (-4 + (trunc i128 %a to i124)) to i128) U: [-10633823966279326983230456482242756608,10633823966279326983230456482242756608) S: [-10633823966279326983230456482242756608,10633823966279326983230456482242756608)
+; CHECK-NEXT: Determining loop execution counts for: @test128
+;
+ %shl = shl i128 %a, 4
+ %add = add i128 %shl, -55
+ %ashr = ashr i128 %add, 4
+ ret i128 %ashr
+}
Thanks for the fix, I really didn't think about this scenario while using getZExtValue()!
LGTM! Feel free to merge it.