[msan] Handle llvm.[us]cmp (starship operator) #125804
Conversation
Apply handleShadowOr to llvm.[us]cmp. Previously, llvm.[us]cmp was handled correctly by the heuristic when each parameter type is the same as the return type (e.g., `call i8 @llvm.ucmp.i8.i8(i8 %x, i8 %y)`), but handled incorrectly by visitInstruction when the return type differs (e.g., `call i8 @llvm.ucmp.i8.i62(i62 %x, i62 %y)` or `call <4 x i8> @llvm.ucmp.v4i8.v4i32(<4 x i32> %x, <4 x i32> %y)`). Updates the tests from #125790.
@llvm/pr-subscribers-llvm-transforms @llvm/pr-subscribers-compiler-rt-sanitizer
Author: Thurston Dang (thurstond)
Changes: Apply handleShadowOr to llvm.[us]cmp. Previously, llvm.[us]cmp was handled correctly by the heuristic when each parameter type is the same as the return type (e.g., `call i8 @llvm.ucmp.i8.i8(i8 %x, i8 %y)`), but handled incorrectly by visitInstruction when the return type differs. Updates the tests from #125790.
Patch is 51.30 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125804.diff — 3 Files Affected:
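The instrumentation visible in the test diff below can be summarized as: OR the operand shadows bitwise, then resize the combined shadow to the return width (trunc when narrower, zext when wider), instead of the strict visitInstruction path that calls `__msan_warning_noreturn` on any uninitialized bit. A minimal Python sketch of those semantics, purely as a model (the helper name `ucmp_shadow` and the integer encoding are illustrative assumptions, not LLVM code):

```python
def ucmp_shadow(shadow_x: int, shadow_y: int, op_bits: int, ret_bits: int) -> int:
    """Model of handleShadowOr for llvm.[us]cmp: shadow bits mark
    uninitialized bits; 0 means fully initialized."""
    # Step 1: OR the operand shadows (any poisoned input bit poisons the result).
    combined = (shadow_x | shadow_y) & ((1 << op_bits) - 1)
    # Step 2: cast to the return type's width, mirroring the trunc/zext
    # instructions MSan emits when the return type differs.
    if ret_bits < op_bits:
        return combined & ((1 << ret_bits) - 1)  # trunc keeps the low bits
    return combined  # zext pads with zero high bits (a no-op here)

# Fully initialized operands give a fully initialized result:
assert ucmp_shadow(0, 0, 16, 8) == 0
# A poisoned low bit survives the trunc and taints the result shadow:
assert ucmp_shadow(0b0100, 0, 16, 8) == 0b0100
```

Note that the trunc discards high shadow bits (e.g., bit 10 of an i16 shadow vanishes when truncated to i8), so this propagation is an approximation rather than a bit-precise mapping; the upside, shown in the test updates, is that it never falsely reports a fully-initialized comparison.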
diff --git a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
index f3f2e5041fb1d3..7f714d0b3d1b0a 100644
--- a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
@@ -4800,6 +4800,12 @@ struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
break;
}
+ case Intrinsic::scmp:
+ case Intrinsic::ucmp: {
+ handleShadowOr(I);
+ break;
+ }
+
default:
if (!handleUnknownIntrinsic(I))
visitInstruction(I);
diff --git a/llvm/test/Instrumentation/MemorySanitizer/scmp.ll b/llvm/test/Instrumentation/MemorySanitizer/scmp.ll
index 89c5b283b25103..5c94c216106a2c 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/scmp.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/scmp.ll
@@ -1,14 +1,6 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 4
; RUN: opt < %s -passes=msan -S | FileCheck %s
;
-; llvm.scmp is correctly handled heuristically when each parameter is the same
-; type as the return type e.g.,
-; call i8 @llvm.scmp.i8.i8(i8 %x, i8 %y)
-; but handled incorrectly by visitInstruction when the return type is different
-; e.g.,
-; call i8 @llvm.scmp.i8.i62(i62 %x, i62 %y)
-; call <4 x i8> @llvm.scmp.v4i8.v4i32(<4 x i32> %x, <4 x i32> %y)
-;
; Forked from llvm/test/CodeGen/X86/scmp.ll
target datalayout = "e-m:o-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
@@ -21,8 +13,9 @@ define i8 @scmp.8.8(i8 %x, i8 %y) nounwind #0 {
; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
; CHECK-NEXT: [[_MSPROP:%.*]] = or i8 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i8 [[_MSPROP]], 0
; CHECK-NEXT: [[TMP3:%.*]] = call i8 @llvm.scmp.i8.i8(i8 [[X]], i8 [[Y]])
-; CHECK-NEXT: store i8 [[_MSPROP]], ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[_MSPROP1]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP3]]
;
%1 = call i8 @llvm.scmp(i8 %x, i8 %y)
@@ -35,16 +28,11 @@ define i8 @scmp.8.16(i16 %x, i16 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i16, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i16, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i16 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i16 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4:[0-9]+]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i16 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i16 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i16 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i16(i16 [[X]], i16 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i16 %x, i16 %y)
@@ -57,16 +45,11 @@ define i8 @scmp.8.32(i32 %x, i32 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i32 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i32 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i32 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i32 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i32(i32 [[X]], i32 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i32 %x, i32 %y)
@@ -79,16 +62,11 @@ define i8 @scmp.8.64(i64 %x, i64 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i64, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i64, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i64 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i64 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i64 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i64(i64 [[X]], i64 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i64 %x, i64 %y)
@@ -101,16 +79,11 @@ define i8 @scmp.8.128(i128 %x, i128 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i128, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i128, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 16) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i128 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i128 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i128 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i128 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i128 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i128(i128 [[X]], i128 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i128 %x, i128 %y)
@@ -124,8 +97,9 @@ define i32 @scmp.32.32(i32 %x, i32 %y) nounwind #0 {
; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
; CHECK-NEXT: [[_MSPROP:%.*]] = or i32 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i32 [[_MSPROP]], 0
; CHECK-NEXT: [[TMP3:%.*]] = call i32 @llvm.scmp.i32.i32(i32 [[X]], i32 [[Y]])
-; CHECK-NEXT: store i32 [[_MSPROP]], ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i32 [[_MSPROP1]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i32 [[TMP3]]
;
%1 = call i32 @llvm.scmp(i32 %x, i32 %y)
@@ -138,16 +112,11 @@ define i32 @scmp.32.64(i64 %x, i64 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i64, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i64, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i64 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i64 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i64 [[_MSPROP1]] to i32
; CHECK-NEXT: [[TMP5:%.*]] = call i32 @llvm.scmp.i32.i64(i64 [[X]], i64 [[Y]])
-; CHECK-NEXT: store i32 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i32 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i32 [[TMP5]]
;
%1 = call i32 @llvm.scmp(i64 %x, i64 %y)
@@ -161,8 +130,9 @@ define i64 @scmp.64.64(i64 %x, i64 %y) nounwind #0 {
; CHECK-NEXT: [[TMP2:%.*]] = load i64, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
; CHECK-NEXT: [[_MSPROP:%.*]] = or i64 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i64 [[_MSPROP]], 0
; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.scmp.i64.i64(i64 [[X]], i64 [[Y]])
-; CHECK-NEXT: store i64 [[_MSPROP]], ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i64 [[_MSPROP1]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i64 [[TMP3]]
;
%1 = call i64 @llvm.scmp(i64 %x, i64 %y)
@@ -175,16 +145,11 @@ define i4 @scmp_narrow_result(i32 %x, i32 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i32 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i32 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i32 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i32 [[_MSPROP1]] to i4
; CHECK-NEXT: [[TMP5:%.*]] = call i4 @llvm.scmp.i4.i32(i32 [[X]], i32 [[Y]])
-; CHECK-NEXT: store i4 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i4 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i4 [[TMP5]]
;
%1 = call i4 @llvm.scmp(i32 %x, i32 %y)
@@ -197,16 +162,11 @@ define i8 @scmp_narrow_op(i62 %x, i62 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i62, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i62, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i62 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i62 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i62 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i62 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i62 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i62(i62 [[X]], i62 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i62 %x, i62 %y)
@@ -219,16 +179,11 @@ define i141 @scmp_wide_result(i32 %x, i32 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i32 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i32 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i32 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = zext i32 [[_MSPROP1]] to i141
; CHECK-NEXT: [[TMP5:%.*]] = call i141 @llvm.scmp.i141.i32(i32 [[X]], i32 [[Y]])
-; CHECK-NEXT: store i141 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i141 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i141 [[TMP5]]
;
%1 = call i141 @llvm.scmp(i32 %x, i32 %y)
@@ -241,16 +196,11 @@ define i8 @scmp_wide_op(i109 %x, i109 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i109, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i109, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 16) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i109 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i109 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i109 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i109 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = trunc i109 [[_MSPROP1]] to i8
; CHECK-NEXT: [[TMP5:%.*]] = call i8 @llvm.scmp.i8.i109(i109 [[X]], i109 [[Y]])
-; CHECK-NEXT: store i8 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i8 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i8 [[TMP5]]
;
%1 = call i8 @llvm.scmp(i109 %x, i109 %y)
@@ -263,16 +213,11 @@ define i41 @scmp_uncommon_types(i7 %x, i7 %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load i7, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load i7, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i7 [[TMP1]], 0
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i7 [[TMP2]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
-; CHECK: 3:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 4:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or i7 [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or i7 [[_MSPROP]], 0
+; CHECK-NEXT: [[TMP3:%.*]] = zext i7 [[_MSPROP1]] to i41
; CHECK-NEXT: [[TMP5:%.*]] = call i41 @llvm.scmp.i41.i7(i7 [[X]], i7 [[Y]])
-; CHECK-NEXT: store i41 0, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store i41 [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret i41 [[TMP5]]
;
%1 = call i41 @llvm.scmp(i7 %x, i7 %y)
@@ -286,8 +231,9 @@ define <4 x i32> @scmp_normal_vectors(<4 x i32> %x, <4 x i32> %y) nounwind #0 {
; CHECK-NEXT: [[TMP2:%.*]] = load <4 x i32>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 16) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
; CHECK-NEXT: [[_MSPROP:%.*]] = or <4 x i32> [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], zeroinitializer
; CHECK-NEXT: [[TMP3:%.*]] = call <4 x i32> @llvm.scmp.v4i32.v4i32(<4 x i32> [[X]], <4 x i32> [[Y]])
-; CHECK-NEXT: store <4 x i32> [[_MSPROP]], ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store <4 x i32> [[_MSPROP1]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret <4 x i32> [[TMP3]]
;
%1 = call <4 x i32> @llvm.scmp(<4 x i32> %x, <4 x i32> %y)
@@ -300,18 +246,11 @@ define <4 x i8> @scmp_narrow_vec_result(<4 x i32> %x, <4 x i32> %y) nounwind #0
; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i32>, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load <4 x i32>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 16) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[TMP3:%.*]] = bitcast <4 x i32> [[TMP1]] to i128
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT: [[TMP4:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i128 [[TMP4]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK: 5:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 6:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or <4 x i32> [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], zeroinitializer
+; CHECK-NEXT: [[TMP3:%.*]] = trunc <4 x i32> [[_MSPROP1]] to <4 x i8>
; CHECK-NEXT: [[TMP7:%.*]] = call <4 x i8> @llvm.scmp.v4i8.v4i32(<4 x i32> [[X]], <4 x i32> [[Y]])
-; CHECK-NEXT: store <4 x i8> zeroinitializer, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store <4 x i8> [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret <4 x i8> [[TMP7]]
;
%1 = call <4 x i8> @llvm.scmp(<4 x i32> %x, <4 x i32> %y)
@@ -324,18 +263,11 @@ define <4 x i32> @scmp_narrow_vec_op(<4 x i8> %x, <4 x i8> %y) nounwind #0 {
; CHECK-NEXT: [[TMP1:%.*]] = load <4 x i8>, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load <4 x i8>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 8) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[TMP3:%.*]] = bitcast <4 x i8> [[TMP1]] to i32
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i32 [[TMP3]], 0
-; CHECK-NEXT: [[TMP4:%.*]] = bitcast <4 x i8> [[TMP2]] to i32
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i32 [[TMP4]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK: 5:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-; CHECK: 6:
+; CHECK-NEXT: [[_MSPROP:%.*]] = or <4 x i8> [[TMP1]], [[TMP2]]
+; CHECK-NEXT: [[_MSPROP1:%.*]] = or <4 x i8> [[_MSPROP]], zeroinitializer
+; CHECK-NEXT: [[TMP3:%.*]] = zext <4 x i8> [[_MSPROP1]] to <4 x i32>
; CHECK-NEXT: [[TMP7:%.*]] = call <4 x i32> @llvm.scmp.v4i32.v4i8(<4 x i8> [[X]], <4 x i8> [[Y]])
-; CHECK-NEXT: store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
+; CHECK-NEXT: store <4 x i32> [[TMP3]], ptr @__msan_retval_tls, align 8
; CHECK-NEXT: ret <4 x i32> [[TMP7]]
;
%1 = call <4 x i32> @llvm.scmp(<4 x i8> %x, <4 x i8> %y)
@@ -348,18 +280,11 @@ define <16 x i32> @scmp_wide_vec_result(<16 x i8> %x, <16 x i8> %y) nounwind #0
; CHECK-NEXT: [[TMP1:%.*]] = load <16 x i8>, ptr @__msan_param_tls, align 8
; CHECK-NEXT: [[TMP2:%.*]] = load <16 x i8>, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__msan_param_tls to i64), i64 16) to ptr), align 8
; CHECK-NEXT: call void @llvm.donothing()
-; CHECK-NEXT: [[TMP3:%.*]] = bitcast <16 x i8> [[TMP1]] to i128
-; CHECK-NEXT: [[_MSCMP:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT: [[TMP4:%.*]] = bitcast <16 x i8> [[TMP2]] to i128
-; CHECK-NEXT: [[_MSCMP1:%.*]] = icmp ne i128 [[TMP4]], 0
-; CHECK-NEXT: [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT: br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK: 5:
-; CHECK-NEXT: call void @__msan_warning_noreturn() #[[ATTR4]]
-; CHECK-NEXT: unreachable
-;...
[truncated]
Apply handleShadowOr to llvm.[us]cmp. Previously, llvm.[su]cmp was correctly handled heuristically when each parameter type is the same as the return type (e.g., `call i8 @llvm.ucmp.i8.i8(i8 %x, i8 %y)`) but handled incorrectly by visitInstruction when the return type is different e.g., (`call i8 @llvm.ucmp.i8.i62(i62 %x, i62 %y)`, `call <4 x i8> @llvm.ucmp.v4i8.v4i32(<4 x i32> %x, <4 x i32> %y)`). Updates the tests from llvm#125790