[RISCV] Add IntrArgMemOnly for vector load/store intrinsic #78415
Conversation
@llvm/pr-subscribers-llvm-ir

Author: Jianjian Guan (jacquesguan)

Changes

IntrArgMemOnly means the intrinsic only accesses memory that its pointer-typed argument(s) point to. I think the RVV load/store intrinsics meet this requirement. Adding IntrArgMemOnly would help some passes; for example, it allows alias.scope metadata to be added to the intrinsic calls in a callee when inlining a function that has noalias parameter(s).

Full diff: https://github.com/llvm/llvm-project/pull/78415.diff

1 file affected:
diff --git a/llvm/include/llvm/IR/IntrinsicsRISCV.td b/llvm/include/llvm/IR/IntrinsicsRISCV.td
index a391bc53cdb0e99..b140e31ca263e18 100644
--- a/llvm/include/llvm/IR/IntrinsicsRISCV.td
+++ b/llvm/include/llvm/IR/IntrinsicsRISCV.td
@@ -147,7 +147,8 @@ let TargetPrefix = "riscv" in {
class RISCVUSMLoad
: DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_ptr_ty, llvm_anyint_ty],
- [NoCapture<ArgIndex<0>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<0>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 1;
}
// For unit stride load
@@ -155,7 +156,8 @@ let TargetPrefix = "riscv" in {
class RISCVUSLoad
: DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[LLVMMatchType<0>, llvm_ptr_ty, llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 2;
}
// For unit stride fault-only-first load
@@ -177,7 +179,8 @@ let TargetPrefix = "riscv" in {
[LLVMMatchType<0>, llvm_ptr_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<4>>, IntrReadMem]>,
+ [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<4>>, IntrReadMem,
+ IntrArgMemOnly]>,
RISCVVIntrinsic {
let VLOperand = 3;
}
@@ -200,7 +203,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[LLVMMatchType<0>, llvm_ptr_ty,
llvm_anyint_ty, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 3;
}
// For strided load with mask
@@ -210,7 +214,8 @@ let TargetPrefix = "riscv" in {
[LLVMMatchType<0>, llvm_ptr_ty, llvm_anyint_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, LLVMMatchType<1>,
LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<5>>, IntrReadMem]>,
+ [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<5>>, IntrReadMem,
+ IntrArgMemOnly]>,
RISCVVIntrinsic {
let VLOperand = 4;
}
@@ -220,7 +225,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[LLVMMatchType<0>, llvm_ptr_ty,
llvm_anyvector_ty, llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 3;
}
// For indexed load with mask
@@ -230,7 +236,8 @@ let TargetPrefix = "riscv" in {
[LLVMMatchType<0>, llvm_ptr_ty, llvm_anyvector_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_anyint_ty,
LLVMMatchType<2>],
- [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<5>>, IntrReadMem]>,
+ [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<5>>, IntrReadMem,
+ IntrArgMemOnly]>,
RISCVVIntrinsic {
let VLOperand = 4;
}
@@ -239,7 +246,8 @@ let TargetPrefix = "riscv" in {
class RISCVUSStore
: DefaultAttrsIntrinsic<[],
[llvm_anyvector_ty, llvm_ptr_ty, llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 2;
}
// For unit stride store with mask
@@ -249,7 +257,8 @@ let TargetPrefix = "riscv" in {
[llvm_anyvector_ty, llvm_ptr_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 3;
}
// For strided store
@@ -258,7 +267,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[],
[llvm_anyvector_ty, llvm_ptr_ty,
llvm_anyint_ty, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 3;
}
// For stride store with mask
@@ -267,7 +277,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[],
[llvm_anyvector_ty, llvm_ptr_ty, llvm_anyint_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 4;
}
// For indexed store
@@ -276,7 +287,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[],
[llvm_anyvector_ty, llvm_ptr_ty,
llvm_anyint_ty, llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 3;
}
// For indexed store with mask
@@ -285,7 +297,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[],
[llvm_anyvector_ty, llvm_ptr_ty, llvm_anyvector_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_anyint_ty],
- [NoCapture<ArgIndex<1>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = 4;
}
// For destination vector type is the same as source vector.
@@ -992,7 +1005,8 @@ let TargetPrefix = "riscv" in {
!add(nf, -1))),
!listconcat(!listsplat(LLVMMatchType<0>, nf),
[llvm_ptr_ty, llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 1);
}
// For unit stride segment load with mask
@@ -1004,8 +1018,9 @@ let TargetPrefix = "riscv" in {
[llvm_ptr_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty, LLVMMatchType<1>]),
- [ImmArg<ArgIndex<!add(nf, 3)>>, NoCapture<ArgIndex<nf>>, IntrReadMem]>,
- RISCVVIntrinsic {
+ [ImmArg<ArgIndex<!add(nf, 3)>>, NoCapture<ArgIndex<nf>>, IntrReadMem,
+ IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
@@ -1046,7 +1061,8 @@ let TargetPrefix = "riscv" in {
!add(nf, -1))),
!listconcat(!listsplat(LLVMMatchType<0>, nf),
[llvm_ptr_ty, llvm_anyint_ty, LLVMMatchType<1>]),
- [NoCapture<ArgIndex<nf>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
// For stride segment load with mask
@@ -1059,8 +1075,9 @@ let TargetPrefix = "riscv" in {
llvm_anyint_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
LLVMMatchType<1>, LLVMMatchType<1>]),
- [ImmArg<ArgIndex<!add(nf, 4)>>, NoCapture<ArgIndex<nf>>, IntrReadMem]>,
- RISCVVIntrinsic {
+ [ImmArg<ArgIndex<!add(nf, 4)>>, NoCapture<ArgIndex<nf>>,
+ IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 3);
}
@@ -1071,7 +1088,8 @@ let TargetPrefix = "riscv" in {
!add(nf, -1))),
!listconcat(!listsplat(LLVMMatchType<0>, nf),
[llvm_ptr_ty, llvm_anyvector_ty, llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
// For indexed segment load with mask
@@ -1084,8 +1102,9 @@ let TargetPrefix = "riscv" in {
llvm_anyvector_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty, LLVMMatchType<2>]),
- [ImmArg<ArgIndex<!add(nf, 4)>>, NoCapture<ArgIndex<nf>>, IntrReadMem]>,
- RISCVVIntrinsic {
+ [ImmArg<ArgIndex<!add(nf, 4)>>, NoCapture<ArgIndex<nf>>,
+ IntrReadMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 3);
}
@@ -1096,7 +1115,8 @@ let TargetPrefix = "riscv" in {
!listconcat([llvm_anyvector_ty],
!listsplat(LLVMMatchType<0>, !add(nf, -1)),
[llvm_ptr_ty, llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 1);
}
// For unit stride segment store with mask
@@ -1108,7 +1128,8 @@ let TargetPrefix = "riscv" in {
[llvm_ptr_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
@@ -1120,7 +1141,8 @@ let TargetPrefix = "riscv" in {
!listsplat(LLVMMatchType<0>, !add(nf, -1)),
[llvm_ptr_ty, llvm_anyint_ty,
LLVMMatchType<1>]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
// For stride segment store with mask
@@ -1132,7 +1154,8 @@ let TargetPrefix = "riscv" in {
[llvm_ptr_ty, llvm_anyint_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
LLVMMatchType<1>]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 3);
}
@@ -1144,7 +1167,8 @@ let TargetPrefix = "riscv" in {
!listsplat(LLVMMatchType<0>, !add(nf, -1)),
[llvm_ptr_ty, llvm_anyvector_ty,
llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 2);
}
// For indexed segment store with mask
@@ -1156,7 +1180,8 @@ let TargetPrefix = "riscv" in {
[llvm_ptr_ty, llvm_anyvector_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty]),
- [NoCapture<ArgIndex<nf>>, IntrWriteMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<nf>>, IntrWriteMem, IntrArgMemOnly]>,
+ RISCVVIntrinsic {
let VLOperand = !add(nf, 3);
}
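For context, here is a rough IR-level sketch (not part of the patch) of how these attributes are expected to surface on the unit-stride intrinsics. The mangled intrinsic names, the nocapture spelling, and the exact attribute placement are assumptions for illustration; the relevant point is that IntrReadMem or IntrWriteMem combined with IntrArgMemOnly corresponds to memory(argmem: read) or memory(argmem: write) on the declaration:

; Unit-stride load (RISCVUSLoad): may only read memory through its pointer argument.
declare <vscale x 2 x i32> @llvm.riscv.vle.nxv2i32.i64(<vscale x 2 x i32>, ptr nocapture, i64)
    memory(argmem: read)

; Unit-stride store (RISCVUSStore): may only write memory through its pointer argument.
declare void @llvm.riscv.vse.nxv2i32.i64(<vscale x 2 x i32>, ptr nocapture, i64)
    memory(argmem: write)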
@llvm/pr-subscribers-backend-risc-v

Author: Jianjian Guan (jacquesguan)

(Same summary and full diff as in the @llvm/pr-subscribers-llvm-ir comment above.)
@@ -177,7 +179,8 @@ let TargetPrefix = "riscv" in {
[LLVMMatchType<0>, llvm_ptr_ty,
LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>,
llvm_anyint_ty, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<4>>, IntrReadMem]>,
+ [NoCapture<ArgIndex<1>>, ImmArg<ArgIndex<4>>, IntrReadMem,
Fault-only-first load may write vl; is this still IntrArgMemOnly?
I presume no, from the definition of IntrArgMemOnly in Intrinsics.td:
"Other than reads from and (possibly volatile) writes to memory, it has no side effects."
This class isn't used for fault-only-first loads. That's RISCVUSLoadFF and RISCVUSLoadFFMasked. This patch isn't touching those intrinsics.
Oh, I misread it; I was reading the line below.
@@ -200,7 +203,8 @@ let TargetPrefix = "riscv" in {
: DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[LLVMMatchType<0>, llvm_ptr_ty,
llvm_anyint_ty, LLVMMatchType<1>],
- [NoCapture<ArgIndex<1>>, IntrReadMem]>, RISCVVIntrinsic {
+ [NoCapture<ArgIndex<1>>, IntrReadMem, IntrArgMemOnly]>,
I'm not sure if this is valid for strided or indexed load/store. The address isn't fully described by the pointer argument.
The AArch64 backend adds IntrArgMemOnly for gather and scatter intrinsics, so I think it's OK to add it for strided/indexed load/store.
And I removed it from X86 gather/scatter a few years ago.
commit 35f55d72f6f8895d64171a42ee0635ef8d76c61d
Author: Craig Topper <[email protected]>
Date: Fri Mar 1 13:02:40 2019
[X86] Remove IntrArgMemOnly from target specific gather/scatter intrinsics
IntrArgMemOnly implies that only memory pointed to by pointer typed arguments will be accessed. But these intrinsics allow you to pass null to the pointer argument and put the full address into the index argument. Other passes won't be able to understand this.
A colleague found that ISPC was creating gathers like this and then dead store elimination removed some stores because it didn't understand what the gather was doing since the pointer argument was null.
Differential Revision: https://reviews.llvm.org/D58805
llvm-svn: 355228
Your commit makes sense. I still don't fully understand the comment on IntrArgMemOnly, though; it says "but may access an unspecified amount", which seems to allow the intrinsic to access more memory.
But still derived from the pointer, i.e., not outside the bounds of the underlying object.
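To make the hazard from that commit concrete, here is a minimal hand-written sketch transposed to an RVV indexed load; the intrinsic name, mangling, and declaration below are assumptions for illustration, not code from this patch. If the pointer operand is null and the real addresses are carried in the index vector, an argument-memory-only summary tells alias analysis that the call touches no identifiable object, so a pass such as dead store elimination could delete stores the load actually reads:

; Hypothetical misuse: the base pointer is null and %addrs holds absolute addresses.
; An argmem-only attribute would claim the call only touches memory reachable from
; its pointer argument (null here), which is wrong for this pattern.
define <vscale x 2 x i32> @null_base_gather(<vscale x 2 x i64> %addrs, i64 %vl) {
  %v = call <vscale x 2 x i32> @llvm.riscv.vluxei.nxv2i32.nxv2i64.i64(
           <vscale x 2 x i32> poison, ptr null, <vscale x 2 x i64> %addrs, i64 %vl)
  ret <vscale x 2 x i32> %v
}

; Illustrative declaration; the real intrinsic mangling may differ.
declare <vscale x 2 x i32> @llvm.riscv.vluxei.nxv2i32.nxv2i64.i64(<vscale x 2 x i32>, ptr, <vscale x 2 x i64>, i64)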
Changed to add it only for unit-stride load/store.
Force-pushed from affc8d5 to 6b6fd48.
LGTM
IntrArgMemOnly means the intrinsic only accesses memory that its pointer-typed argument(s) point to. I think the RVV load/store intrinsics meet this requirement. Adding IntrArgMemOnly would help some passes; for example, it allows alias.scope metadata to be added to the intrinsic calls in a callee when inlining a function that has noalias parameter(s).
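As a rough illustration of that inlining benefit (the function, value names, and metadata numbering below are made up, and the post-inlining annotations show what the inliner's scoped-noalias logic may attach, not verified output): when the callee's pointer parameters are noalias and its intrinsic calls are known to access only argument memory, the inliner can tag the inlined calls with !alias.scope and !noalias, so later passes know the load from %src and the store to %dst do not alias:

; Callee: both intrinsic calls access memory only through their pointer arguments.
; Intrinsic names and mangling are illustrative.
define void @copy_vec(ptr noalias %dst, ptr noalias %src, i64 %vl) {
  %v = call <vscale x 2 x i32> @llvm.riscv.vle.nxv2i32.i64(<vscale x 2 x i32> poison, ptr %src, i64 %vl)
  call void @llvm.riscv.vse.nxv2i32.i64(<vscale x 2 x i32> %v, ptr %dst, i64 %vl)
  ret void
}

; After @copy_vec is inlined into a caller, the calls may end up annotated roughly like:
;   %v = call <vscale x 2 x i32> @llvm.riscv.vle.nxv2i32.i64(..., ptr %src, ...), !alias.scope !1, !noalias !2
;   call void @llvm.riscv.vse.nxv2i32.i64(..., ptr %dst, ...), !alias.scope !2, !noalias !1

declare <vscale x 2 x i32> @llvm.riscv.vle.nxv2i32.i64(<vscale x 2 x i32>, ptr, i64)
declare void @llvm.riscv.vse.nxv2i32.i64(<vscale x 2 x i32>, ptr, i64)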