[RISCV][GISel] Don't custom legalize load/store of vector of pointers if ELEN < XLEN. #101565
Conversation
@llvm/pr-subscribers-backend-risc-v

Author: Craig Topper (topperc)

Changes

We need to have elements that can hold a pointer sized element.

No test because it crashes in LowerLoad or LowerStore now, which needs to be addressed separately.

I also reordered things so all the vector load/store stuff is together.

Full diff: https://github.com/llvm/llvm-project/pull/101565.diff

1 Files Affected:
diff --git a/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp b/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
index 4e583d96335d9..74bfe8b838af7 100644
--- a/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
+++ b/llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
@@ -285,7 +285,22 @@ RISCVLegalizerInfo::RISCVLegalizerInfo(const RISCVSubtarget &ST)
{s32, p0, s16, 16},
{s32, p0, s32, 32},
{p0, p0, sXLen, XLen}});
- if (ST.hasVInstructions())
+ auto &ExtLoadActions =
+ getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD})
+ .legalForTypesWithMemDesc({{s32, p0, s8, 8}, {s32, p0, s16, 16}});
+ if (XLen == 64) {
+ LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s8, 8},
+ {s64, p0, s16, 16},
+ {s64, p0, s32, 32},
+ {s64, p0, s64, 64}});
+ ExtLoadActions.legalForTypesWithMemDesc(
+ {{s64, p0, s8, 8}, {s64, p0, s16, 16}, {s64, p0, s32, 32}});
+ } else if (ST.hasStdExtD()) {
+ LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s64, 64}});
+ }
+
+ // Vector loads/stores.
+ if (ST.hasVInstructions()) {
LoadStoreActions.legalForTypesWithMemDesc({{nxv2s8, p0, nxv2s8, 8},
{nxv4s8, p0, nxv4s8, 8},
{nxv8s8, p0, nxv8s8, 8},
@@ -302,38 +317,28 @@ RISCVLegalizerInfo::RISCVLegalizerInfo(const RISCVSubtarget &ST)
{nxv8s32, p0, nxv8s32, 32},
{nxv16s32, p0, nxv16s32, 32}});
- auto &ExtLoadActions =
- getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD})
- .legalForTypesWithMemDesc({{s32, p0, s8, 8}, {s32, p0, s16, 16}});
- if (XLen == 64) {
- LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s8, 8},
- {s64, p0, s16, 16},
- {s64, p0, s32, 32},
- {s64, p0, s64, 64}});
- ExtLoadActions.legalForTypesWithMemDesc(
- {{s64, p0, s8, 8}, {s64, p0, s16, 16}, {s64, p0, s32, 32}});
- } else if (ST.hasStdExtD()) {
- LoadStoreActions.legalForTypesWithMemDesc({{s64, p0, s64, 64}});
- }
- if (ST.hasVInstructions() && ST.getELen() == 64)
- LoadStoreActions.legalForTypesWithMemDesc({{nxv1s8, p0, nxv1s8, 8},
- {nxv1s16, p0, nxv1s16, 16},
- {nxv1s32, p0, nxv1s32, 32}});
+ if (ST.getELen() == 64)
+ LoadStoreActions.legalForTypesWithMemDesc({{nxv1s8, p0, nxv1s8, 8},
+ {nxv1s16, p0, nxv1s16, 16},
+ {nxv1s32, p0, nxv1s32, 32}});
+
+ if (ST.hasVInstructionsI64())
+ LoadStoreActions.legalForTypesWithMemDesc({{nxv1s64, p0, nxv1s64, 64},
+ {nxv2s64, p0, nxv2s64, 64},
+ {nxv4s64, p0, nxv4s64, 64},
+ {nxv8s64, p0, nxv8s64, 64}});
- if (ST.hasVInstructionsI64())
- LoadStoreActions.legalForTypesWithMemDesc({{nxv1s64, p0, nxv1s64, 64},
+ // we will take the custom lowering logic if we have scalable vector types
+ // with non-standard alignments
+ LoadStoreActions.customIf(typeIsLegalIntOrFPVec(0, IntOrFPVecTys, ST));
- {nxv2s64, p0, nxv2s64, 64},
- {nxv4s64, p0, nxv4s64, 64},
- {nxv8s64, p0, nxv8s64, 64}});
+ // Pointers require that XLen sized elements are legal.
+ if (XLen <= ST.getELen())
+ LoadStoreActions.customIf(typeIsLegalPtrVec(0, PtrVecTys, ST));
+ }
LoadStoreActions.widenScalarToNextPow2(0, /* MinSize = */ 8)
.lowerIfMemSizeNotByteSizePow2()
- // we will take the custom lowering logic if we have scalable vector types
- // with non-standard alignments
- .customIf(LegalityPredicate(
- LegalityPredicates::any(typeIsLegalIntOrFPVec(0, IntOrFPVecTys, ST),
- typeIsLegalPtrVec(0, PtrVecTys, ST))))
.clampScalar(0, s32, sXLen)
.lower();
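For context, a minimal sketch (a hypothetical helper, not code from the patch) of the constraint the new XLen <= ST.getELen() guard encodes:

// A vector of pointers has sXLen-sized elements, so the custom
// lowering is only reachable when the vector unit's maximum element
// width (ELEN) can hold a pointer.
static bool ptrVecElementsFit(unsigned XLen, unsigned ELen) {
  // e.g. RV64 (XLen = 64) with Zve32x (ELEN = 32) -> false; such a
  // load/store falls through to the generic lowering instead.
  return XLen <= ELen;
}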
Ping
No test because it crashes in LowerLoad or LowerStore now, which needs to be addressed separately.
Could you create an issue for this?
{nxv8s64, p0, nxv8s64, 64}});
// Pointers require that XLen sized elements are legal.
if (XLen <= ST.getELen())
  LoadStoreActions.customIf(typeIsLegalPtrVec(0, PtrVecTys, ST));
// Pointers require that XLen sized elements are legal.
Do we need to mark these as legal in the ELEN < XLEN case, with a call to legalIf?
We don't seem to have marked any pointer vectors with legalIf. I think they just go to the custom handler and get treated as legal if they pass the alignment check.
Should we be marking pointer types legal with legalIf? The only case they need to go through custom is when they have special alignment.
Maybe. It would make the custom handler simpler: we wouldn't need to call allowsMemoryAccessForAlignment, since we wouldn't get to the custom handler if we had already checked the alignment with legalIf.
On the other hand, calling allowsMemoryAccessForAlignment allows us to handle FeatureUnalignedVectorMem correctly. So maybe that's a vote for removing the legalIf and just letting the custom handler always handle it?
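For reference, a rough sketch of the shape of that check, using TargetLowering's allowsMemoryAccessForAlignment hook; this is illustrative, not the actual RISC-V handler:

#include "llvm/CodeGen/TargetLowering.h"

using namespace llvm;

// Deferring the alignment decision to the subtarget means features
// like FeatureUnalignedVectorMem are honored without hardcoding a
// list of aligned cases in the ruleset.
static bool accessIsAllowed(const TargetLowering &TLI, LLVMContext &Ctx,
                            const DataLayout &DL, EVT VT,
                            const MachineMemOperand &MMO) {
  return TLI.allowsMemoryAccessForAlignment(Ctx, DL, VT, MMO);
}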
On the other hand, calling allowsMemoryAccessForAlignment allows us to handle FeatureUnalignedVectorMem correctly. So maybe that's a vote for removing the legalIf and just letting the custom handler always handle it?
Do we care about FeatureUnalignedVectorMem in the aligned cases? If we were to move the legalIf, it could probably be done in another patch because it doesn't have to do with pointers.
Do we care about FeatureUnalignedVectorMem in the aligned cases? If we were to move the legalIf, it could probably be done in another patch because it doesn't have to do with pointers.
No. But by listing the aligned cases explicitly with legalIf we're effectively hardcoding what allowsMemoryAccessForAlignment is already capable of checking. Is there an advantage to having both a legalIf and a customIf when the customIf can already handle the legal case?
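A hypothetical sketch of the legalIf-plus-customIf variant being discussed; the alignment predicate here is illustrative, not proposed code:

LegalityPredicate PtrVecPred = typeIsLegalPtrVec(0, PtrVecTys, ST);
LoadStoreActions
    .legalIf([=](const LegalityQuery &Q) {
      // Legal outright only for element-aligned accesses; this
      // hardcodes what allowsMemoryAccessForAlignment would check.
      return PtrVecPred(Q) &&
             Q.MMODescrs[0].AlignInBits >=
                 Q.MMODescrs[0].MemoryTy.getScalarSizeInBits();
    })
    .customIf(PtrVecPred); // remaining cases still reach the handler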
LGTM
… if ELEN < XLEN. We need to have elements that can hold a pointer sized element. No test because it crashes in LowerLoad or LowerStore now, which needs to be addressed separately.
42b3d23 to 46e50cd
We need to have elements that can hold a pointer sized element. No test because it crashes in LowerLoad or LowerStore now, which needs to be addressed separately.
I also reordered things so all the vector load/store stuff is together.