
[RISCV][GISel] Support unaligned-scalar-mem. #108905


Merged: 3 commits, Sep 18, 2024
62 changes: 38 additions & 24 deletions llvm/lib/Target/RISCV/GISel/RISCVLegalizerInfo.cpp
@@ -287,34 +287,48 @@ RISCVLegalizerInfo::RISCVLegalizerInfo(const RISCVSubtarget &ST)

auto &LoadActions = getActionDefinitionsBuilder(G_LOAD);
auto &StoreActions = getActionDefinitionsBuilder(G_STORE);
auto &ExtLoadActions = getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD});

LoadActions
.legalForTypesWithMemDesc({{s32, p0, s8, 8},
{s32, p0, s16, 16},
{s32, p0, s32, 32},
{p0, p0, sXLen, XLen}});
StoreActions
.legalForTypesWithMemDesc({{s32, p0, s8, 8},
{s32, p0, s16, 16},
{s32, p0, s32, 32},
{p0, p0, sXLen, XLen}});
auto &ExtLoadActions =
getActionDefinitionsBuilder({G_SEXTLOAD, G_ZEXTLOAD})
.legalForTypesWithMemDesc({{s32, p0, s8, 8}, {s32, p0, s16, 16}});
// Return the alignment needed for scalar memory ops. If unaligned scalar mem
// is supported, we only require byte alignment. Otherwise, we need the memory
// op to be natively aligned.
auto getScalarMemAlign = [&ST](unsigned Size) {
return ST.enableUnalignedScalarMem() ? 8 : Size;
};
Comment on lines +292 to +297 (Contributor):
We really need to replace the load/store legalization. You shouldn't have to do all this to support decomposing unaligned accesses. Currently we're abusing the register types for this, when it's really a property of the in-memory type, which differs in the extend/truncate case. I have an old patch I need to resurrect to move all of this into a new lowering action.
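
To make the reviewer's point concrete, here is a small MIR sketch (illustrative only, not taken from this patch; the virtual register numbers are arbitrary). In an extending load, the register type and the in-memory type differ, so a legality rule keyed on register types alone cannot directly express a constraint on the memory side:

  ; s64 is the register type; s16 is the in-memory type being extended.
  ; The alignment belongs to the (load (s16)) memory operand, not to s64.
  %1:_(s64) = G_SEXTLOAD %0(p0) :: (load (s16), align 1)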


LoadActions.legalForTypesWithMemDesc(
{{s32, p0, s8, getScalarMemAlign(8)},
{s32, p0, s16, getScalarMemAlign(16)},
{s32, p0, s32, getScalarMemAlign(32)},
{p0, p0, sXLen, getScalarMemAlign(XLen)}});
StoreActions.legalForTypesWithMemDesc(
{{s32, p0, s8, getScalarMemAlign(8)},
{s32, p0, s16, getScalarMemAlign(16)},
{s32, p0, s32, getScalarMemAlign(32)},
{p0, p0, sXLen, getScalarMemAlign(XLen)}});
ExtLoadActions.legalForTypesWithMemDesc(
{{s32, p0, s8, getScalarMemAlign(8)},
{s32, p0, s16, getScalarMemAlign(16)}});
if (XLen == 64) {
LoadActions.legalForTypesWithMemDesc({{s64, p0, s8, 8},
{s64, p0, s16, 16},
{s64, p0, s32, 32},
{s64, p0, s64, 64}});
StoreActions.legalForTypesWithMemDesc({{s64, p0, s8, 8},
{s64, p0, s16, 16},
{s64, p0, s32, 32},
{s64, p0, s64, 64}});
LoadActions.legalForTypesWithMemDesc(
{{s64, p0, s8, getScalarMemAlign(8)},
{s64, p0, s16, getScalarMemAlign(16)},
{s64, p0, s32, getScalarMemAlign(32)},
{s64, p0, s64, getScalarMemAlign(64)}});
StoreActions.legalForTypesWithMemDesc(
{{s64, p0, s8, getScalarMemAlign(8)},
{s64, p0, s16, getScalarMemAlign(16)},
{s64, p0, s32, getScalarMemAlign(32)},
{s64, p0, s64, getScalarMemAlign(64)}});
ExtLoadActions.legalForTypesWithMemDesc(
{{s64, p0, s8, 8}, {s64, p0, s16, 16}, {s64, p0, s32, 32}});
{{s64, p0, s8, getScalarMemAlign(8)},
{s64, p0, s16, getScalarMemAlign(16)},
{s64, p0, s32, getScalarMemAlign(32)}});
} else if (ST.hasStdExtD()) {
LoadActions.legalForTypesWithMemDesc({{s64, p0, s64, 64}});
StoreActions.legalForTypesWithMemDesc({{s64, p0, s64, 64}});
LoadActions.legalForTypesWithMemDesc(
{{s64, p0, s64, getScalarMemAlign(64)}});
StoreActions.legalForTypesWithMemDesc(
{{s64, p0, s64, getScalarMemAlign(64)}});
}

// Vector loads/stores.
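Before moving on to the updated tests, here is a standalone sketch of the rule the getScalarMemAlign lambda encodes (plain C++, compilable on its own; scalarMemAlign is a made-up stand-in for the patch's lambda, with the subtarget query replaced by a bool parameter):

  #include <cstdio>
  #include <initializer_list>

  // Byte alignment (8 bits) suffices when the subtarget tolerates unaligned
  // scalar accesses; otherwise the access must be naturally aligned, i.e.
  // aligned to its own size in bits.
  static unsigned scalarMemAlign(bool UnalignedOK, unsigned SizeBits) {
    return UnalignedOK ? 8 : SizeBits;
  }

  int main() {
    for (unsigned Size : {8u, 16u, 32u, 64u})
      std::printf("s%u: %u-bit alignment required normally, %u-bit with "
                  "+unaligned-scalar-mem\n",
                  Size, scalarMemAlign(false, Size), scalarMemAlign(true, Size));
    return 0;
  }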
@@ -1,6 +1,8 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -mtriple=riscv32 -run-pass=legalizer %s -o - \
# RUN: | FileCheck %s
# RUN: llc -mtriple=riscv32 -mattr=+unaligned-scalar-mem -run-pass=legalizer %s -o - \
# RUN: | FileCheck %s --check-prefix=UNALIGNED

---
name: load_i8
@@ -26,6 +28,14 @@ body: |
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s8))
; CHECK-NEXT: $x10 = COPY [[LOAD]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i8
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s8))
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s8) = G_LOAD %0(p0) :: (load (s8))
%2:_(s32) = G_ANYEXT %1(s8)
@@ -57,6 +67,14 @@ body: |
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s16))
; CHECK-NEXT: $x10 = COPY [[LOAD]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i16
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s16))
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s16) = G_LOAD %0(p0) :: (load (s16))
%2:_(s32) = G_ANYEXT %1(s16)
@@ -87,6 +105,14 @@ body: |
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32))
; CHECK-NEXT: $x10 = COPY [[LOAD]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i32
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32))
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s32) = G_LOAD %0(p0) :: (load (s32))
$x10 = COPY %1(s32)
@@ -122,6 +148,18 @@ body: |
; CHECK-NEXT: $x10 = COPY [[LOAD]](s32)
; CHECK-NEXT: $x11 = COPY [[LOAD1]](s32)
; CHECK-NEXT: PseudoRET implicit $x10, implicit $x11
;
; UNALIGNED-LABEL: name: load_i64
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32), align 8)
; UNALIGNED-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
; UNALIGNED-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C]](s32)
; UNALIGNED-NEXT: [[LOAD1:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p0) :: (load (s32) from unknown-address + 4)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: $x11 = COPY [[LOAD1]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10, implicit $x11
%0:_(p0) = COPY $x10
%1:_(s64) = G_LOAD %0(p0) :: (load (s64))
%2:_(s32), %3:_(s32) = G_UNMERGE_VALUES %1(s64)
@@ -153,6 +191,14 @@ body: |
; CHECK-NEXT: [[LOAD:%[0-9]+]]:_(p0) = G_LOAD [[COPY]](p0) :: (load (p0), align 8)
; CHECK-NEXT: $x10 = COPY [[LOAD]](p0)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_ptr
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(p0) = G_LOAD [[COPY]](p0) :: (load (p0), align 8)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](p0)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(p0) = G_LOAD %0(p0) :: (load (p0), align 8)
$x10 = COPY %1(p0)
@@ -189,6 +235,14 @@ body: |
; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = G_OR [[SHL]], [[ZEXTLOAD]]
; CHECK-NEXT: $x10 = COPY [[OR]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i16_unaligned
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s16), align 1)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s16) = G_LOAD %0(p0) :: (load (s16), align 1)
%2:_(s32) = G_ANYEXT %1(s16)
@@ -237,6 +291,14 @@ body: |
; CHECK-NEXT: [[OR2:%[0-9]+]]:_(s32) = G_OR [[SHL2]], [[OR]]
; CHECK-NEXT: $x10 = COPY [[OR2]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i32_unaligned
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32), align 1)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s32) = G_LOAD %0(p0) :: (load (s32), align 1)
$x10 = COPY %1(s32)
@@ -272,6 +334,14 @@ body: |
; CHECK-NEXT: [[OR:%[0-9]+]]:_(s32) = G_OR [[SHL]], [[ZEXTLOAD]]
; CHECK-NEXT: $x10 = COPY [[OR]](s32)
; CHECK-NEXT: PseudoRET implicit $x10
;
; UNALIGNED-LABEL: name: load_i32_align2
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32), align 2)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10
%0:_(p0) = COPY $x10
%1:_(s32) = G_LOAD %0(p0) :: (load (s32), align 2)
$x10 = COPY %1(s32)
@@ -343,6 +413,18 @@ body: |
; CHECK-NEXT: $x10 = COPY [[OR2]](s32)
; CHECK-NEXT: $x11 = COPY [[OR5]](s32)
; CHECK-NEXT: PseudoRET implicit $x10, implicit $x11
;
; UNALIGNED-LABEL: name: load_i64_unaligned
; UNALIGNED: liveins: $x10
; UNALIGNED-NEXT: {{ $}}
; UNALIGNED-NEXT: [[COPY:%[0-9]+]]:_(p0) = COPY $x10
; UNALIGNED-NEXT: [[LOAD:%[0-9]+]]:_(s32) = G_LOAD [[COPY]](p0) :: (load (s32), align 1)
; UNALIGNED-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 4
; UNALIGNED-NEXT: [[PTR_ADD:%[0-9]+]]:_(p0) = G_PTR_ADD [[COPY]], [[C]](s32)
; UNALIGNED-NEXT: [[LOAD1:%[0-9]+]]:_(s32) = G_LOAD [[PTR_ADD]](p0) :: (load (s32) from unknown-address + 4, align 1)
; UNALIGNED-NEXT: $x10 = COPY [[LOAD]](s32)
; UNALIGNED-NEXT: $x11 = COPY [[LOAD1]](s32)
; UNALIGNED-NEXT: PseudoRET implicit $x10, implicit $x11
%0:_(p0) = COPY $x10
%1:_(s64) = G_LOAD %0(p0) :: (load (s64), align 1)
%2:_(s32), %3:_(s32) = G_UNMERGE_VALUES %1(s64)