LAA: generalize strides over unequal type sizes #108088

Closed. Wants to merge 1 commit.
16 changes: 10 additions & 6 deletions llvm/include/llvm/Analysis/LoopAccessAnalysis.h
@@ -366,16 +366,20 @@ class MemoryDepChecker {

struct DepDistanceStrideAndSizeInfo {
const SCEV *Dist;
uint64_t StrideA;
uint64_t StrideB;
uint64_t MaxStride;
std::optional<uint64_t> CommonStride;
bool ShouldRetryWithRuntimeCheck;
uint64_t TypeByteSize;
bool AIsWrite;
bool BIsWrite;

DepDistanceStrideAndSizeInfo(const SCEV *Dist, uint64_t StrideA,
uint64_t StrideB, uint64_t TypeByteSize,
bool AIsWrite, bool BIsWrite)
: Dist(Dist), StrideA(StrideA), StrideB(StrideB),
DepDistanceStrideAndSizeInfo(const SCEV *Dist, uint64_t MaxStride,
std::optional<uint64_t> CommonStride,
bool ShouldRetryWithRuntimeCheck,
uint64_t TypeByteSize, bool AIsWrite,
bool BIsWrite)
: Dist(Dist), MaxStride(MaxStride), CommonStride(CommonStride),
ShouldRetryWithRuntimeCheck(ShouldRetryWithRuntimeCheck),
TypeByteSize(TypeByteSize), AIsWrite(AIsWrite), BIsWrite(BIsWrite) {}
};

144 changes: 80 additions & 64 deletions llvm/lib/Analysis/LoopAccessAnalysis.cpp
@@ -1799,8 +1799,7 @@ void MemoryDepChecker::mergeInStatus(VectorizationSafetyStatus S) {
/// }
static bool isSafeDependenceDistance(const DataLayout &DL, ScalarEvolution &SE,
const SCEV &MaxBTC, const SCEV &Dist,
uint64_t MaxStride,
Review comment (Contributor): I think the comment above needs updating now that you've removed TypeByteSize and MaxStride is now in bytes.

uint64_t TypeByteSize) {
uint64_t MaxStride) {

// If we can prove that
// (**) |Dist| > MaxBTC * Step
@@ -1819,8 +1818,7 @@ static bool isSafeDependenceDistance(const DataLayout &DL, ScalarEvolution &SE,
// will be executed only if LoopCount >= VF, proving distance >= LoopCount
// also guarantees that distance >= VF.
//
const uint64_t ByteStride = MaxStride * TypeByteSize;
const SCEV *Step = SE.getConstant(MaxBTC.getType(), ByteStride);
const SCEV *Step = SE.getConstant(MaxBTC.getType(), MaxStride);
const SCEV *Product = SE.getMulExpr(&MaxBTC, Step);

const SCEV *CastedDist = &Dist;
@@ -1864,9 +1862,7 @@ static bool areStridedAccessesIndependent(uint64_t Distance, uint64_t Stride,
if (Distance % TypeByteSize)
return false;

uint64_t ScaledDist = Distance / TypeByteSize;

// No dependence if the scaled distance is not multiple of the stride.
// No dependence if the distance is not multiple of the stride.
// E.g.
// for (i = 0; i < 1024 ; i += 4)
// A[i+2] = A[i] + 1;
@@ -1882,7 +1878,7 @@
// Two accesses in memory (scaled distance is 4, stride is 3):
// | A[0] | | | A[3] | | | A[6] | | |
// | | | | | A[4] | | | A[7] | |
return ScaledDist % Stride;
return Distance % Stride;
Review comment (Contributor): It looks like you've changed the definition of Stride to be in bytes, not multiples of the type size. Can you add something to the comment to reflect this?

}

std::variant<MemoryDepChecker::Dependence::DepType,
@@ -1921,6 +1917,7 @@ MemoryDepChecker::getDependenceDistanceStrideAndSize(
if (StrideAPtr && *StrideAPtr < 0) {
std::swap(Src, Sink);
std::swap(AInst, BInst);
std::swap(ATy, BTy);
Review comment (Contributor): This looks like a bug fix and unrelated to this patch - can you commit this in a separate PR please?

std::swap(StrideAPtr, StrideBPtr);
}

@@ -1972,30 +1969,68 @@ MemoryDepChecker::getDependenceDistanceStrideAndSize(
return MemoryDepChecker::Dependence::IndirectUnsafe;
}

int64_t StrideAPtrInt = *StrideAPtr;
int64_t StrideBPtrInt = *StrideBPtr;
LLVM_DEBUG(dbgs() << "LAA: Src induction step: " << StrideAPtrInt
<< " Sink induction step: " << StrideBPtrInt << "\n");
LLVM_DEBUG(dbgs() << "LAA: Src induction step: " << *StrideAPtr
<< " Sink induction step: " << *StrideBPtr << "\n");

// Note that store size is different from alloc size, which is dependent on
// store size. We use the former for checking illegal cases, and the latter
// for scaling strides.

Review comment (Contributor): I'm not sure if I understand the comment "store size is different from alloc size, which is dependent on store size". Do you mean that alloc size could be greater than the store size?

Review comment (Contributor): By "former" I assume you mean store size? I think it's clearer to write out exactly what you're referring to, i.e. "We use the store size for checking illegal cases, and the alloc size for scaling strides."
TypeSize AStoreSz = DL.getTypeStoreSize(ATy),
BStoreSz = DL.getTypeStoreSize(BTy);

// When the distance is zero, we're reading/writing the same memory location:
// check that the store sizes are equal. Otherwise, fail with an unknown
// dependence for which we should not generate runtime checks.
if (Dist->isZero() && AStoreSz != BStoreSz)
return MemoryDepChecker::Dependence::Unknown;

// We can't get a uint64_t for the AllocSize if either of the store sizes
// are scalable.
if (AStoreSz.isScalable() || BStoreSz.isScalable())
return MemoryDepChecker::Dependence::Unknown;

// The TypeByteSize is used to scale Distance and VF. In these contexts, the
// only size that matters is the size of the Sink.
uint64_t ASz = alignTo(AStoreSz, DL.getABITypeAlign(ATy).value()),
TypeByteSize = alignTo(BStoreSz, DL.getABITypeAlign(BTy).value());

// We scale the strides by the alloc-type-sizes, so we can check that the
// common distance is equal when ASz != BSz.
int64_t StrideAScaled = *StrideAPtr * ASz;
int64_t StrideBScaled = *StrideBPtr * TypeByteSize;

// At least Src or Sink are loop invariant and the other is strided or
// invariant. We can generate a runtime check to disambiguate the accesses.
if (!StrideAPtrInt || !StrideBPtrInt)
if (!StrideAScaled || !StrideBScaled)
return MemoryDepChecker::Dependence::Unknown;

// Both Src and Sink have a constant stride, check if they are in the same
// direction.
if ((StrideAPtrInt > 0) != (StrideBPtrInt > 0)) {
if ((StrideAScaled > 0) != (StrideBScaled > 0)) {
LLVM_DEBUG(
dbgs() << "Pointer access with strides in different directions\n");
return MemoryDepChecker::Dependence::Unknown;
}

uint64_t TypeByteSize = DL.getTypeAllocSize(ATy);
bool HasSameSize =
DL.getTypeStoreSizeInBits(ATy) == DL.getTypeStoreSizeInBits(BTy);
if (!HasSameSize)
TypeByteSize = 0;
return DepDistanceStrideAndSizeInfo(Dist, std::abs(StrideAPtrInt),
std::abs(StrideBPtrInt), TypeByteSize,
StrideAScaled = std::abs(StrideAScaled);
StrideBScaled = std::abs(StrideBScaled);

// MaxStride is the max of the scaled strides, as expected.
uint64_t MaxStride = std::max(StrideAScaled, StrideBScaled);

// CommonStride is set if both scaled strides are equal.
std::optional<uint64_t> CommonStride;
if (StrideAScaled == StrideBScaled)
CommonStride = StrideAScaled;

// TODO: Historically, we don't retry with runtime checks unless the unscaled
// strides are the same, but this doesn't make sense. Fix this once the
// condition for runtime checks in isDependent is fixed.
bool ShouldRetryWithRuntimeCheck =
std::abs(*StrideAPtr) == std::abs(*StrideBPtr);

return DepDistanceStrideAndSizeInfo(Dist, MaxStride, CommonStride,
Review comment (Contributor): It looks like you could reduce the complexity of this patch with an initial NFC refactoring PR that changes the constructor of DepDistanceStrideAndSizeInfo to take the MaxStride, CommonStride and ShouldRetryWithRuntimeCheck parameters. Essentially, it's just pushing some of the work into constructing the DepDistanceStrideAndSizeInfo object rather than in MemoryDepChecker::isDependent, which seems a sensible thing to do given we could call isDependent many times on the same object.

ShouldRetryWithRuntimeCheck, TypeByteSize,
AIsWrite, BIsWrite);
}

@@ -2011,32 +2046,28 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
if (std::holds_alternative<Dependence::DepType>(Res))
return std::get<Dependence::DepType>(Res);

auto &[Dist, StrideA, StrideB, TypeByteSize, AIsWrite, BIsWrite] =
auto &[Dist, MaxStride, CommonStride, ShouldRetryWithRuntimeCheck,
TypeByteSize, AIsWrite, BIsWrite] =
std::get<DepDistanceStrideAndSizeInfo>(Res);
bool HasSameSize = TypeByteSize > 0;

std::optional<uint64_t> CommonStride =
StrideA == StrideB ? std::make_optional(StrideA) : std::nullopt;
if (isa<SCEVCouldNotCompute>(Dist)) {
// TODO: Relax requirement that there is a common stride to retry with
// non-constant distance dependencies.
FoundNonConstantDistanceDependence |= CommonStride.has_value();
// TODO: Relax requirement that there is a common unscaled stride to retry
// with non-constant distance dependencies.
FoundNonConstantDistanceDependence |= ShouldRetryWithRuntimeCheck;
Review comment (Contributor): Can we apply the renaming as NFC patch first, so this reduces the diff here?

LLVM_DEBUG(dbgs() << "LAA: Dependence because of uncomputable distance.\n");
return Dependence::Unknown;
}

ScalarEvolution &SE = *PSE.getSE();
auto &DL = InnermostLoop->getHeader()->getDataLayout();
uint64_t MaxStride = std::max(StrideA, StrideB);

// If the distance between the accesses is larger than their maximum absolute
// stride multiplied by the symbolic maximum backedge taken count (which is an
// upper bound of the number of iterations), the accesses are independent, i.e.
// they are far enough apart that accesses won't access the same location
// across all loop iterations.
if (HasSameSize && isSafeDependenceDistance(
DL, SE, *(PSE.getSymbolicMaxBackedgeTakenCount()),
*Dist, MaxStride, TypeByteSize))
if (isSafeDependenceDistance(
DL, SE, *(PSE.getSymbolicMaxBackedgeTakenCount()), *Dist, MaxStride))
return Dependence::NoDep;

const SCEVConstant *ConstDist = dyn_cast<SCEVConstant>(Dist);
Expand All @@ -2047,7 +2078,7 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,

// If the distance between accesses and their strides are known constants,
// check whether the accesses interlace each other.
if (Distance > 0 && CommonStride && CommonStride > 1 && HasSameSize &&
if (Distance > 0 && CommonStride && CommonStride > 1 &&
areStridedAccessesIndependent(Distance, *CommonStride, TypeByteSize)) {
LLVM_DEBUG(dbgs() << "LAA: Strided accesses are independent\n");
return Dependence::NoDep;
@@ -2061,15 +2092,9 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,

// Negative distances are not plausible dependencies.
if (SE.isKnownNonPositive(Dist)) {
if (SE.isKnownNonNegative(Dist)) {
if (HasSameSize) {
// Write to the same location with the same size.
return Dependence::Forward;
}
LLVM_DEBUG(dbgs() << "LAA: possibly zero dependence difference but "
"different type sizes\n");
return Dependence::Unknown;
}
if (SE.isKnownNonNegative(Dist))
// Write to the same location.
return Dependence::Forward;

bool IsTrueDataDependence = (AIsWrite && !BIsWrite);
// Check if the first access writes to a location that is read in a later
@@ -2084,13 +2109,12 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
if (!ConstDist) {
// TODO: FoundNonConstantDistanceDependence is used as a necessary
// condition to consider retrying with runtime checks. Historically, we
// did not set it when strides were different but there is no inherent
// reason to.
FoundNonConstantDistanceDependence |= CommonStride.has_value();
// did not set it when unscaled strides were different but there is no
// inherent reason to.
FoundNonConstantDistanceDependence |= ShouldRetryWithRuntimeCheck;
return Dependence::Unknown;
}
if (!HasSameSize ||
couldPreventStoreLoadForward(
if (couldPreventStoreLoadForward(
ConstDist->getAPInt().abs().getZExtValue(), TypeByteSize)) {
LLVM_DEBUG(
dbgs() << "LAA: Forward but may prevent st->ld forwarding\n");
@@ -2105,27 +2129,20 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
int64_t MinDistance = SE.getSignedRangeMin(Dist).getSExtValue();
// Below we only handle strictly positive distances.
if (MinDistance <= 0) {
FoundNonConstantDistanceDependence |= CommonStride.has_value();
FoundNonConstantDistanceDependence |= ShouldRetryWithRuntimeCheck;
return Dependence::Unknown;
}

if (!ConstDist) {
if (!ConstDist)
// Previously this case would be treated as Unknown, possibly setting
// FoundNonConstantDistanceDependence to force re-trying with runtime
// checks. Until the TODO below is addressed, set it here to preserve
// original behavior w.r.t. re-trying with runtime checks.
// TODO: FoundNonConstantDistanceDependence is used as a necessary
// condition to consider retrying with runtime checks. Historically, we
// did not set it when strides were different but there is no inherent
// reason to.
FoundNonConstantDistanceDependence |= CommonStride.has_value();
}

if (!HasSameSize) {
LLVM_DEBUG(dbgs() << "LAA: ReadWrite-Write positive dependency with "
"different type sizes\n");
return Dependence::Unknown;
}
// did not set it when unscaled strides were different but there is no
// inherent reason to.
FoundNonConstantDistanceDependence |= ShouldRetryWithRuntimeCheck;

if (!CommonStride)
return Dependence::Unknown;
@@ -2140,8 +2157,8 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,

// It's not vectorizable if the distance is smaller than the minimum distance
// needed for a vectorized/unrolled version. Vectorizing one iteration in
// front needs TypeByteSize * Stride. Vectorizing the last iteration needs
// TypeByteSize (No need to plus the last gap distance).
// front needs CommonStride. Vectorizing the last iteration needs TypeByteSize
// (no need to add the last gap distance).
//
// E.g. Assume one char is 1 byte in memory and one int is 4 bytes.
// foo(int *A) {
Expand All @@ -2168,8 +2185,7 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
// We know that Dist is positive, but it may not be constant. Use the signed
// minimum for computations below, as this ensures we compute the closest
// possible dependence distance.
uint64_t MinDistanceNeeded =
TypeByteSize * *CommonStride * (MinNumIter - 1) + TypeByteSize;
Review comment (Contributor): Changing CommonStride to be scaled seems like an improvement that could be split off as NFC, reducing the diff?

uint64_t MinDistanceNeeded = *CommonStride * (MinNumIter - 1) + TypeByteSize;
if (MinDistanceNeeded > static_cast<uint64_t>(MinDistance)) {
if (!ConstDist) {
// For non-constant distances, we checked the lower bound of the
@@ -2225,7 +2241,7 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,

// An update to MinDepDistBytes requires an update to MaxSafeVectorWidthInBits
// since there is a backwards dependency.
uint64_t MaxVF = MinDepDistBytes / (TypeByteSize * *CommonStride);
uint64_t MaxVF = MinDepDistBytes / *CommonStride;
LLVM_DEBUG(dbgs() << "LAA: Positive min distance " << MinDistance
<< " with max VF = " << MaxVF << '\n');

10 changes: 1 addition & 9 deletions llvm/test/Analysis/LoopAccessAnalysis/depend_diff_types.ll
@@ -129,16 +129,8 @@ define void @neg_dist_dep_type_size_equivalence(ptr nocapture %vec, i64 %n) {
; CHECK-LABEL: 'neg_dist_dep_type_size_equivalence'
; CHECK-NEXT: loop:
; CHECK-NEXT: Report: unsafe dependent memory operations in loop. Use #pragma clang loop distribute(enable) to allow loop distribution to attempt to isolate the offending operations into a separate loop
; CHECK-NEXT: Unknown data dependence.
; CHECK-NEXT: Backward loop carried data dependence that prevents store-to-load forwarding.
; CHECK-NEXT: Dependences:
; CHECK-NEXT: Unknown:
; CHECK-NEXT: %ld.f64 = load double, ptr %gep.iv, align 8 ->
; CHECK-NEXT: store i32 %ld.i64.i32, ptr %gep.iv.n.i64, align 8
; CHECK-EMPTY:
; CHECK-NEXT: Unknown:
; CHECK-NEXT: %ld.i64 = load i64, ptr %gep.iv, align 8 ->
; CHECK-NEXT: store i32 %ld.i64.i32, ptr %gep.iv.n.i64, align 8
; CHECK-EMPTY:
; CHECK-NEXT: BackwardVectorizableButPreventsForwarding:
; CHECK-NEXT: %ld.f64 = load double, ptr %gep.iv, align 8 ->
; CHECK-NEXT: store double %val, ptr %gep.iv.101.i64, align 8
4 changes: 0 additions & 4 deletions llvm/test/Analysis/LoopAccessAnalysis/forward-loop-carried.ll
@@ -70,10 +70,6 @@ define void @forward_different_access_sizes(ptr readnone %end, ptr %start) {
; CHECK-NEXT: store i32 0, ptr %gep.2, align 4 ->
; CHECK-NEXT: %l = load i24, ptr %gep.1, align 1
; CHECK-EMPTY:
; CHECK-NEXT: Forward:
; CHECK-NEXT: store i32 0, ptr %gep.2, align 4 ->
; CHECK-NEXT: store i24 %l, ptr %ptr.iv, align 1
; CHECK-EMPTY:
; CHECK-NEXT: Run-time memory checks:
; CHECK-NEXT: Grouped accesses:
; CHECK-EMPTY: