fix dependency resolution edge cases #3985

Merged 2 commits on Jan 9, 2022
54 changes: 32 additions & 22 deletions Sources/PackageGraph/Pubgrub/PubgrubDependencyResolver.swift
```diff
@@ -1216,41 +1216,51 @@ internal final class PubGrubPackageContainer

         // Since the pinned version is most likely to succeed, we don't compute bounds for its
         // incompatibilities.
-        return try Array(constraints.map { (constraint: PackageContainerConstraint) -> [Incompatibility] in
-            guard case .versionSet(let vs) = constraint.requirement else {
+        return try constraints.flatMap { constraint -> [Incompatibility] in
+            // We only have version-based requirements at this point.
+            guard case .versionSet(let constraintRequirement) = constraint.requirement else {
                 throw InternalError("Unexpected unversioned requirement: \(constraint)")
             }
             return try constraint.nodes().map { dependencyNode in
                 var terms: OrderedSet<Term> = []
+                // the package version requirement
                 terms.append(Term(node, .exact(version)))
-                terms.append(Term(not: dependencyNode, vs))
+                // the dependency's version requirement
+                terms.append(Term(not: dependencyNode, constraintRequirement))
                 return try Incompatibility(terms, root: root, cause: .dependency(node: node))
             }
-        }.joined())
+        }
```
@tomerd (Contributor, author) commented on Jan 7, 2022:

Bit of style cleanup: uses `flatMap` instead of creating an array and then joining it. No functional change.
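As a standalone illustration of why this is a pure style change (generic Swift values, not SwiftPM's types), the two shapes produce the same array:

```swift
let constraints = [1, 2, 3]

// Old shape: map each element to an array of results, then join the
// nested arrays and materialize the lazy joined sequence as an Array.
let joined = Array(constraints.map { c in [c, c * 10] }.joined())

// New shape: flatMap performs the map-and-flatten in a single step.
let flat = constraints.flatMap { c in [c, c * 10] }

assert(joined == flat)  // both are [1, 10, 2, 20, 3, 30]
```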

```diff
     }

-        let (lowerBounds, upperBounds) = try self.computeBounds(for: node,
-                                                                constraints: constraints,
-                                                                startingWith: version)
+        let (lowerBounds, upperBounds) = try self.computeBounds(
+            for: node,
+            constraints: constraints,
+            startingWith: version
+        )

-        return try constraints.flatMap { constraint in
-            return try constraint.nodes().map { constraintNode in
-                var terms: OrderedSet<Term> = []
-                let lowerBound = lowerBounds[constraintNode] ?? "0.0.0"
-                let upperBound = upperBounds[constraintNode] ?? Version(version.major + 1, 0, 0)
-                assert(lowerBound < upperBound)
-
-                // We only have version-based requirements at this point.
-                guard case .versionSet(let vs) = constraint.requirement else {
-                    throw InternalError("Unexpected unversioned requirement: \(constraint)")
-                }
-
-                let requirement: VersionSetSpecifier = .range(lowerBound ..< upperBound)
-                terms.append(Term(node, requirement))
-                terms.append(Term(not: constraintNode, vs))
-
-                // Make a record for this dependency so we don't have to recompute the bounds when the selected version falls within the bounds.
-                self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)
+        return try constraints.flatMap { constraint -> [Incompatibility] in
+            // We only have version-based requirements at this point.
+            guard case .versionSet(let constraintRequirement) = constraint.requirement else {
+                throw InternalError("Unexpected unversioned requirement: \(constraint)")
+            }
+            return try constraint.nodes().compactMap { constraintNode in
+                // cycle
+                guard node != constraintNode else {
+                    return nil
+                }
+
+                if let lowerBound = lowerBounds[constraintNode], let upperBound = upperBounds[constraintNode] {
+                    assert(lowerBound < upperBound)
+                    let requirement: VersionSetSpecifier = .range(lowerBound ..< upperBound)
+                    self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)
+                }
+
+                var terms: OrderedSet<Term> = []
+                // the package version requirement
+                terms.append(Term(node, .exact(version)))
+                // the dependency's version requirement
+                terms.append(Term(not: constraintNode, constraintRequirement))
```
@tomerd (Contributor, author) commented on Jan 7, 2022:

This is the heart of the change; there are a couple of things going on here that seem wrong:

  1. `requirement` was used for two different purposes: to record the dependency version (`self.emittedIncompatibilities[constraintNode] = requirement.union(emittedIncompatibilities[constraintNode] ?? .empty)`) but also as the resolved version for `node` (`terms.append(Term(node, requirement))`). As far as I understand, that does not really make sense, as it mixes the dependency's (`constraintNode`) version with the package's (`node`) version.

  2. `requirement` is looked up in the bounds computed for the dependencies by `computeBounds`, with `constraintNode` as the key, which is correct. However, if not found, it defaults to `0.0.0 ..< Version(major + 1, 0, 0)`. From what I can see, this defaulting is wrong in two ways. First, it does not work for pre-release versions, since it strips the pre-release extensions: e.g. `.exact("1.0.0-alpha.10")` becomes `.range(0.0.0 ..< 2.0.0)`, which causes downstream issues because we use range checks later in graph resolution, and "1.0.0-alpha.10" is not included in `0.0.0 ..< 2.0.0` under our version-inclusion rules (this is the bug I was chasing that got me here). Second, and maybe slightly worse, the default is based on `version`, which is the version of `node` and has nothing to do with `constraintNode`, so we end up recording (in `self.emittedIncompatibilities`) version ranges that are potentially wrong.

The change suggested here is to always set the `node` term to the exact version we are computing for, which works for both pre-release and non-pre-release versions. It does, however, break a bunch of tests that recorded the `0.0.0 ..< Version(major + 1, 0, 0)` behavior, but maybe that is okay since they are testing for wrong behavior?

@SDGGiesbrecht, very interested in your 👀 on this, as you are probably more familiar with this code/logic than I am. I actually hope I am wrong and this is not as problematic as I think it is :D
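The pre-release exclusion being described can be sketched in isolation. This is a hypothetical, simplified version type, not SwiftPM's `Version`/`VersionSetSpecifier`; the point is only the SemVer-style rule that a range whose bounds carry no pre-release identifiers does not admit any pre-release version, even one whose numeric core falls inside the range:

```swift
struct Version {
    let major: Int, minor: Int, patch: Int
    let prerelease: [String]  // e.g. ["alpha", "10"]
}

func lessThan(_ a: Version, _ b: Version) -> Bool {
    if (a.major, a.minor, a.patch) != (b.major, b.minor, b.patch) {
        return (a.major, a.minor, a.patch) < (b.major, b.minor, b.patch)
    }
    // A pre-release precedes the corresponding release.
    // (Ordering among pre-release identifiers is elided in this sketch.)
    return !a.prerelease.isEmpty && b.prerelease.isEmpty
}

func rangeContains(lower: Version, upper: Version, _ v: Version) -> Bool {
    // The gate at issue: bounds without pre-release identifiers
    // exclude pre-release versions entirely.
    if !v.prerelease.isEmpty && lower.prerelease.isEmpty && upper.prerelease.isEmpty {
        return false
    }
    return !lessThan(v, lower) && lessThan(v, upper)
}

let alpha = Version(major: 1, minor: 0, patch: 0, prerelease: ["alpha", "10"])
let zero  = Version(major: 0, minor: 0, patch: 0, prerelease: [])
let two   = Version(major: 2, minor: 0, patch: 0, prerelease: [])

// 1.0.0-alpha.10 is numerically inside 0.0.0 ..< 2.0.0, yet excluded,
// while a plain 1.5.0 is included.
assert(!rangeContains(lower: zero, upper: two, alpha))
assert(rangeContains(lower: zero, upper: two, Version(major: 1, minor: 5, patch: 0, prerelease: [])))
```

This is why defaulting a missing bound to `0.0.0 ..< Version(major + 1, 0, 0)` loses exact pre-release requirements such as `1.0.0-alpha.10`.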

(Contributor) commented:

I don't have much knowledge here, but I would not be surprised if some of the tests are conceptually wrong and just verifying the implemented behavior. So I wouldn't be concerned about having to change the tests.

(Contributor) commented:

Taking a closer look now. I will get back to you.

(Contributor) commented:

I believe both your observations are correct, @tomerd, and the same goes for your solutions.

Whatever situation used to hit the defaults will now result in nothing being recorded into `emittedIncompatibilities`. At first that concerned me, but after tracking where that leads, it feels safe, even if a little odd. The only two places `emittedIncompatibilities` is read from are:

  • To skip processing a node that has already been handled. Such nodes will be re-processed now, but I don't think that would be a problem.
  • To not trust a pin if we have already encountered anything odd. This too only prevents a shortcut, as the whole graph will be traversed either way: on the one hand not necessarily trying the most probable solution first when it is obvious, or on the other hand not preloading alternatives when overestimating the probability that a solution will pan out.

I would be curious to know how the defaults were reachable in the first place, and whether that could happen in the wild or only in mocked tests that bypass earlier checks.

> I actually hope I am wrong and this is not as problematic as I think it is :D

Yes, that code appears to have been embarrassingly invalid. I think it snuck by everyone for so long because, without the proper incompatibility here, it would just fail to record a short-circuit opportunity and merely be slower, and because the nonsensical extra incompatibility would usually end up so far afield that it would be completely disjoint from the actual problem space.
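The record-keeping discussed here can be sketched as a small memo table of version ranges, where each newly emitted range is unioned into whatever was recorded before, so that a later read can skip work; recording nothing only forgoes the shortcut. These are hypothetical simplified types, not SwiftPM's `VersionSetSpecifier`:

```swift
// A version as a comparable (major, minor, patch) triple.
typealias V = (Int, Int, Int)

struct VersionSet {
    var ranges: [(lower: V, upper: V)]  // kept unmerged for simplicity
    static let empty = VersionSet(ranges: [])

    func union(_ other: VersionSet) -> VersionSet {
        VersionSet(ranges: ranges + other.ranges)
    }

    func contains(_ v: V) -> Bool {
        ranges.contains { $0.lower <= v && v < $0.upper }
    }
}

// Memo table keyed by dependency node, as with emittedIncompatibilities.
var emitted: [String: VersionSet] = [:]

func record(_ node: String, _ range: (lower: V, upper: V)) {
    let requirement = VersionSet(ranges: [range])
    // Union new knowledge into whatever was recorded before.
    emitted[node] = requirement.union(emitted[node] ?? .empty)
}

record("foo", (lower: (1, 0, 0), upper: (2, 0, 0)))
record("foo", (lower: (3, 0, 0), upper: (4, 0, 0)))

// Later reads only skip work when the selected version falls inside
// what was already emitted; an absent entry just means no shortcut.
assert(emitted["foo"]!.contains((1, 5, 0)))
assert(!emitted["foo"]!.contains((2, 5, 0)))
```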


```diff
                 return try Incompatibility(terms, root: root, cause: .dependency(node: node))
             }
```
25 changes: 19 additions & 6 deletions Sources/Workspace/Workspace.swift

```diff
@@ -778,7 +778,7 @@ extension Workspace {
         observabilityScope: ObservabilityScope
     ) throws -> [(PackageReference, Workspace.PackageStateChange)]? {
         // Create cache directories.
-        createCacheDirectories(observabilityScope: observabilityScope)
+        self.createCacheDirectories(observabilityScope: observabilityScope)

         // FIXME: this should not block
         // Load the root manifests and currently checked out manifests.
@@ -793,7 +793,9 @@
         guard let pinsStore = observabilityScope.trap({ try self.pinsStore.load() }) else { return nil }

         // Ensure we don't have any error at this point.
-        guard !observabilityScope.errorsReported else { return nil }
+        guard !observabilityScope.errorsReported else {
+            return nil
+        }
```
(Contributor, PR author) commented:

Formatting only.

```diff
         // Add unversioned constraints for edited packages.
         var updateConstraints = currentManifests.editedPackagesConstraints()
@@ -824,17 +826,28 @@
         // Reset the active resolver.
         self.activeResolver = nil

-        guard !observabilityScope.errorsReported else { return nil }
+        guard !observabilityScope.errorsReported else {
+            return nil
+        }

         if dryRun {
-            return observabilityScope.trap { return try computePackageStateChanges(root: graphRoot, resolvedDependencies: updateResults, updateBranches: true, observabilityScope: observabilityScope) }
+            return observabilityScope.trap {
+                return try self.computePackageStateChanges(root: graphRoot, resolvedDependencies: updateResults, updateBranches: true, observabilityScope: observabilityScope)
+            }
```
(Contributor, PR author) commented:

Formatting only.
```diff
         }

         // Update the checkouts based on new dependency resolution.
         let packageStateChanges = self.updateDependenciesCheckouts(root: graphRoot, updateResults: updateResults, updateBranches: true, observabilityScope: observabilityScope)

         // Load the updated manifests.
         let updatedDependencyManifests = try self.loadDependencyManifests(root: graphRoot, observabilityScope: observabilityScope)
+        // If we have missing packages, something is fundamentally wrong with the resolution of the graph
+        let stillMissingPackages = updatedDependencyManifests.computePackages().missing
+        guard stillMissingPackages.isEmpty else {
+            let missing = stillMissingPackages.map{ $0.description }
+            observabilityScope.emit(error: "exhausted attempts to resolve the dependencies graph, with '\(missing.joined(separator: "', '"))' unresolved.")
+            return nil
+        }
```
(Contributor, PR author) commented:

Added validation, as package update could fail silently without this check.

(Contributor) commented:

This is a great improvement too; the silent failures cause so much confusion.
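The validation pattern being praised can be sketched generically (hypothetical names, not the actual Workspace API): after the operation, compute the set of packages still missing and fail loudly when it is non-empty instead of returning an incomplete result:

```swift
// Hypothetical stand-ins for the workspace types involved.
struct ResolutionResult {
    let resolved: Set<String>
    let required: Set<String>
    var missing: Set<String> { required.subtracting(resolved) }
}

enum ResolutionError: Error {
    case unresolved([String])
}

func validate(_ result: ResolutionResult) throws {
    // Instead of succeeding with an incomplete graph, surface the
    // unresolved packages explicitly.
    let missing = result.missing.sorted()
    guard missing.isEmpty else {
        throw ResolutionError.unresolved(missing)
    }
}

let ok = ResolutionResult(resolved: ["a", "b"], required: ["a", "b"])
try validate(ok)  // passes

let bad = ResolutionResult(resolved: ["a"], required: ["a", "b"])
// validate(bad) throws .unresolved(["b"]) rather than failing silently.
```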


```diff
         // Update the resolved file.
         self.saveResolvedFile(
@@ -1715,7 +1728,7 @@ extension Workspace {
         let dependenciesToLoad = dependenciesRequired.map{ $0.createPackageRef() }.filter { !loadedManifests.keys.contains($0.identity) }
         let dependenciesManifests = try temp_await { self.loadManagedManifests(for: dependenciesToLoad, observabilityScope: observabilityScope, completion: $0) }
         dependenciesManifests.forEach { loadedManifests[$0.key] = $0.value }
-        return pair.item.dependenciesRequired(for: pair.key.productFilter).compactMap { dependency in
+        return dependenciesRequired.compactMap { dependency in
```
(Contributor, PR author) commented:

Small cleanup, as `dependenciesRequired` is already defined above as `pair.item.dependenciesRequired(for: pair.key.productFilter)`.

```diff
             loadedManifests[dependency.identity].flatMap {
                 // we also compare the location as this function may attempt to load
                 // dependencies that have the same identity but from a different location
@@ -2698,7 +2711,7 @@
     ) -> [(PackageReference, PackageStateChange)] {
         // Get the update package states from resolved results.
         guard let packageStateChanges = observabilityScope.trap({
-            try computePackageStateChanges(root: root, resolvedDependencies: updateResults, updateBranches: updateBranches, observabilityScope: observabilityScope)
+            try self.computePackageStateChanges(root: root, resolvedDependencies: updateResults, updateBranches: updateBranches, observabilityScope: observabilityScope)
         }) else {
             return []
         }
```