Commit 07590ac

[MLIR] Fix race condition in MLIR verifier
`failableParallelForEach` will non-deterministically terminate early upon failure, leading to inconsistent and potentially missing diagnostics. This PR uses `parallelForEach` to ensure all operations are verified and all diagnostics are handled, while tracking the failure state separately.

Other potential fixes include:

- Making `failableParallelForEach` have deterministic early-exit behavior (or adding an option for it).
  - I didn't want to change more than what was required (and potentially incur perf hits for unrelated code), but if this is a better fix I'm happy to submit a patch.
  - I think all diagnostics that can be detected from verification failures should be reported, so I don't think early exit would be correct behavior anyway.
- Adding an option for `failableParallelForEach` to still execute on every element of the range while still returning `LogicalResult`.
1 parent 747214e commit 07590ac

File tree

1 file changed: +8 −3 lines

mlir/lib/IR/Verifier.cpp

Lines changed: 8 additions & 3 deletions
```diff
@@ -220,10 +220,15 @@ LogicalResult OperationVerifier::verifyOnExit(Operation &op) {
         o.hasTrait<OpTrait::IsIsolatedFromAbove>())
       opsWithIsolatedRegions.push_back(&o);
   }
-  if (failed(failableParallelForEach(
-          op.getContext(), opsWithIsolatedRegions,
-          [&](Operation *o) { return verifyOpAndDominance(*o); })))
+
+  std::atomic<bool> opFailedVerify = false;
+  parallelForEach(op.getContext(), opsWithIsolatedRegions, [&](Operation *o) {
+    if (failed(verifyOpAndDominance(*o)))
+      opFailedVerify.store(true, std::memory_order_relaxed);
+  });
+  if (opFailedVerify.load(std::memory_order_relaxed))
     return failure();
+
   OperationName opName = op.getName();
   std::optional<RegisteredOperationName> registeredInfo =
       opName.getRegisteredInfo();
```
