This repository was archived by the owner on Nov 30, 2024. It is now read-only.
To speed up bisection of complex dependent failures, a recursive bisection
strategy is introduced in place of permutation searching. The strategy works
as follows:
1. Split the candidate set into two slices
2. Test each slice with the expected failure(s)
3. If either slice can be ignored (the failure still reproduces without it):
- recurse on the other slice
4. If neither slice can be ignored, for each slice:
- recurse on slice, retaining other slice for test runs
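The steps above can be sketched as follows. This is a minimal illustration, not the actual implementation; `reproduces` is a hypothetical oracle that runs the given candidates together with the expected failure and reports whether the failure still occurs:

```python
def bisect(candidates, retained, reproduces):
    # `reproduces(subset)` is an assumed test oracle: it runs the given
    # candidates (plus the expected failure) and returns True when the
    # expected failure still occurs.
    if len(candidates) <= 1:
        return list(candidates)  # terminate: nothing left to split
    mid = len(candidates) // 2
    lhs, rhs = candidates[:mid], candidates[mid:]
    lhs_fails = reproduces(lhs + retained)
    rhs_fails = reproduces(rhs + retained)
    if lhs_fails and not rhs_fails:
        return bisect(lhs, retained, reproduces)  # RHS can be ignored
    if rhs_fails and not lhs_fails:
        return bisect(rhs, retained, reproduces)  # LHS can be ignored
    # Neither slice can be ignored on its own: recurse on each slice,
    # retaining the other slice for the nested test runs.
    return (bisect(lhs, rhs + retained, reproduces)
            + bisect(rhs, lhs + retained, reproduces))
```

For a failure that depends on candidates 3 and 6 together, `bisect(list(range(1, 9)), [], lambda s: 3 in s and 6 in s)` returns `[3, 6]`, matching the worked example below.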
Graphically, this looks somewhat like:
F = expected failure
P = unexpected pass
X = culprit
- = innocent
[ ] = current bisection scope
 1 2 3 4 5 6 7 8
[- - X - - X - -]  F  # Initial run
[        - X - -]  F  # RHS0 implicated
[- - X -        ]  F  # LHS0 implicated
=> (1) recurse on LHS0, fixing RHS0
   [    X -]- X - -  F  # RHS1 implicated
   [- -    ]- X - -  P  # LHS1 can be ignored
   => (2) recurse on RHS1, retaining fixed set
      [  -]- X - -  P  # LHS2 implicated
      [X  ]- X - -  F  # RHS2 can be ignored
      => (3) recurse on LHS2
      => terminate as candidates.length == 1
=> (1) recurse on RHS0, fixing LHS0
   - - X -[    - -]  P  # LHS3 implicated
   - - X -[- X    ]  F  # RHS3 can be ignored
   => (4) recurse on LHS3
      - - X -[  X]  F  # LHS4 can be ignored
      - - X -[-  ]  P  # RHS4 implicated
      => (5) recurse on RHS4
      => terminate as candidates.length == 1
=> return LHS2 + RHS4 = [3, 6]