
[DomTree] Avoid duplicate hash lookups in runDFS() (NFCI) #96460


Merged: 1 commit into llvm:main on Jun 25, 2024

Conversation

nikic (Contributor) commented on Jun 24, 2024:

runDFS() currently performs three hash table lookups: one in the main loop, one when checking whether a successor has already been visited, and another when adding the parent and reverse children to the successor.

We can avoid the two additional lookups by making the parent number part of the worklist stack, and then making the parent / reverse children update part of the main loop.

The main loop already has a check for already visited nodes, so we don't have to check this in advance; we can simply push the node to the worklist and skip it later.

This results in a minor compile-time improvement: http://llvm-compile-time-tracker.com/compare.php?from=c43d5f540fd43409e7997c9fec97a1d415855b7c&to=1844fc47329a2905bf73b7b86964e8a31fa3b758&stat=instructions:u

nikic requested a review from kuhar on June 24, 2024 at 08:09
llvmbot (Member) commented on Jun 24, 2024:

@llvm/pr-subscribers-llvm-support

Author: Nikita Popov (nikic)



Full diff: https://github.com/llvm/llvm-project/pull/96460.diff

1 file affected:

  • (modified) llvm/include/llvm/Support/GenericDomTreeConstruction.h (+5-16)
diff --git a/llvm/include/llvm/Support/GenericDomTreeConstruction.h b/llvm/include/llvm/Support/GenericDomTreeConstruction.h
index 401cc4eb0ec1b..57cbe993d8739 100644
--- a/llvm/include/llvm/Support/GenericDomTreeConstruction.h
+++ b/llvm/include/llvm/Support/GenericDomTreeConstruction.h
@@ -180,15 +180,17 @@ struct SemiNCAInfo {
                   unsigned AttachToNum,
                   const NodeOrderMap *SuccOrder = nullptr) {
     assert(V);
-    SmallVector<NodePtr, 64> WorkList = {V};
+    SmallVector<std::pair<NodePtr, unsigned>, 64> WorkList = {{V, AttachToNum}};
     NodeToInfo[V].Parent = AttachToNum;
 
     while (!WorkList.empty()) {
-      const NodePtr BB = WorkList.pop_back_val();
+      const auto [BB, ParentNum] = WorkList.pop_back_val();
       auto &BBInfo = NodeToInfo[BB];
+      BBInfo.ReverseChildren.push_back(ParentNum);
 
       // Visited nodes always have positive DFS numbers.
       if (BBInfo.DFSNum != 0) continue;
+      BBInfo.Parent = ParentNum;
       BBInfo.DFSNum = BBInfo.Semi = BBInfo.Label = ++LastNum;
       NumToNode.push_back(BB);
 
@@ -201,22 +203,9 @@ struct SemiNCAInfo {
             });
 
       for (const NodePtr Succ : Successors) {
-        const auto SIT = NodeToInfo.find(Succ);
-        // Don't visit nodes more than once but remember to collect
-        // ReverseChildren.
-        if (SIT != NodeToInfo.end() && SIT->second.DFSNum != 0) {
-          if (Succ != BB) SIT->second.ReverseChildren.push_back(LastNum);
-          continue;
-        }
-
         if (!Condition(BB, Succ)) continue;
 
-        // It's fine to add Succ to the map, because we know that it will be
-        // visited later.
-        auto &SuccInfo = NodeToInfo[Succ];
-        WorkList.push_back(Succ);
-        SuccInfo.Parent = LastNum;
-        SuccInfo.ReverseChildren.push_back(LastNum);
+        WorkList.push_back({Succ, LastNum});
       }
     }
 

@@ -180,15 +180,17 @@ struct SemiNCAInfo {
                   unsigned AttachToNum,
                   const NodeOrderMap *SuccOrder = nullptr) {
     assert(V);
-    SmallVector<NodePtr, 64> WorkList = {V};
+    SmallVector<std::pair<NodePtr, unsigned>, 64> WorkList = {{V, AttachToNum}};
Review comment (Member):
This looks like a pretty large stack allocation (16B * 64 => 1kB?). Is this what we want?

nikic (Contributor, Author) replied:

It's on the bigger side, but as runDFS is essentially a leaf function, I don't think it's a problem.

nikic merged commit 174f80c into llvm:main on Jun 25, 2024
9 checks passed
nikic deleted the dt-dfs branch on June 25, 2024 at 07:23
llvm-ci (Collaborator) commented on Jun 25, 2024:

LLVM Buildbot has detected a new failure on builder clang-cuda-l4 running on cuda-l4-0 while building llvm at step 3 "annotate".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/101/builds/644

Here is the relevant piece of the build log for the reference:

Step 3 (annotate) failure: '/buildbot/cuda-build --jobs=' (failure)
...
+ echo @@@STEP_SUMMARY_TEXT@@@@
+ run ninja check-cuda-simple
+ echo '>>> ' ninja check-cuda-simple
+ ninja check-cuda-simple
@@@BUILD_STEP Testing CUDA test-suite@@@
@@@STEP_SUMMARY_CLEAR@@@
@@@STEP_SUMMARY_TEXT@@@@
>>>  ninja check-cuda-simple
[0/40] cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA && /usr/local/bin/lit -vv -j 1 assert-cuda-11.8-c++11-libc++.test axpy-cuda-11.8-c++11-libc++.test algorithm-cuda-11.8-c++11-libc++.test cmath-cuda-11.8-c++11-libc++.test complex-cuda-11.8-c++11-libc++.test math_h-cuda-11.8-c++11-libc++.test new-cuda-11.8-c++11-libc++.test empty-cuda-11.8-c++11-libc++.test printf-cuda-11.8-c++11-libc++.test future-cuda-11.8-c++11-libc++.test builtin_var-cuda-11.8-c++11-libc++.test test_round-cuda-11.8-c++11-libc++.test
-- Testing: 12 tests, 1 workers --
FAIL: test-suite :: External/CUDA/algorithm-cuda-11.8-c++11-libc++.test (1 of 12)
******************** TEST 'test-suite :: External/CUDA/algorithm-cuda-11.8-c++11-libc++.test' FAILED ********************

/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/timeit-target --timeout 7200 --limit-core 0 --limit-cpu 7200 --limit-file-size 209715200 --limit-rss-size 838860800 --append-exitstatus --redirect-output /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/algorithm-cuda-11.8-c++11-libc++.test.out --redirect-input /dev/null --summary /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/algorithm-cuda-11.8-c++11-libc++.test.time /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/algorithm-cuda-11.8-c++11-libc++
cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA ; /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/algorithm-cuda-11.8-c++11-libc++.test.out algorithm.reference_output-cuda-11.8-c++11-libc++

+ cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA
+ /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/algorithm-cuda-11.8-c++11-libc++.test.out algorithm.reference_output-cuda-11.8-c++11-libc++
/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target: Comparison failed, textual difference between 'C' and 'S'

********************
FAIL: test-suite :: External/CUDA/assert-cuda-11.8-c++11-libc++.test (2 of 12)
******************** TEST 'test-suite :: External/CUDA/assert-cuda-11.8-c++11-libc++.test' FAILED ********************

/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/timeit-target --timeout 7200 --limit-core 0 --limit-cpu 7200 --limit-file-size 209715200 --limit-rss-size 838860800 --append-exitstatus --redirect-output /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/assert-cuda-11.8-c++11-libc++.test.out --redirect-input /dev/null --summary /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/assert-cuda-11.8-c++11-libc++.test.time /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/assert-cuda-11.8-c++11-libc++
cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA ; /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/assert-cuda-11.8-c++11-libc++.test.out assert.reference_output-cuda-11.8-c++11-libc++

+ cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA
+ /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/assert-cuda-11.8-c++11-libc++.test.out assert.reference_output-cuda-11.8-c++11-libc++
/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target: Comparison failed, textual difference between 'e' and 'a'

********************
FAIL: test-suite :: External/CUDA/axpy-cuda-11.8-c++11-libc++.test (3 of 12)
******************** TEST 'test-suite :: External/CUDA/axpy-cuda-11.8-c++11-libc++.test' FAILED ********************

/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/timeit-target --timeout 7200 --limit-core 0 --limit-cpu 7200 --limit-file-size 209715200 --limit-rss-size 838860800 --append-exitstatus --redirect-output /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/axpy-cuda-11.8-c++11-libc++.test.out --redirect-input /dev/null --summary /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/axpy-cuda-11.8-c++11-libc++.test.time /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/axpy-cuda-11.8-c++11-libc++
cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA ; /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/axpy-cuda-11.8-c++11-libc++.test.out axpy.reference_output-cuda-11.8-c++11-libc++

+ cd /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA
+ /buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target /buildbot/cuda-l4-0/work/clang-cuda-l4/build/External/CUDA/Output/axpy-cuda-11.8-c++11-libc++.test.out axpy.reference_output-cuda-11.8-c++11-libc++
/buildbot/cuda-l4-0/work/clang-cuda-l4/build/tools/fpcmp-target: Comparison failed, textual difference between '1' and '2'

********************
PASS: test-suite :: External/CUDA/builtin_var-cuda-11.8-c++11-libc++.test (4 of 12)
********** TEST 'test-suite :: External/CUDA/builtin_var-cuda-11.8-c++11-libc++.test' RESULTS **********
exec_time: 0.0000 
hash: "293d0eb9282156edc5422e7a8c9268e3" 
**********
FAIL: test-suite :: External/CUDA/cmath-cuda-11.8-c++11-libc++.test (5 of 12)

llvm-ci (Collaborator) commented on Jun 25, 2024:

LLVM Buildbot has detected a new failure on builder sanitizer-x86_64-linux-qemu running on sanitizer-buildbot4 while building llvm at step 2 "annotate".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/139/builds/441

Here is the relevant piece of the build log for the reference:

Step 2 (annotate) failure: 'python ../sanitizer_buildbot/sanitizers/zorg/buildbot/builders/sanitizers/buildbot_selector.py' (failure)
...
1 warning generated.
[72/76] Generating ScudoCUnitTest-mips64-Test
[73/76] Generating ScudoCxxUnitTest-mips64-Test
[74/76] Generating ScudoUnitTestsObjects.combined_test.cpp.mips64.o
[75/76] Generating ScudoUnitTest-mips64-Test
[75/76] Running Scudo Standalone tests
llvm-lit: /b/sanitizer-x86_64-linux-qemu/build/llvm-project/llvm/utils/lit/lit/main.py:72: note: The test suite configuration requested an individual test timeout of 0 seconds but a timeout of 900 seconds was requested on the command line. Forcing timeout to be 900 seconds.
-- Testing: 152 tests, 80 workers --
Testing:  0.. 10.. 20.. 30.. 40.. 50.. 60.. 70.. 80.. 90..
TIMEOUT: ScudoStandalone-Unit :: ./ScudoUnitTest-mips64-Test/72/134 (152 of 152)
******************** TEST 'ScudoStandalone-Unit :: ./ScudoUnitTest-mips64-Test/72/134' FAILED ********************
Script(shard):
--
GTEST_OUTPUT=json:/b/sanitizer-x86_64-linux-qemu/build/llvm_build2_debug_mips64_qemu/lib/scudo/standalone/tests/./ScudoUnitTest-mips64-Test-ScudoStandalone-Unit-675603-72-134.json GTEST_SHUFFLE=0 GTEST_TOTAL_SHARDS=134 GTEST_SHARD_INDEX=72 /b/sanitizer-x86_64-linux-qemu/build/qemu_build/qemu-mips64 -L /usr/mips64-linux-gnuabi64 /b/sanitizer-x86_64-linux-qemu/build/llvm_build2_debug_mips64_qemu/lib/scudo/standalone/tests/./ScudoUnitTest-mips64-Test
--

Note: This is test shard 73 of 134.
[==========] Running 2 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 1 test from ScudoCombinedDeathTestBasicCombined15_AndroidConfig
[ RUN      ] ScudoCombinedDeathTestBasicCombined15_AndroidConfig.BasicCombined15
Stats: SizeClassAllocator64: 0M mapped (0M rss) in 15 allocations; remains 15; ReleaseToOsIntervalMs = 1000
  00 (    64): mapped:    256K popped:      13 pushed:       0 inuse:     13 total:    104 releases:      0 last released:      0K latest pushed bytes:      5K region: 0x555676811000 (0x555676808000)
  31 ( 33296): mapped:    256K popped:       1 pushed:       0 inuse:      1 total:      7 releases:      0 last released:      0K latest pushed bytes:    195K region: 0x5555d6818000 (0x5555d6808000)
  32 ( 65552): mapped:    256K popped:       1 pushed:       0 inuse:      1 total:      3 releases:      0 last released:      0K latest pushed bytes:    128K region: 0x5556d6813000 (0x5556d6808000)
Stats: MapAllocator: allocated 81 times (7848K), freed 81 times (7848K), remains 0 (0K) max 0M, Fragmented 0K
Stats: MapAllocatorCache: EntriesCount: 7, MaxEntriesCount: 32, MaxEntrySize: 2097152, ReleaseToOsIntervalMs = 1000
Stats: CacheRetrievalStats: SuccessRate: 0/0 (100.00%)
StartBlockAddress: 0x55576722f000, EndBlockAddress: 0x555767249000, BlockSize: 106496 
StartBlockAddress: 0x55576715f000, EndBlockAddress: 0x55576717d000, BlockSize: 122880 
StartBlockAddress: 0x55576717f000, EndBlockAddress: 0x55576719f000, BlockSize: 131072 
StartBlockAddress: 0x5557671af000, EndBlockAddress: 0x5557671c1000, BlockSize: 73728 
StartBlockAddress: 0x5557671cf000, EndBlockAddress: 0x5557671e3000, BlockSize: 81920 
StartBlockAddress: 0x5557671ef000, EndBlockAddress: 0x555767205000, BlockSize: 90112 
StartBlockAddress: 0x55576720f000, EndBlockAddress: 0x555767227000, BlockSize: 98304 
Stats: Quarantine: batches: 0; bytes: 0 (user: 0); chunks: 0 (capacity: 0); 0% chunks used; 0% memory overhead
Quarantine limits: global: 256K; thread local: 128K
Stats: SharedTSDs: 2 available; total 8
  Shared TSD[0]:
    00 (    64): cached:    9 max:   26
    31 ( 33296): cached:    1 max:    2
    32 ( 65552): cached:    1 max:    2
  Shared TSD[1]:
    No block is cached.
Fragmentation Stats: SizeClassAllocator64: page size = 4096 bytes
  01 (    32): inuse/total blocks:      0/     0 inuse/total pages:      0/     0 inuse bytes:      0K util: 100.00%
  02 (    48): inuse/total blocks:      0/     0 inuse/total pages:      0/     0 inuse bytes:      0K util: 100.00%
  03 (    64): inuse/total blocks:      0/     0 inuse/total pages:      0/     0 inuse bytes:      0K util: 100.00%
  04 (    80): inuse/total blocks:      0/     0 inuse/total pages:      0/     0 inuse bytes:      0K util: 100.00%
Step 24 (scudo debug_mips64_qemu) failure: scudo debug_mips64_qemu (failure)

AlexisPerry pushed a commit to llvm-project-tlp/llvm-project that referenced this pull request Jul 9, 2024
4 participants