
compiler: threaded codegen (and more goodies) #24124


Open

mlugg wants to merge 31 commits into master from better-backend-pipeline-2

Conversation

@mlugg (Member) commented Jun 8, 2025

This branch achieves a few things. Here they are, in increasing order of how cool I find them.

The way Compilation handles caching is cleaned up. There are now 3 cache modes: .none, .incremental, and .whole. The latter two always emit all artifacts to the cache directory, while .none always emits all artifacts to user-specified paths. Direct CLI invocations usually use .none, except for zig run and zig test, which use .whole by default. For more details, check out commit fa5b3a1.

The separation between LLVM and LLD in the pipeline is improved, and some legacy cruft is removed from the linker implementations. Notably, our LLD integration is fully factored out into a new link implementation, link.Lld, instead of being mixed into link.Elf/link.Coff/link.Wasm. Also, there's now only one place for the LLVM object to live (a field on Zcu) instead of each linker having its own field for it.

The std.Progress output of the compiler is enhanced. In particular, code generation and linking are separated, flush appears under linking, and we provide estimated totals for codegen/link based on what's queued. This leads directly into the final, and by far most interesting, point...


Here's the big one: the "backend" of the Compilation pipeline is reworked to separate the codegen and link phases. This allows code generation to run on a separate thread from linking. This works by having "codegen workers" (of which there can be arbitrarily many, running in parallel) consume AIR and emit MIR. Then, the single linker thread consumes that MIR and emits it to the binary.

The .separate_thread backend feature is repurposed to mean that a backend supports this pattern. Unfortunately, this is more difficult for backends to support, because it requires that CodeGen does not reference any link.File state (instead leaving such work until Emit, which happens on the linker thread). However, this PR performs that work for the C backend, the Wasm backend, and the x86_64 backend.
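
To make the shape of that concrete, here is a toy sketch of the 1:N:1 handoff. Everything here is invented for illustration (`MirQueue`, `codegenWorker`, `linkerThread`, and the stub `Air`/`Mir` types); the compiler's real work queue is more sophisticated:

```zig
const std = @import("std");

// Stand-ins for the real compiler types, invented for this sketch.
const Air = struct { func_index: u32 };
const Mir = struct { func_index: u32, code: []const u8 };

// A toy MIR handoff queue guarded by a mutex and condition variable.
const MirQueue = struct {
    mutex: std.Thread.Mutex = .{},
    cond: std.Thread.Condition = .{},
    buf: [16]Mir = undefined,
    len: usize = 0,
    closed: bool = false,

    fn push(q: *MirQueue, mir: Mir) void {
        q.mutex.lock();
        defer q.mutex.unlock();
        q.buf[q.len] = mir; // toy code: assumes the buffer never fills
        q.len += 1;
        q.cond.signal();
    }

    fn pop(q: *MirQueue) ?Mir {
        q.mutex.lock();
        defer q.mutex.unlock();
        while (q.len == 0) {
            if (q.closed) return null;
            q.cond.wait(&q.mutex);
        }
        q.len -= 1;
        return q.buf[q.len];
    }

    fn close(q: *MirQueue) void {
        q.mutex.lock();
        defer q.mutex.unlock();
        q.closed = true;
        q.cond.broadcast();
    }
};

// "CodeGen": lower AIR to MIR without touching any link.File state.
fn codegenWorker(q: *MirQueue, funcs: []const Air) void {
    for (funcs) |air| {
        q.push(.{ .func_index = air.func_index, .code = "...mir..." });
    }
}

// "Emit": the single linker thread serializes MIR into the binary.
fn linkerThread(q: *MirQueue) void {
    while (q.pop()) |mir| {
        std.debug.print("linking func {d} ({d} bytes)\n", .{ mir.func_index, mir.code.len });
    }
}

pub fn main() !void {
    var q: MirQueue = .{};
    const funcs = [_]Air{ .{ .func_index = 0 }, .{ .func_index = 1 } };
    const worker = try std.Thread.spawn(.{}, codegenWorker, .{ &q, &funcs });
    const linker = try std.Thread.spawn(.{}, linkerThread, .{&q});
    worker.join();
    q.close();
    linker.join();
}
```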

Details can mostly be found in commit messages and the code. But that's not what you're here for...

First, here's a demo of this branch building the compiler: asciinema. This shows you the parallelism and the progress output quite nicely.

Now for the benchmarks! The TL;DR is that (with a warm cache) I see anywhere from a 1% to a 55% speed boost, depending on the code being built. However, please bear in mind that I think these are at least somewhat IO-bound, so results may vary.

Build Behavior Tests

This includes the x86_64-specific tests, which account for by far the bulk of the time.

Benchmark 1 (3 runs): ../master/build/stage3/bin/zig test test/behavior.zig -femit-bin=b --test-no-exec
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          46.8s  ±  129ms    46.7s  … 46.9s           0 ( 0%)        0%
  peak_rss           2.34GB ± 2.03MB    2.34GB … 2.34GB          0 ( 0%)        0%
  cpu_cycles          218G  ±  376M      217G  …  218G           0 ( 0%)        0%
  instructions        489G  ± 14.4M      489G  …  489G           0 ( 0%)        0%
  cache_references   16.2G  ± 80.5M     16.1G  … 16.2G           0 ( 0%)        0%
  cache_misses        930M  ± 26.2M      905M  …  958M           0 ( 0%)        0%
  branch_misses       675M  ± 6.99M      670M  …  683M           0 ( 0%)        0%
Benchmark 2 (3 runs): ./stage4-release/bin/zig test test/behavior.zig -femit-bin=b --test-no-exec
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          20.7s  ± 94.5ms    20.6s  … 20.8s           0 ( 0%)        ⚡- 55.8% ±  0.5%
  peak_rss           2.72GB ± 12.7MB    2.71GB … 2.74GB          0 ( 0%)        💩+ 16.3% ±  0.9%
  cpu_cycles          183G  ±  374M      182G  …  183G           0 ( 0%)        ⚡- 16.0% ±  0.4%
  instructions        346G  ± 18.4M      346G  …  346G           0 ( 0%)        ⚡- 29.2% ±  0.0%
  cache_references   11.3G  ± 25.1M     11.3G  … 11.3G           0 ( 0%)        ⚡- 30.1% ±  0.8%
  cache_misses        314M  ± 6.92M      308M  …  321M           0 ( 0%)        ⚡- 66.2% ±  4.7%
  branch_misses       374M  ± 1.27M      372M  …  375M           0 ( 0%)        ⚡- 44.6% ±  1.7%

Build Compiler (-Ddev=x86_64-linux)

Benchmark 1 (3 runs): ../master/build/stage3/bin/zig build-exe [...]
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          10.4s  ± 36.4ms    10.4s  … 10.4s           0 ( 0%)        0%
  peak_rss            813MB ± 7.73MB     804MB …  818MB          0 ( 0%)        0%
  cpu_cycles         63.9G  ±  144M     63.8G  … 64.0G           0 ( 0%)        0%
  instructions        140G  ± 1.89M      140G  …  140G           0 ( 0%)        0%
  cache_references   3.47G  ± 10.1M     3.46G  … 3.48G           0 ( 0%)        0%
  cache_misses        232M  ± 1.23M      231M  …  233M           0 ( 0%)        0%
  branch_misses       226M  ±  500K      226M  …  227M           0 ( 0%)        0%
Benchmark 2 (3 runs): ./stage4-release/bin/zig build-exe [...]
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          9.25s  ± 24.3ms    9.24s  … 9.28s           0 ( 0%)        ⚡- 11.1% ±  0.7%
  peak_rss            879MB ± 23.8MB     852MB …  896MB          0 ( 0%)        💩+  8.1% ±  4.9%
  cpu_cycles         60.1G  ± 57.6M     60.0G  … 60.2G           0 ( 0%)        ⚡-  5.9% ±  0.4%
  instructions        119G  ± 1.77M      119G  …  119G           0 ( 0%)        ⚡- 14.8% ±  0.0%
  cache_references   3.21G  ± 11.7M     3.20G  … 3.22G           0 ( 0%)        ⚡-  7.5% ±  0.7%
  cache_misses        222M  ±  621K      221M  …  222M           0 ( 0%)        ⚡-  4.5% ±  1.0%
  branch_misses       205M  ±  669K      205M  …  206M           0 ( 0%)        ⚡-  9.2% ±  0.6%

Build Compiler (Full)

Benchmark 1 (3 runs): ../master/build/stage3/bin/zig [...]
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          18.2s  ± 21.5ms    18.2s  … 18.2s           0 ( 0%)        0%
  peak_rss           1.19GB ±  251KB    1.19GB … 1.19GB          0 ( 0%)        0%
  cpu_cycles          115G  ± 98.1M      115G  …  115G           0 ( 0%)        0%
  instructions        248G  ± 1.91M      248G  …  248G           0 ( 0%)        0%
  cache_references   6.79G  ± 21.0M     6.77G  … 6.81G           0 ( 0%)        0%
  cache_misses        449M  ± 4.30M      444M  …  452M           0 ( 0%)        0%
  branch_misses       449M  ± 2.19M      447M  …  451M           0 ( 0%)        0%
Benchmark 2 (3 runs): ./stage4-release/bin/zig build-exe [...]
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          14.8s  ± 32.4ms    14.8s  … 14.9s           0 ( 0%)        ⚡- 18.6% ±  0.3%
  peak_rss           1.41GB ± 20.1MB    1.39GB … 1.43GB          0 ( 0%)        💩+ 18.3% ±  2.7%
  cpu_cycles          108G  ±  184M      107G  …  108G           0 ( 0%)        ⚡-  6.5% ±  0.3%
  instructions        202G  ± 24.4M      202G  …  202G           0 ( 0%)        ⚡- 18.5% ±  0.0%
  cache_references   6.34G  ± 12.7M     6.33G  … 6.35G           0 ( 0%)        ⚡-  6.7% ±  0.6%
  cache_misses        446M  ± 3.01M      444M  …  450M           0 ( 0%)          -  0.5% ±  1.9%
  branch_misses       408M  ±  642K      407M  …  408M           0 ( 0%)        ⚡-  9.2% ±  0.8%

Build Hello World

Benchmark 1 (14 runs): ../master/build/stage3/bin/zig build-exe /home/mlugg/test/hello.zig
  measurement          mean ± σ            min … max           outliers         delta
  wall_time           365ms ± 4.01ms     357ms …  370ms          0 ( 0%)        0%
  peak_rss            136MB ±  469KB     136MB …  137MB          0 ( 0%)        0%
  cpu_cycles         1.65G  ± 20.3M     1.62G  … 1.69G           0 ( 0%)        0%
  instructions       3.20G  ± 64.9K     3.20G  … 3.20G           0 ( 0%)        0%
  cache_references    114M  ±  638K      112M  …  115M           0 ( 0%)        0%
  cache_misses       10.6M  ±  215K     10.1M  … 11.0M           1 ( 7%)        0%
  branch_misses      9.83M  ± 65.7K     9.71M  … 9.93M           0 ( 0%)        0%
Benchmark 2 (20 runs): ./stage4-release/bin/zig build-exe /home/mlugg/test/hello.zig
  measurement          mean ± σ            min … max           outliers         delta
  wall_time           251ms ± 3.91ms     245ms …  258ms          0 ( 0%)        ⚡- 31.1% ±  0.8%
  peak_rss            150MB ±  851KB     148MB …  152MB          0 ( 0%)        💩+ 10.0% ±  0.4%
  cpu_cycles         1.51G  ± 23.1M     1.46G  … 1.56G           0 ( 0%)        ⚡-  8.9% ±  0.9%
  instructions       2.50G  ±  134K     2.50G  … 2.50G           1 ( 5%)        ⚡- 21.9% ±  0.0%
  cache_references    104M  ± 1.25M      103M  …  108M           0 ( 0%)        ⚡-  8.1% ±  0.7%
  cache_misses       10.0M  ±  229K     9.59M  … 10.5M           0 ( 0%)        ⚡-  5.6% ±  1.5%
  branch_misses      8.98M  ± 86.6K     8.83M  … 9.14M           0 ( 0%)        ⚡-  8.6% ±  0.6%

Build std Tests

Benchmark 1 (3 runs): ../master/build/stage3/bin/zig test --zig-lib-dir lib lib/std/std.zig --test-no-exec -femit-bin=t
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          12.3s  ± 23.2ms    12.3s  … 12.4s           0 ( 0%)        0%
  peak_rss            969MB ±  204KB     969MB …  969MB          0 ( 0%)        0%
  cpu_cycles         82.2G  ±  112M     82.1G  … 82.4G           0 ( 0%)        0%
  instructions        171G  ± 5.29M      171G  …  171G           0 ( 0%)        0%
  cache_references   5.30G  ± 4.88M     5.30G  … 5.31G           0 ( 0%)        0%
  cache_misses        260M  ±  906K      259M  …  261M           0 ( 0%)        0%
  branch_misses       240M  ±  952K      240M  …  242M           0 ( 0%)        0%
Benchmark 2 (3 runs): ./stage4-release/bin/zig test --zig-lib-dir lib lib/std/std.zig --test-no-exec -femit-bin=t
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          12.1s  ± 50.7ms    12.1s  … 12.2s           0 ( 0%)        ⚡-  1.8% ±  0.7%
  peak_rss           1.04GB ± 19.5MB    1.02GB … 1.06GB          0 ( 0%)        💩+  7.8% ±  3.2%
  cpu_cycles         76.1G  ± 88.3M     76.0G  … 76.2G           0 ( 0%)        ⚡-  7.5% ±  0.3%
  instructions        142G  ± 2.94M      142G  …  142G           0 ( 0%)        ⚡- 16.6% ±  0.0%
  cache_references   5.01G  ± 9.29M     5.00G  … 5.02G           0 ( 0%)        ⚡-  5.5% ±  0.3%
  cache_misses        261M  ± 6.29M      254M  …  267M           0 ( 0%)          +  0.1% ±  3.9%
  branch_misses       220M  ±  556K      220M  …  221M           0 ( 0%)        ⚡-  8.4% ±  0.7%

Resolves: #13179

@xdBronch (Contributor) commented Jun 9, 2025

am i understanding correctly that fa5b3a1 closes #13179?

@mlugg (Member, Author) commented Jun 9, 2025

Correct, thanks for flagging that up. I'll mark that on the PR in a bit.

mlugg and others added 28 commits June 11, 2025 02:26
* The `codegen_nav`, `codegen_func`, `codegen_type` tasks are renamed to
  `link_nav`, `link_func`, and `link_type`, to more accurately reflect
  their purpose of sending data to the *linker*. Currently, `link_func`
  remains responsible for codegen; this will change in an upcoming
  commit.

* Don't go on a pointless detour through `PerThread` when linking ZCU
  functions/`Nav`s: the `linkerUpdateNav` etc. logic now lives in
  `link.zig`. Currently, `linkerUpdateFunc` is an exception, because it
  has broader responsibilities including codegen, but this will be
  solved in an upcoming commit.
The main goal of this commit is to make it easier to decouple codegen
from the linkers by being able to do LLVM codegen without going through
the `link.File`; however, this ended up being a nice refactor anyway.

Previously, every linker stored an optional `llvm.Object`, which was
populated when using LLVM for the ZCU *and* linking an output binary;
and `Zcu` also stored an optional `llvm.Object`, which was used only
when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not
emitting a binary.

This situation was incredibly silly. It meant there were N+1 places the
LLVM object might be instead of just 1, and it meant that every linker
had to start a bunch of methods by checking for an LLVM object and, if
it was not `null`, dispatching to the corresponding method on *it*
instead.

Instead, we now always store the LLVM object on the `Zcu` -- which makes
sense, because it corresponds to the object emitted by, well, the Zig
Compilation Unit! The linkers now mostly don't make reference to LLVM.
`Compilation` makes sure to emit the LLVM object if necessary before
calling `flush`, so it is ready for the linker. Also, all of the
`link.File` methods which act on the ZCU -- like `updateNav` -- now
check for the LLVM object in `link.zig` instead of in every single
individual linker implementation. Notably, the change to LLVM emit
improves this rather ludicrous call chain in the `-fllvm -flld` case:

* Compilation.flush
* link.File.flush
* link.Elf.flush
* link.Elf.linkWithLLD
* link.Elf.flushModule
* link.emitLlvmObject
* Compilation.emitLlvmObject
* llvm.Object.emit

Replacing it with this one:

* Compilation.flush
* llvm.Object.emit

...although we do currently still end up in `link.Elf.linkWithLLD` to do
the actual linking. The logic for invoking LLD should probably also be
unified at least somewhat; I haven't done that in this commit.
Similar to the previous commit, this commit untangles LLD integration
from the self-hosted linkers. Despite the big network of functions which
were involved, it turns out what was going on here is quite simple. The
LLD linking logic is actually very self-contained; it requires a few
flags from the `link.File.OpenOptions`, but that's really about it. We
don't need any of the mutable state on `Elf`/`Coff`/`Wasm`, for
instance. There was some legacy code trying to handle support for using
self-hosted codegen with LLD, but that's not a supported use case, so
I've just stripped it out.

For now, I've just pasted the logic for linking the 3 targets we
currently support using LLD for into this new linker implementation,
`link.Lld`; however, it's almost certainly possible to combine some of
the logic and simplify this file a bit. But to be honest, it's not
actually that bad right now.

This commit ends up eliminating the distinction between `flush` and
`flushZcu` (formerly `flushModule`) in linkers, where the latter
previously meant something along the lines of "flush, but if you're
going to be linking with LLD, just flush the ZCU object file, don't
actually link". That distinction was never properly defined, and most
linkers treated the two as essentially identical anyway. Regardless, all
calls to `flushZcu` are gone now, so it's deleted -- one `flush` to rule
them all!

The end result of this commit and the preceding one is that LLVM and LLD
fit into the pipeline much more sanely:

* If we're using LLVM for the ZCU, that state is on `zcu.llvm_object`
* If we're using LLD to link, then the `link.File` is a `link.Lld`
* Calls to "ZCU link functions" (e.g. `updateNav`) lower to calls to the
  LLVM object if it's available, or otherwise to the `link.File` if it's
  available (neither is available under `-fno-emit-bin`)
* After everything is done, linking is finalized by calling `flush` on
  the `link.File`; for `link.Lld` this invokes LLD, for other linkers it
  flushes self-hosted linker state
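
The third point is easy to sketch in isolation. Here is a toy model of that dispatch rule (the `Compilation`, `LlvmObject`, and `LinkFile` types below are stand-ins invented for this sketch, not the real compiler API):

```zig
const std = @import("std");

const LlvmObject = struct {
    fn updateNav(_: *LlvmObject, nav: u32) void {
        std.debug.print("LLVM object handles nav {d}\n", .{nav});
    }
};

const LinkFile = struct {
    fn updateNav(_: *LinkFile, nav: u32) void {
        std.debug.print("self-hosted linker handles nav {d}\n", .{nav});
    }
};

const Compilation = struct {
    llvm_object: ?*LlvmObject,
    bin_file: ?*LinkFile,

    // Prefer the LLVM object if the ZCU is using the LLVM backend;
    // otherwise fall back to the link.File. Under -fno-emit-bin both
    // are null and the update is a no-op.
    fn updateNav(comp: *Compilation, nav: u32) void {
        if (comp.llvm_object) |o| return o.updateNav(nav);
        if (comp.bin_file) |f| return f.updateNav(nav);
    }
};

pub fn main() void {
    var lf: LinkFile = .{};
    var comp: Compilation = .{ .llvm_object = null, .bin_file = &lf };
    comp.updateNav(42); // dispatches to the self-hosted linker
}
```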

There's one messy thing remaining, and that's how self-hosted function
codegen in a ZCU works; right now, we process AIR with a call sequence
something like this:

* `link.doTask`
* `Zcu.PerThread.linkerUpdateFunc`
* `link.File.updateFunc`
* `link.Elf.updateFunc`
* `link.Elf.ZigObject.updateFunc`
* `codegen.generateFunction`
* `arch.x86_64.CodeGen.generate`

So, we start in the linker, take a scenic detour through `Zcu`, go back
to the linker, into its implementation, and then... right back out, into
code which is generic over the linker implementation, and then dispatch
on the *backend* instead! Of course, within `arch.x86_64.CodeGen`, there
are some more places which switch on the `link` implementation being
used. This is all pretty silly... so it shall be my next target.
The idea here is that instead of the linker calling into codegen,
codegen should run before we touch the linker, and after MIR is
produced, it is sent to the linker. Aside from simplifying the call
graph (by preventing N linkers from each calling into M codegen
backends!), this has the huge benefit that it is possible to
parallelize codegen separately from linking. The threading model can
look like this:

* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary

The codegen threads are also responsible for `Air.Legalize` and
`Air.Liveness`; it's more efficient to do this work here instead of
blocking the main thread for this trivially parallel task.

I have repurposed the `Zcu.Feature.separate_thread` backend feature to
indicate support for this 1:N:1 threading pattern. This commit makes the
C backend support this feature, since it was relatively easy to divorce
from `link.C`: it just required eliminating some shared buffers. Other
backends don't currently support this feature. In fact, they don't even
compile -- the next few commits will fix them back up.
As of this commit, every backend other than self-hosted Wasm and
self-hosted SPIR-V compiles and (at least somewhat) functions again.
Those two backends are currently disabled with panics.

Note that `Zcu.Feature.separate_thread` is *not* enabled for the fixed
backends. Avoiding linker references from codegen is a non-trivial task,
and can be done after this branch.
My original goal here was just to get the self-hosted Wasm backend
compiling again after the pipeline change, but it turned out that from
there it was pretty simple to entirely eliminate the shared state
between `codegen.wasm` and `link.Wasm`. As such, this commit not only
fixes the backend, but makes it the second backend (after CBE) to
support the new 1:N:1 threading model.
Unfortunately, the self-hosted SPIR-V backend is quite tightly coupled
with the self-hosted SPIR-V linker through its `Object` concept (which
is much like `llvm.Object`). Reworking this would be too much work for
this branch. So, for now, I have introduced a special case (similar to
the LLVM backend's special case) to the codegen logic when using this
backend. We will want to delete this special case at some point, but it
need not block this work.
It turns out that LLD caching hasn't been in use for a while. On master,
it is currently only enabled when you compile via the build system,
passing `-fincremental`, using LLD (and so LLVM if there's a ZCU). That
case never happens, because `-fincremental` is only useful when you're
using a backend *other* than the LLVM backend. My previous commits
accidentally re-enabled this logic in some cases, exposing bugs; that
ultimately led to this realisation. So, let's just delete that logic --
less LLVM-related cruft to maintain.
Previously, various doc comments heavily disagreed with the
implementation on both what lives where on the filesystem at what time,
and how that was represented in code. Notably, the combination of emit
paths outside the cache and `disable_lld_caching` created a kind of
ad-hoc "cache disable" mechanism -- which didn't actually *work* very
well, since almost everything still ended up in the cache anyway. There
was also a long-standing issue where building using the LLVM backend
would put a random object file in your cwd.

This commit reworks how emit paths are specified in
`Compilation.CreateOptions`, how they are represented internally, and
how the cache usage is specified.

There are now 3 options for `Compilation.CacheMode`:
* `.none`: do not use the cache. The paths we have to emit to are
  relative to the compiler cwd (they're either user-specified, or
  defaults inferred from the root name). If we create any temporary
  files (e.g. the ZCU object when using the LLVM backend) they are
  emitted to a directory in `local_cache/tmp/`, which is deleted once
  the update finishes.
* `.whole`: cache the compilation based on all inputs, including file
  contents. All emit paths are computed by the compiler (and will be
  stored as relative to the local cache directory); it is a CLI error to
  specify an explicit emit path. Artifacts (including temporary files)
  are written to a directory under `local_cache/tmp/`, which is later
  renamed to an appropriate `local_cache/o/`. The caller (who is using
  `--listen`; e.g. the build system) learns the name of this directory,
  and can get the artifacts from it.
* `.incremental`: similar to `.whole`, but Zig source file contents, and
  anything else which incremental compilation can handle changes for, is
  not included in the cache manifest. We don't need to do the dance
  where the output directory is initially in `tmp/`, because our digest
  is computed entirely from CLI inputs.
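
As a rough sketch, the mode itself is just a three-variant enum. The following illustrative declaration paraphrases the rules above in doc comments; it is not the exact code from `Compilation.zig`:

```zig
pub const CacheMode = enum {
    /// Do not use the cache: emit to user-specified (or default) paths
    /// relative to the compiler cwd; temporary files go to a directory
    /// under `local_cache/tmp/` that is deleted when the update finishes.
    none,
    /// Digest is computed entirely from CLI inputs; incremental
    /// compilation handles source changes, so no `tmp/` rename dance
    /// is needed.
    incremental,
    /// Digest includes all inputs (even file contents); artifacts are
    /// written under `local_cache/tmp/` and renamed into
    /// `local_cache/o/` on success.
    whole,
};
```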

To be clear, the difference between `CacheMode.whole` and
`CacheMode.incremental` is unchanged. `CacheMode.none` is new
(previously it was sort of poorly imitated with `CacheMode.whole`). The
defined behavior for temporary/intermediate files is new.

`.none` is used for direct CLI invocations like `zig build-exe foo.zig`.
The other cache modes are reserved for `--listen`, and the cache mode in
use is currently just based on the presence of the `-fincremental` flag.

There are two cases in which `CacheMode.whole` is used despite there
being no `--listen` flag: `zig test` and `zig run`. Unless an explicit
`-femit-bin=xxx` argument is passed on the CLI, these subcommands will
use `CacheMode.whole`, so that they can put the output somewhere without
polluting the cwd (plus, caching is potentially more useful for direct
usage of these subcommands).

Users of `--listen` (such as the build system) can now use
`std.zig.EmitArtifact.cacheName` to find out what an output will be
named. This avoids having to synchronize logic between the compiler and
all users of `--listen`.
When the name strategy is `.parent`, the DWARF info really wants to know
what `Nav` we were named after, so that it can emit a more optimal hierarchy.
* "Flush" nodes ("LLVM Emit Object", "ELF Flush") appear under "Linking"

* "Code Generation" disappears when all analysis and codegen is done

* We only show one node under "Semantic Analysis" to accurately convey
  that analysis isn't happening in parallel, but rather that we're
  pausing one task to do another
Looking at a compilation of 'test/behavior/x86_64/unary.zig' in
callgrind showed that a full 30% of the compiler runtime was spent in
this `stringToEnum` call. Replace it with some nested `switch`
statements using `inline else` to generate the cases at comptime. There
are a lot, but most of them end up falling through to the runtime error
case, so the block will get optimized away.

Notably, this commit builds itself faster than the previous commit
builds *it*self, because the performance degradation from the compiler
having to analyze the `inline else` is beaten out by the performance
gain from using this faster logic in `Lower`.
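
The real change uses nested switches with `inline else` over the full (much larger) mnemonic set; the same comptime-expansion idea looks something like this on a toy enum (`Mnemonic` and `parseMnemonic` are invented for illustration):

```zig
const std = @import("std");

const Mnemonic = enum { add, adc, sub, mul, xor };

// Switch on the length first; inside each unrolled case the length is
// comptime-known, so comparisons against names of a different length
// compile away entirely, leaving a few direct byte comparisons.
fn parseMnemonic(str: []const u8) ?Mnemonic {
    switch (str.len) {
        inline 1...8 => |len| {
            inline for (comptime std.meta.fieldNames(Mnemonic)) |name| {
                if (comptime name.len == len) {
                    if (std.mem.eql(u8, str, name)) return @field(Mnemonic, name);
                }
            }
            return null;
        },
        else => return null,
    }
}

test parseMnemonic {
    try std.testing.expectEqual(.mul, parseMnemonic("mul"));
    try std.testing.expectEqual(null, parseMnemonic("nop"));
}
```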
glibc, freebsd, and netbsd all do caching manually, because they emit
multiple files which they want to cache as a block. Therefore, the
individual sub-compilation on a cache miss should use `CacheMode.none`,
so that we can specify the output paths for each sub-compilation as
being in the shared output directory.
The name of the ZCU object file emitted by the LLVM backend has been
changed in this branch from e.g. `foo.o` to `foo_zcu.o`. This is to
avoid name clashes. This commit just updates a link test which started
failing because the object name in a linker error changed.
mlugg force-pushed the better-backend-pipeline-2 branch from b04df91 to eba2e6c on June 11, 2025 at 01:27
The name of the ZCU object file emitted by the LLVM backend has been
changed in this branch from e.g. `foo.obj` to `foo_zcu.obj`. This is to
avoid name clashes. This commit just updates the stack trace tests which
started failing on windows because of the object name change.
Previously, `PerThread.populateTestFunctions` was analyzing the
`test_functions` declaration if it hadn't already been analyzed, so that
it could then populate it. However, the logic for doing this wasn't
actually correct, because it didn't trigger the necessary type
resolution. I could have tried to fix this, but there's actually a
simpler solution! If the `test_functions` declaration isn't referenced
or has a compile error, then we simply don't need to update it; either
it's unreferenced so its value doesn't matter, or we're going to get a
compile error anyway. Either way, we can just give up early. This avoids
doing semantic analysis after `performAllTheWork` finishes.

Also, get rid of the "Code Generation" progress node while updating the
test decl: this is a linking task.
@andrewrk (Member) commented Jun 11, 2025

This is a perfect example of why our #1840 policy exists:

error: error.Unexpected: GetLastError(38): Reached the end of the file.

Unable to dump stack trace: OutOfMemory
thread 23880 panic: getrandom() failed to provide entropy
C:\Users\CI\actions-runner-1\_work\zig\zig\lib\std\os\windows.zig:2855:5: 0x7ff7e44ee76a in unexpectedError (zig_zcu.obj)
    return error.Unexpected;
    ^
C:\Users\CI\actions-runner-1\_work\zig\zig\lib\std\os\windows.zig:425:13: 0x7ff7e4bf86db in RtlGenRandom (zig_zcu.obj)
            return unexpectedError(GetLastError());
            ^
C:\Users\CI\actions-runner-1\_work\zig\zig\lib\std\posix.zig:600:9: 0x7ff7e48a6003 in getrandom (zig_zcu.obj)
        return windows.RtlGenRandom(buffer);
        ^

As far as I'm aware this is the lowest level function possible for collecting entropy from the operating system. For whatever reason it is implemented in advapi32.dll. And despite it having no reasonable excuse for failure, apparently it tries to do heap memory allocation and can return an undocumented error HANDLE_EOF.

If anyone knows of an infallible way to collect entropy on Windows, please speak up...

Regardless, @mlugg, I suggest setting a max_rss value for the respective compilation task. As is evident from the benchmarks above, peak memory use increases with these changes. This is fine and expected, since we're doing more at the same time, but it looks like we need to help the build system's scheduler out a little bit.
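
For illustration, here is a sketch of how a build script can set that (assuming the 0.14-era `addExecutable` API; the step and the byte figure are illustrative, `max_rss` being the `std.Build.Step` field the build runner's scheduler consults):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Hypothetical compile step standing in for the real one.
    const exe = b.addExecutable(.{
        .name = "zig",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Tell the build runner's scheduler how much memory this step may
    // need at peak, so it won't run too many such jobs concurrently.
    // The figure is illustrative, based on the ~2.7 GB peak_rss above.
    exe.step.max_rss = 3 * 1024 * 1024 * 1024;

    b.installArtifact(exe);
}
```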

Successfully merging this pull request may close these issues.

build-exe leaves behind object files (#13179)