compiler: threaded codegen (and more goodies) #24124
Conversation
Correct, thanks for flagging that up. I'll mark that on the PR in a bit.
* The `codegen_nav`, `codegen_func`, `codegen_type` tasks are renamed to `link_nav`, `link_func`, and `link_type`, to more accurately reflect their purpose of sending data to the *linker*. Currently, `link_func` remains responsible for codegen; this will change in an upcoming commit.
* Don't go on a pointless detour through `PerThread` when linking ZCU functions/`Nav`s; the `linkerUpdateNav` etc. logic now lives in `link.zig`. Currently, `linkerUpdateFunc` is an exception, because it has broader responsibilities including codegen, but this will be solved in an upcoming commit.
The main goal of this commit is to make it easier to decouple codegen from the linkers by being able to do LLVM codegen without going through the `link.File`; however, this ended up being a nice refactor anyway.

Previously, every linker stored an optional `llvm.Object`, which was populated when using LLVM for the ZCU *and* linking an output binary; and `Zcu` also stored an optional `llvm.Object`, which was used only when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not emitting a binary. This situation was incredibly silly. It meant there were N+1 places the LLVM object might be instead of just 1, and it meant that every linker had to start a bunch of methods by checking for an LLVM object, and just dispatching to the corresponding method on *it* instead if it was not `null`.

Instead, we now always store the LLVM object on the `Zcu` -- which makes sense, because it corresponds to the object emitted by, well, the Zig Compilation Unit! The linkers now mostly don't make reference to LLVM. `Compilation` makes sure to emit the LLVM object if necessary before calling `flush`, so it is ready for the linker. Also, all of the `link.File` methods which act on the ZCU -- like `updateNav` -- now check for the LLVM object in `link.zig` instead of in every single individual linker implementation.

Notably, the change to LLVM emit improves this rather ludicrous call chain in the `-fllvm -flld` case:

* `Compilation.flush`
* `link.File.flush`
* `link.Elf.flush`
* `link.Elf.linkWithLLD`
* `link.Elf.flushModule`
* `link.emitLlvmObject`
* `Compilation.emitLlvmObject`
* `llvm.Object.emit`

Replacing it with this one:

* `Compilation.flush`
* `llvm.Object.emit`

...although we do currently still end up in `link.Elf.linkWithLLD` to do the actual linking. The logic for invoking LLD should probably also be unified at least somewhat; I haven't done that in this commit.
Similar to the previous commit, this commit untangles LLD integration from the self-hosted linkers.

Despite the big network of functions which were involved, it turns out what was going on here is quite simple. The LLD linking logic is actually very self-contained; it requires a few flags from the `link.File.OpenOptions`, but that's really about it. We don't need any of the mutable state on `Elf`/`Coff`/`Wasm`, for instance. There was some legacy code trying to handle support for using self-hosted codegen with LLD, but that's not a supported use case, so I've just stripped it out.

For now, I've just pasted the logic for linking the 3 targets we currently support using LLD for into this new linker implementation, `link.Lld`; however, it's almost certainly possible to combine some of the logic and simplify this file a bit. But to be honest, it's not actually that bad right now.

This commit ends up eliminating the distinction between `flush` and `flushZcu` (formerly `flushModule`) in linkers, where the latter previously meant something along the lines of "flush, but if you're going to be linking with LLD, just flush the ZCU object file, don't actually link". The distinction here doesn't seem like it was properly defined, and most linkers seem to treat them as essentially identical anyway. Regardless, all calls to `flushZcu` are gone now, so it's deleted -- one `flush` to rule them all!

The end result of this commit and the preceding one is that LLVM and LLD fit into the pipeline much more sanely:

* If we're using LLVM for the ZCU, that state is on `zcu.llvm_object`
* If we're using LLD to link, then the `link.File` is a `link.Lld`
* Calls to "ZCU link functions" (e.g. `updateNav`) lower to calls to the LLVM object if it's available, or otherwise to the `link.File` if it's available (neither is available under `-fno-emit-bin`)
* After everything is done, linking is finalized by calling `flush` on the `link.File`; for `link.Lld` this invokes LLD, for other linkers it flushes self-hosted linker state

There's one messy thing remaining, and that's how self-hosted function codegen in a ZCU works; right now, we process AIR with a call sequence something like this:

* `link.doTask`
* `Zcu.PerThread.linkerUpdateFunc`
* `link.File.updateFunc`
* `link.Elf.updateFunc`
* `link.Elf.ZigObject.updateFunc`
* `codegen.generateFunction`
* `arch.x86_64.CodeGen.generate`

So, we start in the linker, take a scenic detour through `Zcu`, go back to the linker, into its implementation, and then... right back out, into code which is generic over the linker implementation, and then dispatch on the *backend* instead! Of course, within `arch.x86_64.CodeGen`, there are some more places which switch on the `link` implementation being used. This is all pretty silly... so it shall be my next target.
The idea here is that instead of the linker calling into codegen, codegen should run before we touch the linker, and after MIR is produced, it is sent to the linker. Aside from simplifying the call graph (by preventing N linkers from each calling into M codegen backends!), this has the huge benefit that it is possible to parallelize codegen separately from linking. The threading model can look like this:

* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary

The codegen threads are also responsible for `Air.Legalize` and `Air.Liveness`; it's more efficient to do this work here instead of blocking the main thread for this trivially parallel task.

I have repurposed the `Zcu.Feature.separate_thread` backend feature to indicate support for this 1:N:1 threading pattern. This commit makes the C backend support this feature, since it was relatively easy to divorce from `link.C`: it just required eliminating some shared buffers. Other backends don't currently support this feature. In fact, they don't even compile -- the next few commits will fix them back up.
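To make the shape of this pipeline concrete, here is a minimal, self-contained sketch of a 1:N:1 arrangement: one producer, N workers, and a single consumer connected by two blocking queues. The names and the toy `u32` payload standing in for AIR/MIR are invented for illustration; this is not the compiler's actual queue implementation.

```zig
const std = @import("std");

// Toy bounded queue; the real compiler's work queues are more sophisticated.
const Queue = struct {
    mutex: std.Thread.Mutex = .{},
    cond: std.Thread.Condition = .{},
    buf: [64]u32 = undefined,
    len: usize = 0,
    closed: bool = false,

    fn push(q: *Queue, item: u32) void {
        q.mutex.lock();
        defer q.mutex.unlock();
        std.debug.assert(q.len < q.buf.len);
        q.buf[q.len] = item;
        q.len += 1;
        q.cond.signal();
    }

    /// Blocks until an item is available; returns null once closed and drained.
    fn pop(q: *Queue) ?u32 {
        q.mutex.lock();
        defer q.mutex.unlock();
        while (q.len == 0) {
            if (q.closed) return null;
            q.cond.wait(&q.mutex);
        }
        q.len -= 1;
        return q.buf[q.len];
    }

    fn close(q: *Queue) void {
        q.mutex.lock();
        defer q.mutex.unlock();
        q.closed = true;
        q.cond.broadcast();
    }
};

fn codegenWorker(air: *Queue, mir: *Queue) void {
    // Stand-in for "process AIR into MIR" (plus Legalize/Liveness).
    while (air.pop()) |item| mir.push(item * 100);
}

fn linkerThread(mir: *Queue) void {
    // The single linker thread consumes MIR and "emits" it.
    while (mir.pop()) |item| std.debug.print("link: emit {d}\n", .{item});
}

pub fn main() !void {
    var air_queue: Queue = .{};
    var mir_queue: Queue = .{};

    const linker = try std.Thread.spawn(.{}, linkerThread, .{&mir_queue});

    var workers: [4]std.Thread = undefined;
    for (&workers) |*w| w.* = try std.Thread.spawn(.{}, codegenWorker, .{ &air_queue, &mir_queue });

    // "Semantic analysis": the single producer of AIR work items.
    for (0..10) |i| air_queue.push(@intCast(i));
    air_queue.close();

    for (workers) |w| w.join();
    mir_queue.close();
    linker.join();
}
```

The important property is the one described above: any number of codegen workers may run concurrently, but MIR reaches the binary through exactly one thread, so the linker itself never needs to be thread-safe.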
As of this commit, every backend other than self-hosted Wasm and self-hosted SPIR-V compiles and (at least somewhat) functions again. Those two backends are currently disabled with panics. Note that `Zcu.Feature.separate_thread` is *not* enabled for the fixed backends. Avoiding linker references from codegen is a non-trivial task, and can be done after this branch.
My original goal here was just to get the self-hosted Wasm backend compiling again after the pipeline change, but it turned out that from there it was pretty simple to entirely eliminate the shared state between `codegen.wasm` and `link.Wasm`. As such, this commit not only fixes the backend, but makes it the second backend (after CBE) to support the new 1:N:1 threading model.
Unfortunately, the self-hosted SPIR-V backend is quite tightly coupled with the self-hosted SPIR-V linker through its `Object` concept (which is much like `llvm.Object`). Reworking this would be too much work for this branch. So, for now, I have introduced a special case (similar to the LLVM backend's special case) to the codegen logic when using this backend. We will want to delete this special case at some point, but it need not block this work.
It turns out that LLD caching hasn't been in use for a while. On master, it is currently only enabled when you compile via the build system, passing `-fincremental`, using LLD (and so LLVM if there's a ZCU). That case never happens, because `-fincremental` is only useful when you're using a backend *other* than the LLVM backend. My previous commits accidentally re-enabled this logic in some cases, exposing bugs; that ultimately led to this realisation. So, let's just delete that logic -- less LLVM-related cruft to maintain.
Previously, various doc comments heavily disagreed with the implementation on both what lives where on the filesystem at what time, and how that was represented in code. Notably, the combination of emit paths outside the cache and `disable_lld_caching` created a kind of ad-hoc "cache disable" mechanism -- which didn't actually *work* very well; 'most everything still ended up in this cache. There was also a long-standing issue where building using the LLVM backend would put a random object file in your cwd.

This commit reworks how emit paths are specified in `Compilation.CreateOptions`, how they are represented internally, and how the cache usage is specified. There are now 3 options for `Compilation.CacheMode`:

* `.none`: do not use the cache. The paths we have to emit to are relative to the compiler cwd (they're either user-specified, or defaults inferred from the root name). If we create any temporary files (e.g. the ZCU object when using the LLVM backend), they are emitted to a directory in `local_cache/tmp/`, which is deleted once the update finishes.
* `.whole`: cache the compilation based on all inputs, including file contents. All emit paths are computed by the compiler (and will be stored as relative to the local cache directory); it is a CLI error to specify an explicit emit path. Artifacts (including temporary files) are written to a directory under `local_cache/tmp/`, which is later renamed to an appropriate `local_cache/o/`. The caller (who is using `--listen`; e.g. the build system) learns the name of this directory, and can get the artifacts from it.
* `.incremental`: similar to `.whole`, but Zig source file contents, and anything else which incremental compilation can handle changes for, is not included in the cache manifest. We don't need to do the dance where the output directory is initially in `tmp/`, because our digest is computed entirely from CLI inputs.

To be clear, the difference between `CacheMode.whole` and `CacheMode.incremental` is unchanged. `CacheMode.none` is new (previously it was sort of poorly imitated with `CacheMode.whole`). The defined behavior for temporary/intermediate files is new.

`.none` is used for direct CLI invocations like `zig build-exe foo.zig`. The other cache modes are reserved for `--listen`, and the cache mode in use is currently just based on the presence of the `-fincremental` flag. There are two cases in which `CacheMode.whole` is used despite there being no `--listen` flag: `zig test` and `zig run`. Unless an explicit `-femit-bin=xxx` argument is passed on the CLI, these subcommands will use `CacheMode.whole`, so that they can put the output somewhere without polluting the cwd (plus, caching is potentially more useful for direct usage of these subcommands).

Users of `--listen` (such as the build system) can now use `std.zig.EmitArtifact.cacheName` to find out what an output will be named. This avoids having to synchronize logic between the compiler and all users of `--listen`.
When the name strategy is `.parent`, the DWARF info really wants to know what `Nav` we were named after to emit a more optimal hierarchy.
* "Flush" nodes ("LLVM Emit Object", "ELF Flush") appear under "Linking" * "Code Generation" disappears when all analysis and codegen is done * We only show one node under "Semantic Analysis" to accurately convey that analysis isn't happening in parallel, but rather that we're pausing one task to do another
Looking at a compilation of 'test/behavior/x86_64/unary.zig' in callgrind showed that a full 30% of the compiler runtime was spent in this `stringToEnum` call. Replace it with some nested `switch` statements using `inline else` to generate the cases at comptime. There are a lot, but most of them end up falling through to the runtime error case, so the block will get optimized away. Notably, this commit builds itself faster than the previous commit builds *it*self, because the performance degradation from the compiler having to analyze the `inline else` is beaten out by the performance gain from using this faster logic in `Lower`.
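As an illustration of the general idea (not the actual `Lower` code, which uses nested `switch` statements with `inline else`), here is a sketch of replacing a `std.meta.stringToEnum` lookup with comparisons generated at comptime; `Mnemonic` and `parse` are invented names:

```zig
const std = @import("std");

const Mnemonic = enum { add, adc, sub, xor };

// Sketch only: unroll one string comparison per enum field at comptime,
// instead of building and probing the map that std.meta.stringToEnum uses.
// (The real commit achieves the unrolling with nested `switch`/`inline else`.)
fn parse(name: []const u8) ?Mnemonic {
    inline for (std.meta.fields(Mnemonic)) |f| {
        if (std.mem.eql(u8, name, f.name)) return @field(Mnemonic, f.name);
    }
    return null;
}

test parse {
    try std.testing.expectEqual(@as(?Mnemonic, .sub), parse("sub"));
    try std.testing.expectEqual(@as(?Mnemonic, null), parse("mov"));
}
```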
glibc, freebsd, and netbsd all do caching manually, because of the fact that they emit multiple files which they want to cache as a block. Therefore, the individual sub-compilation on a cache miss should be using `CacheMode.none` so that we can specify the output paths for each sub-compilation as being in the shared output directory.
The name of the ZCU object file emitted by the LLVM backend has been changed in this branch from e.g. `foo.o` to `foo_zcu.o`. This is to avoid name clashes. This commit just updates a link test which started failing because the object name in a linker error changed.
The name of the ZCU object file emitted by the LLVM backend has been changed in this branch from e.g. `foo.obj` to `foo_zcu.obj`. This is to avoid name clashes. This commit just updates the stack trace tests which started failing on windows because of the object name change.
Previously, `PerThread.populateTestFunctions` was analyzing the `test_functions` declaration if it hadn't already been analyzed, so that it could then populate it. However, the logic for doing this wasn't actually correct, because it didn't trigger the necessary type resolution. I could have tried to fix this, but there's actually a simpler solution! If the `test_functions` declaration isn't referenced or has a compile error, then we simply don't need to update it; either it's unreferenced so its value doesn't matter, or we're going to get a compile error anyway. Either way, we can just give up early. This avoids doing semantic analysis after `performAllTheWork` finishes. Also, get rid of the "Code Generation" progress node while updating the test decl: this is a linking task.
This is a perfect example of why our #1840 policy exists:
As far as I'm aware this is the lowest level function possible for collecting entropy from the operating system. For whatever reason it is implemented in advapi32.dll. And despite it having no reasonable excuse for failure, apparently it tries to do heap memory allocation and can return an undocumented error. If anyone knows of an infallible way to collect entropy on Windows, please speak up...

Regardless, @mlugg I suggest setting a max_rss value for the respective compilation task. As is evident from the benchmarks above, the peak memory used is increased with these changes. This is fine and expected, since we're doing more at the same time, but it looks like we need to help the build system's scheduler out a little bit.
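For reference, a step's memory budget can be declared in `build.zig` by setting `max_rss` on the step; a minimal sketch follows, assuming the pre-0.15 `addExecutable` options (`root_source_file`) and using an arbitrary placeholder figure rather than a measured value:

```zig
// build.zig (sketch): give the build runner an upper bound on this step's
// peak RSS, so its scheduler avoids running too many memory-hungry steps
// at once.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "example",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    // Hypothetical 8 GiB budget (bytes); pick a value based on observed peak memory.
    exe.step.max_rss = 8 * 1024 * 1024 * 1024;
    b.installArtifact(exe);
}
```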
This branch achieves a few things. Here they are, in increasing order of how cool I find them.
* The way `Compilation` handles caching is cleaned up. There are now 3 cache modes: `none`, `incremental`, and `whole`. The latter two always emit all artifacts to the cache directory, while `.none` always emits all artifacts to user-specified paths. Direct CLI invocations usually use `.none`, except for `zig run` and `zig test`, which use `.whole` by default. For more details, check out commit fa5b3a1.
* The separation between LLVM and LLD in the pipeline is made better, and some legacy cruft is removed from linker implementations. Notably, our LLD integration is fully factored out into a new `link` implementation, `link.Lld`, instead of being mixed into `link.Elf`/`link.Coff`/`link.Wasm`. Also, there's now only one place for the LLVM object to live (a field on `Zcu`) instead of each linker having its own field for it.
* The `std.Progress` output of the compiler is enhanced. In particular, code generation and linking are separated, flush appears under linking, and we provide estimated totals for codegen/link based on what's queued. This leads directly onto the final, and by far most interesting, point...
* Here's the big one: the "backend" of the `Compilation` pipeline is reworked to separate the codegen and link phases. This allows code generation to run on a separate thread to linking. This works by having "codegen workers" (of which there can be arbitrarily many, running in parallel) consume AIR and emit MIR. Then, the single linker thread consumes that MIR and emits it to the binary. The `.separate_thread` backend feature is repurposed to mean that a backend supports this pattern. Unfortunately, this is more difficult for backends to support, because it requires that `CodeGen` does not reference any `link.File` state (instead leaving such work until `Emit`, which happens on the linker thread). However, this PR performs that work for the C backend, the Wasm backend, and the x86_64 backend.

Details can mostly be found in commit messages and the code. But that's not what you're here for...
First, here's a demo of this branch building the compiler: asciinema. This shows you the parallelism and the progress output quite nicely.
Now for the benchmarks! The TL;DR is that (with a warm cache) I see anywhere from a 1% to a 55% speed boost, depending on the code being built. However, please bear in mind that I think these are at least somewhat IO-bound, so results may vary.
Build Behavior Tests
This includes the x86_64-specific tests, which take by far the bulk of the time.
Build Compiler (`-Ddev=x86_64-linux`)
Build Compiler (Full)
Build Hello World
Build `std` Tests

Resolves: #13179