Cache JS->.NET string decoding results within a single renderbatch #23773
Conversation
@@ -246,7 +264,6 @@ function createEmscriptenModuleInstance(resourceLoader: WebAssemblyResourceLoader) {
  module.preRun.push(() => {
    // By now, emscripten should be initialised enough that we can capture these methods for later use
    mono_wasm_add_assembly = cwrap('mono_wasm_add_assembly', null, ['string', 'number', 'number']);
-   mono_string_get_utf8 = cwrap('mono_wasm_string_get_utf8', 'number', ['number']);
Was not used
      currentHeapLock.stringCache.set(fieldValue, decodedString);
    }
  } else {
    decodedString = BINDING.conv_string(fieldValue as any as System_String);
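The caching idea in the diff above amounts to a pointer-keyed memo table. Here is a minimal sketch of that pattern; `decodeStringAt` and its `decoder` parameter are hypothetical stand-ins for the real `BINDING.conv_string` call, and `System_String` is simplified to a plain number:

```typescript
// A .NET string reference is a pointer into the .NET heap; modelled as a number here.
type System_String = number;

// Decode the string at `ptr`, memoizing the result in `cache`. The cache is only
// safe to reuse while the .NET heap cannot move (i.e. while the heap lock is held),
// because the key is a raw pointer value.
function decodeStringAt(
  ptr: System_String,
  decoder: (p: System_String) => string, // stand-in for BINDING.conv_string
  cache: Map<System_String, string>
): string {
  let decoded = cache.get(ptr);
  if (decoded === undefined) {
    decoded = decoder(ptr); // the expensive decode out of the .NET heap
    cache.set(ptr, decoded); // valid only until .NET code runs again
  }
  return decoded;
}
```

Within a single renderbatch the same interned strings (element names, attribute names, CSS classes) recur many times, so repeat lookups become a cheap `Map.get` instead of a full decode.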
Is there code that hits this code path?
Yes, any of the framework's built-in unmarshalled interop calls that read strings would hit this.
I know you could argue that they too should take a heap lock, but I'm not trying to change everything here. We only need to reason about all unmarshalled interop if/when we get multithreading.
Have we discussed this with the runtime folks, just so that they are aware of our requirements when designing multi-threading?
Uh oh, a test failure:
I believe that when there's multithreading going on, all we care about is the pointers not moving, isn't it? The memory in the render batches should not change, since all values are immutable and we control the objects that are created.
Yes
Correct
Excellent! Now we can see if we were making a bad assumption about something before...
I'm very glad we got that test failure, because it has identified a possible rendering bug. Admittedly it would be incredibly hard to repro, so much so that possibly nobody ever has, but if you:
... then you could end up with an exception or undefined behavior. I've implemented what I think is roughly the correct fix, but I want to add some proper E2E test cases dedicated to this edge case before merging.
}

-export function setEventDispatcher(newDispatcher: (eventDescriptor: EventDescriptor, eventArgs: UIEventArgs) => Promise<void>): void {
+export function setEventDispatcher(newDispatcher: (eventDescriptor: EventDescriptor, eventArgs: UIEventArgs) => void): void {
I've changed newDispatcher to return void, because we only call it in one place, and that place discards the returned value anyway. Event-triggering is fire-and-forget from the JS side, because any errors are handled on the .NET side. So having it return void is a more accurate representation of what's going on, and it simplifies a few other bits of the code.
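The fire-and-forget shape described above can be sketched as follows. `EventDescriptor`, `UIEventArgs`, and `raiseEvent` are simplified placeholders here, not the real framework types:

```typescript
// Simplified placeholder types for illustration.
interface EventDescriptor { eventHandlerId: number; }
interface UIEventArgs { type: string; }

type Dispatcher = (eventDescriptor: EventDescriptor, eventArgs: UIEventArgs) => void;

let eventDispatcher: Dispatcher | null = null;

function setEventDispatcher(newDispatcher: Dispatcher): void {
  eventDispatcher = newDispatcher;
}

// The single call site: it discards any result, because errors from event
// handlers are surfaced and handled on the .NET side, not here.
function raiseEvent(descriptor: EventDescriptor, args: UIEventArgs): void {
  eventDispatcher?.(descriptor, args);
}
```

Since the caller never awaits or inspects a result, declaring the dispatcher as returning `void` documents the actual contract instead of implying a promise someone should observe.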
Force-pushed from 18eb523 to cf32d51.
Profiling shows that Blazor WebAssembly spends a substantial chunk of time decoding .NET strings into JavaScript ones during rendering. This optimization improves the grid benchmark results (on my laptop) from this:
... to this:
As you can see, the biggest percentage gain is on the FastGrid benchmarks, because they already don't spend much time doing anything else. At roughly 9%, the gains in this situation easily warrant doing such a straightforward optimization.
Implementation note
Blazor WebAssembly has always relied on JS code being able to read data directly from .NET's heap. This allows us to skip serializing renderbatches entirely. Without doing this, perf would be destroyed.
In order for this to be safe, we've relied on the fact that .NET's GC can only run when .NET itself runs, which is not during the DOM-update time when we exclusively run JavaScript. So we know that any pointers we hold will certainly remain valid. For this new string caching behavior, we also rely on the same thing (that is, the mapping from "pointer value" to "decoded string" must remain valid because nothing is changing inside the .NET heap).
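The lifetime rule described above can be sketched as a lock object that owns the string cache, so the cache cannot outlive the critical section. The names here (`HeapLock`, `create`, `release`) are illustrative, not the real API:

```typescript
// Sketch: a ".NET heap lock" that JS code takes around a critical section.
// The pointer->string cache lives on the lock, so releasing the lock (i.e.
// before .NET code can run and move heap objects) discards the cache.
class HeapLock {
  stringCache = new Map<number, string>();

  private static current: HeapLock | null = null;

  static create(): HeapLock {
    if (HeapLock.current) {
      // Taking the lock twice would indicate a reentrancy bug.
      throw new Error('Heap lock is already held');
    }
    return HeapLock.current = new HeapLock();
  }

  release(): void {
    if (HeapLock.current !== this) {
      throw new Error('Mismatched heap lock release');
    }
    HeapLock.current = null;
    this.stringCache.clear(); // pointers may move once .NET runs again
  }

  static get(): HeapLock | null {
    return HeapLock.current;
  }
}
```

Code that reads a renderbatch would take the lock up front, decode strings through the lock's cache, and release the lock before yielding back to .NET, making the safe window for cached pointers explicit rather than implicit.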
However until now that's been done quite informally. This PR introduces a slightly higher degree of formality by defining the notion of a ".NET heap lock" that JS code can take and release around a critical section. We can then:
In the long term, when dotnet.wasm gains true multithreading, we will have to contend with more complicated situations where .NET code can in fact run while the JS code is simultaneously executing. When we get to that point, we'll want to use this ".NET heap lock" concept to signal to .NET that it's not allowed to mutate the memory that we care about. Most likely it will be sufficient to prevent GC compaction during the critical section, but we will have to reason about that in detail at the time.