Add tagged union benchmarks #656
Conversation
CodSpeed Performance Report
Merging #656
Summary
Benchmarks breakdown
No idea why CodSpeed claims there were performance regressions; I only added benchmarks?
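For context, the benchmarks added here are pytest-benchmark tests along these lines; the schema, field names and data below are an illustrative sketch, not the exact code from this PR:

```python
# Illustrative sketch of a tagged-union benchmark (assumed shape, not the exact
# code added in this PR). Uses pydantic_core's core_schema helpers and the
# pytest-benchmark `benchmark` fixture that CodSpeed hooks into.
from pydantic_core import SchemaValidator, core_schema


def build_validator() -> SchemaValidator:
    # Two variants discriminated by the 'kind' field.
    cat = core_schema.typed_dict_schema({
        'kind': core_schema.typed_dict_field(core_schema.str_schema()),
        'meows': core_schema.typed_dict_field(core_schema.int_schema()),
    })
    dog = core_schema.typed_dict_schema({
        'kind': core_schema.typed_dict_field(core_schema.str_schema()),
        'barks': core_schema.typed_dict_field(core_schema.int_schema()),
    })
    return SchemaValidator(
        core_schema.tagged_union_schema({'cat': cat, 'dog': dog}, discriminator='kind')
    )


def test_tagged_union_core(benchmark):
    validator = build_validator()
    data = {'kind': 'dog', 'barks': 2}
    # The benchmark fixture calls this repeatedly and records the timings.
    benchmark(validator.validate_python, data)
```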
Hey @adriangb, @samuelcolvin, I had a thorough look at the execution profiles. The variability seems to have something to do with the order of the benches. Maybe some allocations are done and then reused in later benchmarks? Do you think that makes sense? Normally, the diff of the sub-costs will be visible in the UI once we display the call graph (it's coming very soon for Python but will require CPython 3.12).
Thank you for investigating! We do have some OnceCells / lazily-initialized state. Arthur, if it is something like that, maybe we could run a "warmup" round on all of the benchmarks first (i.e. run each one a single time) before doing the full run on each one. Note that I don't mean running a warmup for A, then running A, then a warmup for B, then B; I mean warmup A, warmup B, then run A, run B.
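A rough sketch of that "warm everything first" idea (hypothetical conftest.py; the WARMUP_CASES registry is invented for illustration, and it assumes that exercising the validators up-front also warms any lazily-initialized state):

```python
# conftest.py (sketch): run every benchmark body once before any of them is
# measured, i.e. "warmup A, warmup B, then run A, run B".
import pytest

# Hypothetical registry: each benchmark module would append
# (validator, sample_data) pairs here at import time.
WARMUP_CASES: list = []


@pytest.fixture(scope='session', autouse=True)
def warm_everything_once():
    # Runs once per session, before the first measured benchmark.
    for validator, sample in WARMUP_CASES:
        validator.validate_python(sample)
    yield
```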
Seems weird to me that, if it was explained by OnceCell etc., adding benchmarks would make others slower. I would think it would just make them faster, but maybe if it reordered them it could explain something?
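To illustrate why lazily-initialized state should only penalize whichever benchmark happens to run first (a Python analogy of OnceCell-style behaviour, not pydantic-core's actual internals):

```python
# Only the first caller pays the one-time setup cost; later callers (and
# later benchmarks) reuse the cached state for free. Reordering benchmarks
# therefore changes *who* pays that cost, not the total amount of work.
from functools import lru_cache


@lru_cache(maxsize=None)
def expensive_shared_state() -> dict:
    # Imagine this allocates buffers / builds lookup tables once per process.
    return {'table': list(range(1_000_000))}


def benchmark_a():
    return len(expensive_shared_state()['table'])  # first call pays the cost


def benchmark_b():
    return len(expensive_shared_state()['table'])  # subsequent calls are cheap
```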
Agreed. Also, @art049, I don't see how that could happen. Maybe @samuelcolvin knows something; I know he did some tweaking of allocators.
I can confirm that the issue is from the allocator (I tested this).
@art049 we're interested in maybe running benchmarks both with and without MiMalloc. Is there any way to do two runs and combine them with CodSpeed?
Hey @adriangb, it's not possible out of the box at the moment, but we're planning to add bench groups quite soon, and then we'll be able to compare multiple runs with different feature sets.
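Until then, a local approximation could be to build the extension twice and save separate pytest-benchmark results for each allocator (sketch only; the mimalloc cargo feature name and the benchmark path are assumptions about this repo, not confirmed details):

```python
# run_alloc_comparison.py (sketch): two local runs, one per allocator build,
# saving pytest-benchmark JSON output to compare by hand.
import subprocess

RUNS = {
    'mimalloc': 'maturin develop --release --features mimalloc',
    'system-alloc': 'maturin develop --release',  # may need different flags if mimalloc is a default feature
}

for name, build_cmd in RUNS.items():
    subprocess.run(build_cmd, shell=True, check=True)
    subprocess.run(
        f'pytest tests/benchmarks --benchmark-json=bench-{name}.json',
        shell=True,
        check=True,
    )
```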