[benchmark] Baseline test #20074
Conversation
Script for integrating BenchmarkDoctor’s check of newly added benchmarks into CI workflow.
@eeckstein Please review 🙏 Can you integrate this into @swift-ci?

Can you include that in `run_smoke_bench`?

I could? But then it would be invoked for each of the optimization levels, right? It should run only once, as it validates only the …

Or do you mean I should add it as another option to the `run_smoke_bench.py` script?

Yes, just add an option and we can change the CI script to pass that option.
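A minimal sketch, assuming `run_smoke_bench.py` uses a plain `argparse` parser, of how such an option could be declared and then forwarded by the CI script; the helper name and help texts are illustrative, not the actual patch:

```python
# Illustrative sketch only: declaring a -check-added flag in
# run_smoke_bench.py's argument parser (helper name is hypothetical).
import argparse


def create_argument_parser():
    parser = argparse.ArgumentParser(description='Smoke-benchmark a pull request.')
    parser.add_argument('-verbose', action='store_true',
                        help='print more details during benchmark analysis')
    # The new opt-in flag discussed above; the CI script would simply
    # append `-check-added` to the invocation it already builds.
    parser.add_argument('-check-added', action='store_true',
                        help='validate newly added benchmarks with BenchmarkDoctor')
    return parser


if __name__ == '__main__':
    args = create_argument_parser().parse_args()
    print(args.check_added, args.verbose)
```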
@eeckstein I've integrated the …

How do you derive the list of added benchmarks? Is it getting the list by invoking `Benchmark_O --list`?
Yes. |
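Based on that exchange, here is a rough sketch of deriving the added benchmarks by diffing the `--list` output of the baseline and branch `Benchmark_O` binaries; the output parsing (header row, benchmark name in the second column) and the binary paths are assumptions:

```python
# Rough sketch: compute the newly added benchmarks by diffing the
# `--list` output of the baseline and branch Benchmark_O binaries.
import subprocess


def benchmark_names(binary):
    """Return the set of benchmark names reported by `<binary> --list`."""
    output = subprocess.check_output([binary, '--list'],
                                     universal_newlines=True)
    names = set()
    for line in output.splitlines()[1:]:        # assumed header row
        columns = line.replace(',', ' ').split()
        if len(columns) >= 2:
            names.add(columns[1])               # assumed name column
    return names


def added_benchmarks(old_binary, new_binary):
    """Benchmarks present in the branch build but not in the baseline."""
    return sorted(benchmark_names(new_binary) - benchmark_names(old_binary))


# Hypothetical usage:
#   added = added_benchmarks('baseline/Benchmark_O', 'branch/Benchmark_O')
```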
@swift-ci benchmark |
Build comment file:

No performance and code size changes

How to read the data
The tables contain differences in performance which are larger than 8% and differences in code size which are larger than 1%. If you see any unexpected regressions, you should consider fixing the regressions before you merge the PR.

Noise: Sometimes the performance results (not code size!) contain false alarms. Unexpected regressions which are marked with '(?)' are probably noise. If you see regressions which you cannot explain, you can try to run the benchmarks again. If regressions still show up, please consult with the performance team (@eeckstein).

Hardware Overview
@swift-ci smoke test |
This PR adds a new `-check-added` option to `run_smoke_bench.py` that finds the added benchmarks and runs an equivalent of `Benchmark_Driver check` on them. It can be run together with the `-verbose` option to output more details during the benchmark analysis.

It outputs `BenchmarkDoctor`'s report with Warnings and Errors if the new benchmarks violate the best practices. `BenchmarkDoctor` currently always returns exit code 0. In the future, we might change this to depend on the presence of ERRORs in the report, if deemed useful, so that violations of good baseline practices could not be committed to the tree.

It depends on `Benchmark_Driver` and `compare_perf_tests.py` being in the same directory.
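For illustration, a hypothetical sketch of the "equivalent of `Benchmark_Driver check`" step, shelling out to the driver for just the added benchmarks; the positional benchmark filter, the `--verbose` flag, and the path handling are assumptions rather than the PR's actual code, which may call into `BenchmarkDoctor` directly:

```python
# Hypothetical sketch: run the equivalent of `Benchmark_Driver check`
# on just the added benchmarks. The exact flags and the positional
# benchmark filter are assumptions, not the PR's actual code.
import os
import subprocess


def check_added_benchmarks(added, verbose=False):
    """Hand the added benchmarks to BenchmarkDoctor via Benchmark_Driver."""
    if not added:
        print('No added benchmarks to check.')
        return 0
    # Assumes Benchmark_Driver lives next to this script, matching the
    # note above about the scripts being in the same directory.
    driver = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                          'Benchmark_Driver')
    command = [driver, 'check']
    if verbose:
        command.append('--verbose')
    command.extend(added)
    # BenchmarkDoctor currently always exits with 0, so ERRORs in its
    # report do not (yet) fail the build.
    return subprocess.call(command)
```

In CI, this path would be exercised by passing `-check-added` (optionally with `-verbose`) to `run_smoke_bench.py`, so the doctor's report shows up in the build log once per PR rather than once per optimization level.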