[libc++] Save benchmark results in a json file #119761

Merged
1 commit merged into llvm:main from review/benchmarks-output-json on Dec 13, 2024

Conversation

@ldionne (Member) commented on Dec 12, 2024

When running a benchmark, also save the benchmark results in a JSON file. That is cheap to do and useful for comparing benchmark results between different runs.

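For illustration, once two runs have each written a benchmark-result.json, they can be compared with a short script. This is a minimal sketch assuming Google Benchmark's usual JSON schema (a top-level "benchmarks" array whose entries have "name" and "real_time" fields); the baseline/ and candidate/ paths are hypothetical.

# Minimal sketch: compare real_time between two benchmark JSON files.
# Assumes Google Benchmark's standard output schema; the paths are hypothetical.
import json

def load_times(path):
    with open(path) as f:
        data = json.load(f)
    return {b["name"]: b["real_time"] for b in data["benchmarks"]}

baseline = load_times("baseline/benchmark-result.json")
candidate = load_times("candidate/benchmark-result.json")

# Report the relative change for every benchmark present in both runs.
for name in sorted(baseline.keys() & candidate.keys()):
    old, new = baseline[name], candidate[name]
    print(f"{name}: {old:.1f} -> {new:.1f} ({(new - old) / old:+.1%})")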
@ldionne ldionne requested a review from a team as a code owner December 12, 2024 21:14
@llvmbot llvmbot added the libc++ libc++ C++ Standard Library. Not GNU libstdc++. Not libc++abi. label Dec 12, 2024
@llvmbot (Member) commented on Dec 12, 2024

@llvm/pr-subscribers-libcxx

Author: Louis Dionne (ldionne)

Changes

When running a benchmark, also save the benchmark results in a JSON file. That is cheap to do and useful for comparing benchmark results between different runs.


Full diff: https://github.com/llvm/llvm-project/pull/119761.diff

1 file affected:

  • (modified) libcxx/utils/libcxx/test/format.py (+1-1)
diff --git a/libcxx/utils/libcxx/test/format.py b/libcxx/utils/libcxx/test/format.py
index f69a7dfedef2d5..59d0fffd378191 100644
--- a/libcxx/utils/libcxx/test/format.py
+++ b/libcxx/utils/libcxx/test/format.py
@@ -348,7 +348,7 @@ def execute(self, test, litConfig):
                 "%dbg(COMPILED WITH) %{cxx} %s %{flags} %{compile_flags} %{benchmark_flags} %{link_flags} -o %t.exe",
             ]
             if "enable-benchmarks=run" in test.config.available_features:
-                steps += ["%dbg(EXECUTED AS) %{exec} %t.exe"]
+                steps += ["%dbg(EXECUTED AS) %{exec} %t.exe --benchmark_out=%T/benchmark-result.json --benchmark_out_format=json"]
             return self._executeShTest(test, litConfig, steps)
         else:
             return lit.Test.Result(

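For context, --benchmark_out and --benchmark_out_format are standard Google Benchmark flags: the binary keeps printing results to the console and additionally writes them to the given file in the requested format. %T is lit's per-test temporary directory (the parent of %t), so each benchmark leaves a benchmark-result.json alongside its other outputs. If a Google Benchmark checkout is at hand, its tools/compare.py script can also diff two such files, roughly:

python tools/compare.py benchmarks baseline.json contender.json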

⚠️ The Python code formatter, darker, found issues in your code. ⚠️

You can test this locally with the following command:
darker --check --diff -r 6a9279ca407132eec848eb5c55c2222ce605df81...162028f16a9ae44c8b841c73f6a761b3236101c7 libcxx/utils/libcxx/test/format.py
View the diff from darker here.
--- format.py	2024-12-12 21:12:48.000000 +0000
+++ format.py	2024-12-12 21:17:46.352449 +0000
@@ -346,11 +346,13 @@
                 )
             steps = [
                 "%dbg(COMPILED WITH) %{cxx} %s %{flags} %{compile_flags} %{benchmark_flags} %{link_flags} -o %t.exe",
             ]
             if "enable-benchmarks=run" in test.config.available_features:
-                steps += ["%dbg(EXECUTED AS) %{exec} %t.exe --benchmark_out=%T/benchmark-result.json --benchmark_out_format=json"]
+                steps += [
+                    "%dbg(EXECUTED AS) %{exec} %t.exe --benchmark_out=%T/benchmark-result.json --benchmark_out_format=json"
+                ]
             return self._executeShTest(test, litConfig, steps)
         else:
             return lit.Test.Result(
                 lit.Test.UNRESOLVED, "Unknown test suffix for '{}'".format(filename)
             )

@ldionne ldionne merged commit 2135bab into llvm:main Dec 13, 2024
57 of 64 checks passed
@ldionne ldionne deleted the review/benchmarks-output-json branch December 13, 2024 19:19