[UR][CI] add manually triggered benchmark action #17088


Merged
1 commit merged on Feb 21, 2025
198 changes: 198 additions & 0 deletions .github/workflows/ur-benchmarks-reusable.yml
@@ -0,0 +1,198 @@
name: Benchmarks Reusable

on:
  workflow_call:
    inputs:
      str_name:
        required: true
        type: string
      pr_no:
        required: true
        # even though this is a number, this is a workaround for issues with
        # reusable workflow calls that result in "Unexpected value '0'" error.
        type: string
      bench_script_params:
        required: false
        type: string
        default: ''
      sycl_config_params:
        required: false
        type: string
        default: ''
      upload_report:
        required: false
        type: boolean
        default: false
      compute_runtime_commit:
        required: false
        type: string
        default: ''

permissions:
  contents: read
  pull-requests: write

jobs:
  bench-run:
    name: Build SYCL, Run Benchmarks
    strategy:
      matrix:
        adapter: [
          {str_name: "${{ inputs.str_name }}",
           sycl_config: "${{ inputs.sycl_config_params }}"
          }
        ]
        build_type: [Release]
        compiler: [{c: clang, cxx: clang++}]

    runs-on: "PVC_PERF"

    steps:
    - name: Add comment to PR
      uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
      if: ${{ always() && inputs.pr_no != 0 }}
      with:
        script: |
          const pr_no = '${{ inputs.pr_no }}';
          const adapter = '${{ matrix.adapter.str_name }}';
          const url = '${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}';
          const params = '${{ inputs.bench_script_params }}';
          const body = `Compute Benchmarks ${adapter} run (with params: ${params}):\n${url}`;

          github.rest.issues.createComment({
            issue_number: pr_no,
            owner: context.repo.owner,
            repo: context.repo.repo,
            body: body
          })

    - name: Checkout SYCL
      uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
      with:
        path: sycl-repo

    # We need to fetch a special ref for the PR's proper merge commit. Note, this ref may be absent if the PR is already merged.
    - name: Fetch PR's merge commit
      if: ${{ inputs.pr_no != 0 }}
      working-directory: ${{github.workspace}}/sycl-repo
      run: |
        git fetch -- https://github.com/${{github.repository}} +refs/pull/${{ inputs.pr_no }}/*:refs/remotes/origin/pr/${{ inputs.pr_no }}/*
        git checkout origin/pr/${{ inputs.pr_no }}/merge
        git rev-parse origin/pr/${{ inputs.pr_no }}/merge

    - name: Install pip packages
      run: |
        pip install --force-reinstall -r ${{github.workspace}}/sycl-repo/unified-runtime/scripts/benchmarks/requirements.txt

    - name: Configure SYCL
      run: >
        python3 sycl-repo/buildbot/configure.py
        -t ${{matrix.build_type}}
        -o ${{github.workspace}}/sycl_build
        --cmake-gen "Ninja"
        --cmake-opt="-DLLVM_INSTALL_UTILS=ON"
        --cmake-opt="-DSYCL_PI_TESTS=OFF"
        --cmake-opt=-DCMAKE_C_COMPILER_LAUNCHER=ccache
        --cmake-opt=-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
        ${{matrix.adapter.sycl_config}}

    - name: Build SYCL
      run: cmake --build ${{github.workspace}}/sycl_build -j $(nproc)

    # We need a complete installed UR for compute-benchmarks.
    - name: Configure UR
      run: >
        cmake -DCMAKE_BUILD_TYPE=${{matrix.build_type}}
        -S${{github.workspace}}/sycl-repo/unified-runtime
        -B${{github.workspace}}/ur_build
        -DCMAKE_INSTALL_PREFIX=${{github.workspace}}/ur_install
        -DUR_BUILD_TESTS=OFF
        -DUR_BUILD_ADAPTER_L0=ON
        -DUR_BUILD_ADAPTER_L0_V2=ON
        -DUMF_DISABLE_HWLOC=ON

    - name: Build UR
      run: cmake --build ${{github.workspace}}/ur_build -j $(nproc)

    - name: Install UR
      run: cmake --install ${{github.workspace}}/ur_build

    - name: Compute core range
      run: |
        # Compute the core range for the first NUMA node; the second node is for UMF jobs.
        # Skip the first 4 cores - the kernel is likely to schedule more work on these.
        CORES="$(lscpu | awk '
          /NUMA node0 CPU|On-line CPU/ {line=$0}
          END {
            split(line, a, " ")
            split(a[4], b, ",")
            sub(/^0/, "4", b[1])
            print b[1]
          }')"
        echo "Selected cores: $CORES"
        echo "CORES=$CORES" >> $GITHUB_ENV

        ZE_AFFINITY_MASK=0
        echo "ZE_AFFINITY_MASK=$ZE_AFFINITY_MASK" >> $GITHUB_ENV
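The awk pipeline in the "Compute core range" step is compact; a standalone sketch shows what it extracts. The `lscpu` line below is made up for illustration, since the real PVC_PERF runner's topology is not part of this PR:

```shell
# Illustrative input only: a plausible `lscpu` NUMA line; the real runner
# may report different core counts.
sample='NUMA node0 CPU(s):   0-55,112-167'

# Keep the last matching line, take its 4th whitespace-separated field,
# keep only the first comma-separated range, and replace a leading 0 with 4
# (i.e. drop the first four cores of NUMA node 0).
cores="$(printf '%s\n' "$sample" | awk '
  /NUMA node0 CPU|On-line CPU/ {line=$0}
  END {
    split(line, a, " ")
    split(a[4], b, ",")
    sub(/^0/, "4", b[1])
    print b[1]
  }')"
echo "$cores"   # 4-55
```

Note the leading-zero substitution only does what the comment promises when the node's range actually starts at core 0.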

    - name: Run benchmarks
      working-directory: ${{ github.workspace }}
      id: benchmarks
      run: >
        taskset -c "${{ env.CORES }}" ${{ github.workspace }}/sycl-repo/unified-runtime/scripts/benchmarks/main.py
        ~/llvm_bench_workdir
        --sycl ${{ github.workspace }}/sycl_build
        --ur ${{ github.workspace }}/ur_install
        --adapter ${{ matrix.adapter.str_name }}
        --compare baseline
        --compute-runtime ${{ inputs.compute_runtime_commit }}
        --build-igc
        ${{ inputs.upload_report && '--output-html' || '' }}
        ${{ inputs.pr_no != 0 && '--output-markdown' || '' }}
        ${{ inputs.bench_script_params }}

    - name: Print benchmark results
      run: |
        cat ${{ github.workspace }}/benchmark_results.md || true

    - name: Add comment to PR
      uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
      if: ${{ always() && inputs.pr_no != 0 }}
      with:
        script: |
          let markdown = ""
          try {
            const fs = require('fs');
            markdown = fs.readFileSync('benchmark_results.md', 'utf8');
          } catch(err) {
          }

          const pr_no = '${{ inputs.pr_no }}';
          const adapter = '${{ matrix.adapter.str_name }}';
          const url = '${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}';
          const test_status = '${{ steps.benchmarks.outcome }}';
          const job_status = '${{ job.status }}';
          const params = '${{ inputs.bench_script_params }}';
          const body = `Benchmarks ${adapter} run (${params}):\n${url}\nJob status: ${job_status}. Test status: ${test_status}.\n ${markdown}`;

          github.rest.issues.createComment({
            issue_number: pr_no,
            owner: context.repo.owner,
            repo: context.repo.repo,
            body: body
          })
Comment on lines +178 to +183

Contributor: For the future, do we want this to be a comment, or a summary of the job (like, e.g., https://github.com/intel/llvm/actions/runs/13438348841)?

Contributor (Author): Good question. I think a comment is more visible. Right now we do not fail the job if the scripts think there's a regression, so if the status is hidden in the job logs, people might forget to check.

In the near future we also plan on uploading a set of HTML charts per PR (basically this: https://oneapi-src.github.io/unified-runtime/benchmark_results.html, but with the PR marked on the chart, so you can easily compare against previous nightly runs). Again, I think people are more likely to use this functionality if it's right there in the comment.


    - name: Rename benchmark results file
      if: ${{ always() && inputs.upload_report }}
      run: mv benchmark_results.html benchmark_results_${{ inputs.pr_no }}.html

    - name: Upload HTML report
      if: ${{ always() && inputs.upload_report }}
      uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
      with:
        path: benchmark_results_${{ inputs.pr_no }}.html
        key: benchmark-results-${{ inputs.pr_no }}-${{ matrix.adapter.str_name }}-${{ github.run_id }}
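Saving the report through `actions/cache/save` is only half of the hand-off: a consumer workflow has to pull it back with the matching restore action. The step below is a hypothetical sketch, not part of this PR; since the consumer does not know the producing `run_id`, it relies on `restore-keys` prefix matching to pick up the most recent entry:

```yaml
# Hypothetical consumer step; path and key prefix must match the save step above.
- name: Restore HTML report
  uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
  with:
    path: benchmark_results_${{ inputs.pr_no }}.html
    key: benchmark-results-${{ inputs.pr_no }}-
    restore-keys: |
      benchmark-results-${{ inputs.pr_no }}-
```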

    - name: Get information about platform
      if: ${{ always() }}
      run: ${{github.workspace}}/sycl-repo/unified-runtime/.github/scripts/get_system_info.sh
53 changes: 53 additions & 0 deletions .github/workflows/ur-benchmarks.yml
@@ -0,0 +1,53 @@
name: Benchmarks

on:
  workflow_dispatch:
    inputs:
      str_name:
        description: Adapter
        type: choice
        required: true
        default: 'level_zero'
        options:
          - level_zero
          - level_zero_v2
      pr_no:
        description: PR number (0 is sycl main branch)
        type: number
        required: true
      bench_script_params:
        description: Benchmark script arguments
        type: string
        required: false
        default: ''
      sycl_config_params:
        description: Extra params for SYCL configuration
        type: string
        required: false
        default: ''
      compute_runtime_commit:
        description: 'Compute Runtime commit'
        type: string
        required: false
        default: ''
      upload_report:
        description: 'Upload HTML report'
        type: boolean
        required: false
        default: false

permissions:
  contents: read
  pull-requests: write

jobs:
  manual:
    name: Compute Benchmarks
    uses: ./.github/workflows/ur-benchmarks-reusable.yml
    with:
      str_name: ${{ inputs.str_name }}
      pr_no: ${{ inputs.pr_no }}
      bench_script_params: ${{ inputs.bench_script_params }}
      sycl_config_params: ${{ inputs.sycl_config_params }}
      compute_runtime_commit: ${{ inputs.compute_runtime_commit }}
      upload_report: ${{ inputs.upload_report }}
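Since the workflow above is `workflow_dispatch`-triggered, it can also be started from the command line with the GitHub CLI. A sketch only; the field values are examples, and it assumes an authenticated `gh` session with write access to intel/llvm:

```shell
# Illustrative invocation of the manually triggered Benchmarks workflow.
gh workflow run ur-benchmarks.yml \
  -R intel/llvm \
  -f str_name=level_zero \
  -f pr_no=17088 \
  -f upload_report=true
```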
2 changes: 1 addition & 1 deletion unified-runtime/scripts/benchmarks/benches/oneapi.py
@@ -11,7 +11,7 @@

 class OneAPI:
     # random unique number for benchmark oneAPI installation
-    ONEAPI_BENCHMARK_INSTANCE_ID = 98765
+    ONEAPI_BENCHMARK_INSTANCE_ID = 987654

     def __init__(self):
         self.oneapi_dir = os.path.join(options.workdir, "oneapi")
4 changes: 1 addition & 3 deletions unified-runtime/scripts/benchmarks/main.py
@@ -262,9 +262,7 @@ def main(directory, additional_env_vars, save_name, compare_names, filter):
         compare_names.append(saved_name)

     if options.output_html:
-        html_content = generate_html(
-            history.runs, "oneapi-src/unified-runtime", compare_names
-        )
+        html_content = generate_html(history.runs, "intel/llvm", compare_names)

         with open("benchmark_results.html", "w") as file:
             file.write(html_content)
2 changes: 1 addition & 1 deletion unified-runtime/scripts/benchmarks/options.py
@@ -37,7 +37,7 @@ class Options:
     build_compute_runtime: bool = False
     extra_ld_libraries: list[str] = field(default_factory=list)
     extra_env_vars: dict = field(default_factory=dict)
-    compute_runtime_tag: str = "24.52.32224.10"
+    compute_runtime_tag: str = "25.05.32567.12"
     build_igc: bool = False
     current_run_name: str = "This PR"
4 changes: 4 additions & 0 deletions unified-runtime/scripts/benchmarks/requirements.txt
@@ -0,0 +1,4 @@
matplotlib==3.9.2
mpld3==0.5.10
dataclasses-json==0.6.7
PyYAML==6.0.1