[SYCL][Graph] Add specification for kernel binary updates #14896


Merged: 7 commits, Dec 6, 2024

Conversation

fabiomestre
Contributor

@fabiomestre fabiomestre commented Aug 1, 2024

Adds the kernel binary update feature to the SYCL graph specification. This introduces a new dynamic_command_group class which can be used to update the command-group function of kernel nodes in a graph.
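The idea behind `dynamic_command_group` can be pictured with a small standalone C++ sketch. This is not the actual SYCL extension API; all names below (`DynamicCommandGroup`, `CommandGroupFn`, `execute`) are illustrative placeholders. The concept: a node owns a list of alternative command-group functions, and an active index selects which one runs on the next execution.

```cpp
#include <cstddef>
#include <functional>
#include <stdexcept>
#include <utility>
#include <vector>

// Illustrative stand-in for a command-group function (CGF).
using CommandGroupFn = std::function<void()>;

// Sketch of the dynamic command-group concept: a node holds several
// alternative CGFs and an index choosing which one is executed.
class DynamicCommandGroup {
public:
    explicit DynamicCommandGroup(std::vector<CommandGroupFn> cgfs)
        : cgfs_(std::move(cgfs)) {
        if (cgfs_.empty())
            throw std::invalid_argument("need at least one CGF");
    }

    // Select which CGF the next execution of the node will run.
    void set_active_index(std::size_t index) {
        if (index >= cgfs_.size())
            throw std::out_of_range("index out of range");
        active_ = index;
    }

    std::size_t get_active_index() const { return active_; }

    // Executing the node runs only the currently active CGF.
    void execute() const { cgfs_[active_](); }

private:
    std::vector<CommandGroupFn> cgfs_;
    std::size_t active_ = 0;
};
```

In the real extension it is the graph runtime, not user code, that executes the active command group, and switching the index drives an update of the executable graph between submissions.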

Implemented in:

fabiomestre and others added 2 commits October 23, 2024 10:55
Adds the kernel binary update feature to the SYCL graph specification.
This introduces a new dynamic_command_group class which can be used
to update the command-group function of kernel nodes in a graph.
@EwanC EwanC force-pushed the fabio/kernel_binary_update branch from fcf08a6 to 119ba28 Compare October 23, 2024 11:48
EwanC added a commit to reble/llvm that referenced this pull request Oct 31, 2024
Implement Dynamic Command-Group feature specified in
PR [[SYCL][Graph] Add specification for kernel binary updates](intel#14896)

This feature enables updating `ur_kernel_handle_t` objects in graph nodes
between executions as well as parameters and execution range of nodes.

This functionality is currently supported on CUDA & HIP, which are used
for testing in the new E2E tests. Level Zero support will follow
shortly, resulting in the removal of the `XFAIL` labels from the E2E
tests.

The code for adding nodes to a graph has been refactored to split out
edge verification, and the marking of memory objects used in a node,
into separate helper functions. This allows the path for adding a
command-group node to apply these functions to each command group in
the list before creating the node itself.

The `dynamic_parameter_impl` code has also been refactored so the code
is shared for updating a dynamic parameter used in both a regular kernel
node and a dynamic command-group node.

See the addition to the design doc for further details on the
implementation.
martygrant pushed a commit that referenced this pull request Nov 8, 2024
Implement Dynamic Command-Group feature specified in PR [[SYCL][Graph]
Add specification for kernel binary
updates](#14896). This feature enables
updating `ur_kernel_handle_t` objects in graph nodes between executions
as well as parameters and execution range of nodes.

Points to note in this change:

* The functionality is currently supported on CUDA & HIP, which are used
for testing in the new E2E tests. Level Zero support will follow
shortly, resulting in the removal of the `XFAIL` labels (with tracker
numbers) from the E2E tests.

* The code for adding nodes to a graph has been refactored to split out
edge verification, and the marking of memory objects used in a node,
into separate helper functions. This allows the path for adding a
command-group node to apply these functions to each command group in
the list before creating the node itself.

* The `dynamic_parameter_impl` code has also been refactored so the code
is shared for updating a dynamic parameter used in both a regular kernel
node and a dynamic command-group node.

* There is no longer a need for the `handler::setNDRangeUsed()` API now
that graph kernel nodes can update between kernels using `sycl::nd_range`
and `sycl::range`. The functionality in this method has been turned into
a no-op; however, removing the method is an ABI-breaking change, so its
removal remains guarded by the `__INTEL_PREVIEW_BREAKING_CHANGES` macro.

See the addition to the design doc for further details on the
implementation.
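The shared-update refactor described above can be sketched in plain C++. These are hypothetical names only; `DynamicParameter`, `KernelNode`, and `registerWith` stand in for the actual `dynamic_parameter_impl` machinery. The design idea: one dynamic parameter records every node/argument binding, and a single update path patches them all, whether the binding came from a regular kernel node or from a command group inside a dynamic command-group node.

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in for a kernel node whose arguments can be patched.
struct KernelNode {
    std::vector<int> args;
    void setArg(std::size_t index, int value) {
        if (index >= args.size())
            args.resize(index + 1);
        args[index] = value;
    }
};

// Sketch of the shared-update idea: one dynamic parameter records every
// (node, argument-index) pair it is bound to, and a single update() call
// patches all of them, regardless of which kind of node registered it.
class DynamicParameter {
public:
    void registerWith(KernelNode& node, std::size_t argIndex) {
        regs_.push_back({&node, argIndex});
    }
    void update(int newValue) {
        for (auto& r : regs_)
            r.node->setArg(r.argIndex, newValue);
    }
private:
    struct Registration {
        KernelNode* node;
        std::size_t argIndex;
    };
    std::vector<Registration> regs_;
};
```

Centralizing the update loop like this is what lets regular kernel nodes and dynamic command-group nodes share one code path instead of duplicating the patching logic.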
@EwanC EwanC marked this pull request as ready for review November 28, 2024 09:48
@EwanC EwanC requested a review from a team as a code owner November 28, 2024 09:48
Contributor

@Bensuo Bensuo left a comment


LGTM, just some minor comments about wording.

Contributor

@hdelan hdelan left a comment


LGTM apart from a few questions.

@EwanC EwanC requested a review from a team as a code owner December 2, 2024 09:08
@EwanC EwanC requested a review from steffenlarsen December 2, 2024 09:08
Also refine spec wording based on PR feedback
@EwanC EwanC force-pushed the fabio/kernel_binary_update branch from 33ae1f4 to 3edf870 Compare December 2, 2024 09:53
@EwanC EwanC requested a review from gmlueck December 3, 2024 08:51
@EwanC
Contributor

EwanC commented Dec 4, 2024

ping @intel/llvm-reviewers-runtime for required approval, also @gmlueck to review spec language

@EwanC
Contributor

EwanC commented Dec 6, 2024

@intel/llvm-gatekeepers Can this be merged please

@sommerlukas
Contributor

> @intel/llvm-gatekeepers Can this be merged please

Merging would associate Fabio's private mail with this commit, which is against policy:

This commit will be authored by [email protected]

@fabiomestre would need to update their Github profile to use the correct mail address for commits.

@sommerlukas
Contributor


After further consideration, our policy was mainly to avoid commits using the anonymous [email protected] mail addresses, which is not the case here.

I'm merging this PR.

@sommerlukas sommerlukas merged commit 95a858d into intel:sycl Dec 6, 2024
15 checks passed
7 participants