Skip to content

[mlir] minor documentation fix in GPUTransformOps.td #121157

Merged
merged 1 commit on Dec 26, 2024
mlir/include/mlir/Dialect/GPU/TransformOps/GPUTransformOps.td (10 changes: 5 additions & 5 deletions)
@@ -168,13 +168,13 @@ def MapNestedForallToThreads :
 
 #### Return modes:
 
-This operation ignores non-gpu_launch ops and drops them in the return.
+This operation ignores non-`gpu_launch` ops and drops them in the return.
 
 If any scf.forall with tensors is found, the transform definitely
 fails.
 
-If all the scf.forall operations with gpu.thread mapping contained
-within the LaunchOp referred to by the `target` PDLOperation lower to GPU
+If all the `scf.forall` operations with gpu.thread mapping contained
+within the `LaunchOp` referred to by the `target` handle lower to GPU
 properly, the transform succeeds. Otherwise the transform definitely
 fails.
 
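For context, a minimal transform-script sketch of how `MapNestedForallToThreads` is typically driven (not part of this patch; the `block_dims` values and handle names are illustrative):

```mlir
transform.sequence failures(propagate) {
^bb1(%arg0: !transform.any_op):
  // Match the gpu.launch op whose nested scf.forall loops carry
  // gpu.thread mappings.
  %launch = transform.structured.match ops{["gpu.launch"]} in %arg0
    : (!transform.any_op) -> !transform.any_op
  // Map the nested scf.forall ops to GPU threads within the launch.
  transform.gpu.map_nested_forall_to_threads %launch block_dims = [32, 4, 1]
    : (!transform.any_op) -> !transform.any_op
}
```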
@@ -277,8 +277,8 @@ def MapForallToBlocks :
 If any scf.forall with tensors is found, the transform definitely
 fails.
 
-If all the scf.forall operations contained within the LaunchOp
-referred to by the `target` PDLOperation lower to GPU properly, the
+If all the `scf.forall` operations contained within the LaunchOp
+referred to by the `target` handle lower to GPU properly, the
 transform succeeds. Otherwise the transform definitely fails.
 
 The returned handle points to the same LaunchOp operand, consuming it and
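Similarly, a minimal sketch of driving `MapForallToBlocks`, again illustrative rather than part of this patch; here the target is a `func.func` and the `generate_gpu_launch` attribute asks the transform to wrap the top-level `scf.forall` in a fresh `gpu.launch`:

```mlir
transform.sequence failures(propagate) {
^bb1(%arg0: !transform.any_op):
  %func = transform.structured.match ops{["func.func"]} in %arg0
    : (!transform.any_op) -> !transform.any_op
  // Map the top-level scf.forall to GPU blocks; the returned handle
  // points to the resulting gpu.launch, consuming the target handle.
  %launch = transform.gpu.map_forall_to_blocks %func generate_gpu_launch
    : (!transform.any_op) -> !transform.any_op
}
```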