[mlir][docs] Update documentation for canonicalize. #99753

Merged 1 commit on Jul 22, 2024
39 changes: 38 additions & 1 deletion mlir/docs/Canonicalization.md
@@ -33,6 +33,10 @@ together.

Some important things to think about w.r.t. canonicalization patterns:

* The goal of canonicalization is to make subsequent analyses and
optimizations more effective. Therefore, performance improvements are not
necessary for canonicalization.

* Pass pipelines should not rely on the canonicalizer pass for correctness.
They should work correctly with all instances of the canonicalization pass
removed.
@@ -51,6 +55,39 @@ Some important things to think about w.r.t. canonicalization patterns:
* It is always good to eliminate operations entirely when possible, e.g. by
folding known identities (like "x + 0 = x").

* Patterns with expensive running time (i.e. those with O(n) complexity) or
complicated cost models don't belong in canonicalization: since the
algorithm is executed iteratively until a fixed point is reached, we want
patterns that execute quickly (in particular in their matching phase).

* Canonicalization shouldn't lose the semantics of the original operation:
the original information should always be recoverable from the transformed
IR.

For example, a pattern that transforms

```
%transpose = linalg.transpose
    ins(%input : tensor<1x2x3xf32>)
    outs(%init1 : tensor<2x1x3xf32>)
    permutation = [1, 0, 2]
%out = linalg.transpose
    ins(%transpose : tensor<2x1x3xf32>)
    outs(%init2 : tensor<3x1x2xf32>)
    permutation = [2, 1, 0]
```

to

```
%out = linalg.transpose
    ins(%input : tensor<1x2x3xf32>)
    outs(%init2 : tensor<3x1x2xf32>)
    permutation = [2, 0, 1]
```

is a good canonicalization pattern because it removes a redundant operation,
making other analyses and optimizations more efficient.
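As a sanity check on the example above, the composed permutation can be worked out by hand. The following is a small Python sketch (not part of the docs or of MLIR) that models the `linalg.transpose` convention, where output dimension `i` takes input dimension `permutation[i]`:

```python
def apply_permutation(shape, perm):
    """Output dim i is input dim perm[i] (linalg.transpose convention)."""
    return [shape[p] for p in perm]

def compose(p1, p2):
    """Single permutation equivalent to applying p1 first, then p2."""
    return [p1[i] for i in p2]

shape = [1, 2, 3]
first = [1, 0, 2]    # 1x2x3 -> 2x1x3
second = [2, 1, 0]   # 2x1x3 -> 3x1x2
combined = compose(first, second)
print(combined)                            # [2, 0, 1]
print(apply_permutation(shape, combined))  # [3, 1, 2]
```

This confirms that the fused transpose with `permutation = [2, 0, 1]` produces the same `3x1x2` result type as the original pair of transposes.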

## Globally Applied Rules

These transformations are applied to all levels of IR:
@@ -189,7 +226,7 @@ each of the operands, returning the corresponding constant attribute. These
operands are those that implement the `ConstantLike` trait. If any of the
operands are non-constant, a null `Attribute` value is provided instead. For
example, if MyOp provides three operands [`a`, `b`, `c`], but only `b` is
constant then `adaptor` will return Attribute() for `getA()` and `getC()`,
and the constant value of `b` for `getB()`.
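To make this behavior concrete, here is a minimal Python sketch (a toy model, not MLIR's actual C++ `FoldAdaptor` API) of an adaptor that exposes constant operands as values and non-constant operands as null:

```python
class FoldAdaptor:
    """Toy model of a fold adaptor: operands backed by ConstantLike ops
    are visible as attribute values; all others appear as None, which
    stands in for a null Attribute."""

    def __init__(self, operands):
        # operands: dict mapping operand name to a constant value or None.
        self._operands = operands

    def get(self, name):
        return self._operands[name]

# MyOp has operands [a, b, c]; only b is defined by a ConstantLike op.
adaptor = FoldAdaptor({"a": None, "b": 42, "c": None})
print(adaptor.get("a"))  # None (non-constant -> null Attribute)
print(adaptor.get("b"))  # 42   (constant value of b)
```

A fold implementation can then bail out early (returning nothing) whenever an operand it needs comes back null, which keeps the fold cheap in the common non-constant case.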

Also shown above is the use of `OpFoldResult`. This class represents the possible