@@ -66,8 +66,7 @@ vectorizeConvolution(RewriterBase &rewriter, LinalgOp convOp,
 ///   * inferred from the static dims in the input and output tensors.
 /// Bails out if:
 ///   * vector sizes are not user-provided, and
-///   * at least one dim is dynamic (in both the input and output tensors),
-///   bails out.
+///   * at least one dim is dynamic (in both the input and output tensors).
 ///
 /// Before:
 ///   !t_in_type = tensor<1x2x3xf32>
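For context (not part of the patch), a minimal sketch of the two situations this comment distinguishes; the op choice (linalg.depthwise_conv_1d_nwc_wc) and all shapes are illustrative assumptions:

  // Fully static shapes: vector sizes can be inferred from the input and
  // output tensors, so no user-provided sizes are required.
  %0 = linalg.depthwise_conv_1d_nwc_wc
         {dilations = dense<1> : tensor<1xi64>, strides = dense<1> : tensor<1xi64>}
         ins(%in, %filter : tensor<1x8x3xf32>, tensor<2x3xf32>)
         outs(%init : tensor<1x7x3xf32>) -> tensor<1x7x3xf32>

  // Dynamic width and no user-provided vector sizes: the sizes cannot be
  // inferred, so vectorization bails out.
  %1 = linalg.depthwise_conv_1d_nwc_wc
         {dilations = dense<1> : tensor<1xi64>, strides = dense<1> : tensor<1xi64>}
         ins(%in_dyn, %filter : tensor<1x?x3xf32>, tensor<2x3xf32>)
         outs(%init_dyn : tensor<1x?x3xf32>) -> tensor<1x?x3xf32>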
@@ -1918,15 +1917,15 @@ vectorizeInsertSliceOpPrecondition(tensor::InsertSliceOp sliceOp,
     return failure();
 
   // Get the pad value.
-  // TransferReadOp (which is used to vectorize InsertSliceOp, requires a scalar
-  // padding value. Note that:
-  //    * for in-bounds access, the value is actually irrelevant.
-  //  There are 2 cases in which xfer.read accesses are known to be in-bounds:
+  // TransferReadOp (which is used to vectorize InsertSliceOp), requires a
+  // scalar padding value. Note that:
+  //    * for in-bounds accesses,
+  //  the value is actually irrelevant. There are 2 cases in which xfer.read
+  //  accesses are known to be in-bounds:
   //  1. The source shape is static (output vector sizes would be based on
   //      the source shape and hence all memory accesses would be in-bounds),
-  //  2. Masking is used (output vector sizes would be user-provided, in which
-  //     case it is assumed that all memory accesses are in-bounds). This
-  //     remains a TODO.
+  //  2. Masking is used, i.e. the output vector sizes are user-provided. In
+  //     this case it is safe to assume that all memory accesses are in-bounds.
   //
   // When the value is not known and not needed, use 0. Otherwise, bail out.
   Value padValue = getStaticPadVal(sliceOp);
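As a rough sketch of what the comment refers to (shapes and values are assumptions, not taken from the patch): vectorizing a tensor.insert_slice of a statically-shaped source produces a vector.transfer_read / vector.transfer_write pair, and transfer_read always takes a scalar padding value even though, for the in-bounds read below, that value is never observed:

  %c0 = arith.constant 0 : index
  // Required by vector.transfer_read; irrelevant here because the read is in-bounds.
  %pad = arith.constant 0.0 : f32

  // Read the statically-shaped source; every access is in-bounds.
  %v = vector.transfer_read %src[%c0, %c0], %pad {in_bounds = [true, true]}
         : tensor<2x3xf32>, vector<2x3xf32>

  // Write the vector into the destination at the insert_slice offsets.
  %res = vector.transfer_write %v, %dest[%c0, %c0] {in_bounds = [true, true]}
           : vector<2x3xf32>, tensor<4x5xf32>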