Commit 9c48a04
[mlir][tensor] Refine the semantics of createPadHighOp (llvm#109667)
Refine `createPadHighOp` so that the output tensor is required to be statically shaped. This is to prevent the current behaviour, which is incorrect:

> // If `type` has dynamic dimensions the padding width is set to zero.

The actual padding width should be set to `%new_dim - %old_dim`, where `%new_dim` and `%old_dim` are defined via e.g. the `tensor.dim` op applied to the output and input tensors, respectively.

This PR is an attempt to clarify the semantics surrounding dynamic shapes in preparation for adding support for scalable vectors to the pack/unpack logic in Tensor/Linalg (dynamic shapes are what we use to model scalable (*) sizes at the Tensor/MemRef level).

(*) Scalable as in Arm's Scalable Vector Extension (SVE).
1 parent 6d11494 commit 9c48a04

2 files changed: +12 −7 lines changed

mlir/include/mlir/Dialect/Tensor/Utils/Utils.h

Lines changed: 4 additions & 4 deletions
```diff
@@ -14,10 +14,10 @@
 namespace mlir {
 namespace tensor {
 
-// Return a PadOp that pads `source` to `type` size where the static
-// sizes are assumed to be greater than the dynamic sizes. If `type` has dynamic
-// dimensions the padding width is set to zero. The op performs "high" padding
-// (i.e. it adds trailing padding values until the desired size is met).
+// Return a PadOp that pads `source` to `type` size. Output sizes (from `type`)
+// are assumed to be static and greater than the potentially dynamic input sizes
+// (from `source`). The op performs "high" padding (i.e. it adds trailing padding
+// values until the desired size is met).
 PadOp createPadHighOp(RankedTensorType type, Value source, Value pad,
                       bool nofold, Location loc, OpBuilder &builder);
```

mlir/lib/Dialect/Tensor/Utils/Utils.cpp

Lines changed: 8 additions & 3 deletions
```diff
@@ -24,12 +24,17 @@ using namespace mlir::tensor;
 PadOp mlir::tensor::createPadHighOp(RankedTensorType type, Value source,
                                     Value pad, bool nofold, Location loc,
                                     OpBuilder &b) {
+
+  // TODO: Either relax or turn this into a failure
+  assert(!ShapedType::isDynamicShape(type.getShape()) &&
+         "The output type is dynamic - that's not supported ATM.");
+
+  // Init "low" and "high" padding values ("low" is kept as is, "high" is
+  // computed below).
   SmallVector<OpFoldResult> low(type.getRank(), b.getIndexAttr(0));
   SmallVector<OpFoldResult> high(type.getRank(), b.getIndexAttr(0));
+
   for (const auto &en : enumerate(type.getShape())) {
-    // Pad only the static dimensions of the result tensor type.
-    if (ShapedType::isDynamic(en.value()))
-      continue;
     // Compute the padding width.
     AffineExpr d0;
     bindDims(b.getContext(), d0);
```
