Commit 75baf62

[mlir][sparse] fixed doc formatting

Indentation seems to have an impact on website layout.

Reviewed By: grosul1
Differential Revision: https://reviews.llvm.org/D107403

1 parent cb2a2ba commit 75baf62

File tree: 2 files changed (+40 −40 lines)
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorBase.td (1 addition, 1 deletion)

```diff
@@ -29,7 +29,7 @@ def SparseTensor_Dialect : Dialect {
     sparse code automatically was pioneered for dense linear algebra by
     [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized
     to tensor algebra by [Kjolstad17,Kjolstad20] in the Sparse Tensor
-    Algebra Compiler (TACO) project (see http://tensor-compiler.org/).
+    Algebra Compiler (TACO) project (see http://tensor-compiler.org).
 
     The MLIR implementation closely follows the "sparse iteration theory"
     that forms the foundation of TACO. A rewriting rule is applied to each
```

mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td (39 additions, 39 deletions)

Every hunk in this file is whitespace-only: each removed description line is re-added with reduced indentation, which is what affected the website layout. The page extraction dropped leading whitespace, so the exact indentation shown in the hunks is reconstructed rather than verbatim.
```diff
@@ -56,20 +56,20 @@ def SparseTensor_ConvertOp : SparseTensor_Op<"convert", [SameOperandsAndResultTy
     Results<(outs AnyTensor:$dest)> {
   string summary = "Converts between different tensor types";
   string description = [{
-      Converts one sparse or dense tensor type to another tensor type. The rank
-      and dimensions of the source and destination types must match exactly,
-      only the sparse encoding of these types may be different. The name `convert`
-      was preferred over `cast`, since the operation may incur a non-trivial cost.
-
-      When converting between two different sparse tensor types, only explicitly
-      stored values are moved from one underlying sparse storage format to
-      the other. When converting from an unannotated dense tensor type to a
-      sparse tensor type, an explicit test for nonzero values is used. When
-      converting to an unannotated dense tensor type, implicit zeroes in the
-      sparse storage format are made explicit. Note that the conversions can have
-      non-trivial costs associated with them, since they may involve elaborate
-      data structure transformations. Also, conversions from sparse tensor types
-      into dense tensor types may be infeasible in terms of storage requirements.
+    Converts one sparse or dense tensor type to another tensor type. The rank
+    and dimensions of the source and destination types must match exactly,
+    only the sparse encoding of these types may be different. The name `convert`
+    was preferred over `cast`, since the operation may incur a non-trivial cost.
+
+    When converting between two different sparse tensor types, only explicitly
+    stored values are moved from one underlying sparse storage format to
+    the other. When converting from an unannotated dense tensor type to a
+    sparse tensor type, an explicit test for nonzero values is used. When
+    converting to an unannotated dense tensor type, implicit zeroes in the
+    sparse storage format are made explicit. Note that the conversions can have
+    non-trivial costs associated with them, since they may involve elaborate
+    data structure transformations. Also, conversions from sparse tensor types
+    into dense tensor types may be infeasible in terms of storage requirements.
 
     Examples:
 
```
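The `Examples:` block for `convert` falls just outside the hunk shown above. As a hedged sketch only, a dense-to-sparse conversion could look like the following, where the `#CSR` encoding attribute and the operand name `%dense` are hypothetical and not part of this commit:

```mlir
// Hypothetical CSR encoding attribute; not taken from the diff above.
#CSR = #sparse_tensor.encoding<{
  dimLevelType = [ "dense", "compressed" ]
}>

// Convert a dense tensor to its sparsely stored counterpart; per the
// description above, an explicit test for nonzero values is used.
%sparse = sparse_tensor.convert %dense
   : tensor<64x64xf64> to tensor<64x64xf64, #CSR>
```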
````diff
@@ -88,15 +88,15 @@ def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [NoSideEffect]>,
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract pointers array at given dimension from a tensor";
   let description = [{
-      Returns the pointers array of the sparse storage scheme at the
-      given dimension for the given sparse tensor. This is similar to the
-      `memref.buffer_cast` operation in the sense that it provides a bridge
-      between a tensor world view and a bufferized world view. Unlike the
-      `memref.buffer_cast` operation, however, this sparse operation actually
-      lowers into a call into a support library to obtain access to the
-      pointers array.
+    Returns the pointers array of the sparse storage scheme at the
+    given dimension for the given sparse tensor. This is similar to the
+    `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    pointers array.
 
-      Example:
+    Example:
 
     ```mlir
     %1 = sparse_tensor.pointers %0, %c1
````
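The `pointers` example above is truncated at the hunk boundary. Completed with a hypothetical type signature, inferred by analogy with the `values` example later in this diff rather than taken from the commit, it would read:

```mlir
// Extract the pointers array for dimension 1 of a CSR tensor; the
// result is a 1-D memref of index values (signature inferred, not
// verbatim from this commit).
%1 = sparse_tensor.pointers %0, %c1
   : tensor<64x64xf64, #CSR> to memref<?xindex>
```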
````diff
@@ -112,15 +112,15 @@ def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [NoSideEffect]>,
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract indices array at given dimension from a tensor";
   let description = [{
-      Returns the indices array of the sparse storage scheme at the
-      given dimension for the given sparse tensor. This is similar to the
-      `memref.buffer_cast` operation in the sense that it provides a bridge
-      between a tensor world view and a bufferized world view. Unlike the
-      `memref.buffer_cast` operation, however, this sparse operation actually
-      lowers into a call into a support library to obtain access to the
-      indices array.
+    Returns the indices array of the sparse storage scheme at the
+    given dimension for the given sparse tensor. This is similar to the
+    `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    indices array.
 
-      Example:
+    Example:
 
     ```mlir
     %1 = sparse_tensor.indices %0, %c1
````
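Like the `pointers` example, the `indices` example is cut off at the hunk boundary. A hedged completion, with the result type inferred by analogy with the `values` example below rather than taken from this commit:

```mlir
// Extract the indices array for dimension 1 of a CSR tensor; the
// result is a 1-D memref of index values (signature inferred, not
// verbatim from this commit).
%1 = sparse_tensor.indices %0, %c1
   : tensor<64x64xf64, #CSR> to memref<?xindex>
```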
````diff
@@ -136,15 +136,15 @@ def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [NoSideEffect]>,
     Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
   let summary = "Extract numerical values array from a tensor";
   let description = [{
-      Returns the values array of the sparse storage scheme for the given
-      sparse tensor, independent of the actual dimension. This is similar to
-      the `memref.buffer_cast` operation in the sense that it provides a bridge
-      between a tensor world view and a bufferized world view. Unlike the
-      `memref.buffer_cast` operation, however, this sparse operation actually
-      lowers into a call into a support library to obtain access to the
-      values array.
-
-      Example:
+    Returns the values array of the sparse storage scheme for the given
+    sparse tensor, independent of the actual dimension. This is similar to
+    the `memref.buffer_cast` operation in the sense that it provides a bridge
+    between a tensor world view and a bufferized world view. Unlike the
+    `memref.buffer_cast` operation, however, this sparse operation actually
+    lowers into a call into a support library to obtain access to the
+    values array.
+
+    Example:
 
     ```mlir
     %1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
````

0 commit comments