
Commit 770f540

[SYCL][Matrix] Add documentation about new matrix features (#6157)
The two new features are `joint_matrix_fill` and `get_wi_data` for piece-wise operations.
1 parent 8210083

1 file changed: +108 -53 lines changed

sycl/doc/extensions/experimental/sycl_ext_oneapi_matrix.asciidoc

@@ -53,7 +53,7 @@ value to determine which of the extension's APIs the implementation supports.
 |======================
 |Value |Description
 |1 |Initial extension implementation on Intel AMX. Base features are supported.
-|2 |Initial extension JIT implementation on Intel AMX and DPAS. load, store, mad and the query interface are supported
+|2 |Initial extension JIT implementation on Intel AMX and DPAS. load, store, mad, fill, piece-wise operations, and the query interface are supported
 |======================

 ## New `joint_matrix` class
@@ -165,6 +165,90 @@ namespace sycl::ext::oneapi::experimental::matrix {
 The matrix multiply and add function performs the multiply operation on the matrices `A` and `B`, accumulates the result with `C`, and returns the result.


+#### Matrix Initialization: `joint_matrix_fill`
+The interface presented above assumes that all the matrices are loaded directly from memory. The new function `joint_matrix_fill` makes it possible to multiply a matrix that is not loaded from memory but is instead initialized directly in registers. On Intel AMX, if the initialization constant is zero, this maps to the `_tile_zero` intrinsic:
+
+```c++
+namespace sycl::ext::oneapi::experimental::matrix {
+template <typename Group, typename T, size_t NumRows, size_t NumCols,
+          matrix_layout L, typename Tv>
+void joint_matrix_fill(Group sg, joint_matrix<T, NumRows, NumCols, L, Group> &m, Tv v);
+}
+```
+IMPORTANT: In the current implementation, only the subgroup scope is supported.
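For illustration only, a minimal usage sketch of `joint_matrix_fill` as declared above; the tile shape names `tM`, `tN` and the sub-group `sg` are borrowed from the full example later in this document:

```c++
// Sketch: zero-initialize an accumulator tile instead of loading it from
// memory. On Intel AMX, a zero fill can map to the _tile_zero intrinsic.
joint_matrix<int32_t, tM, tN> tC(sg);
joint_matrix_fill(sg, tC, 0); // only the subgroup scope is supported today
```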
+
+#### Element Indexing and Piece-Wise Operations
+##### Background
+Besides matrix multiply and add, this extension aims to make it possible to perform piece-wise operations on matrices in a SPMD manner. The mechanisms recommended for such piece-wise operations depend upon which of the following classes the operation falls into:
+
+Class 1- Element-wise operations, where the same operation is performed on every element of the matrix, such that the operation can be performed without knowledge of the position of the element within the matrix. Activation functions, or adding a constant value to every element of the matrix, are two examples.
+
+Class 2- Piece-wise operations, where the operation depends on the element index within the matrix, or where the operation takes multiple elements as operands (such as a sum of all elements in a row). The quantization needed for conversion between low-precision types like `int8_t` and `fp32` uses piece-wise operations.
+
+// We explored multiple options to enable this feature in the matrix interface: 1) Allowing non-restrictive element indexing on the matrix elements would result in slow indexing on the GPU. 2) Operator overloading can represent only element-wise operations, not operations on pieces (row, column, diagonal, etc.) of the matrix. 3) Providing specific functions for these piece-wise operations can cover some of the functions we know of today, like the ones involved in quantization, but it does not generalize to any problem that may occur in the future.
+
+##### Explicit conversion with mapping from SIMD to SPMD
+The data elements in a joint_matrix are distributed or shared across the work-items in the Group in an implementation-defined way. There is no fixed allocation of the matrix elements of a `joint_matrix` instance to the WIs comprising the group used to instantiate it. For instance, in the case of the AMX backend, the matrix is a shared entity among the work items, because the AMX tile that holds the matrix data is a 2D register shared among them. The partitioning among the WIs is therefore implementation defined. However, it is still necessary to be able to assign WIs to specific elements of the matrix. In order to perform piece-wise operations in a general and efficient way, we provide a conversion function from the joint_matrix domain, owned by a group of work items, to the portion owned by each work item. This enables a WI to perform piece-wise operations on the matrix within the SYCL SPMD programming model.
+
+We introduce a new function `get_wi_data` that provides a view of the portion of the matrix owned by the current WI, so modifying `wi_data` also modifies the corresponding elements of the joint matrix. The indexing provided inside the `wi_data` class accesses only the portion of the current WI and returns a `wi_element`. The latter holds a reference to the original joint_matrix that `wi_data` was constructed from. Users can use the `=` operator to update the element of the `joint_matrix` represented by the `wi_element` after an element-wise operation.
+
+Using `get_wi_data`, it is not possible to know which portions of data are owned by each thread in the group, as this is implementation defined and changes from one backend to another. For general piece-wise operations, like a sum of the rows of a matrix, the WI data to joint matrix mapping coordinates information must be known in order to reason about the matrix view and extract the relevant piece. But for element-wise operations, where the same operation is performed on all the elements of the matrix, having all the WIs in the group apply the operation inside a loop iterating over the `length` of `wi_data` covers the whole matrix.
+
+Therefore, this extension currently supports only class 1 operations, because the mapping between `get_wi_data` and `joint_matrix` elements does not need to be known for them. General piece-wise operations will be supported in the future through a new API that conveys the mapping from the `joint_matrix` domain to the WI domain (see the section "WI data to joint matrix mapping coordinates information for piece-wise operations" for more information).
+
+Also, note that `get_wi_data` cannot return a fixed-size array, because the length of the WI portion is a runtime variable, for the following reasons:
+
+1- The main compilation mode of SYCL is JIT compilation, and the partitioning among WIs is implementation defined.
+
+2- The SG size is not fixed (unlike in the CUDA backend, where the warp size is always 32).
+
+3- AMX has the flexibility of allowing variable sizes on the matrix (`dynamic_extent`).
+
+In the case of the CUDA backend, which is AOT compiled for SYCL and where the SG size is known and fixed at 32, an additional `marray` capability will be provided.
+
+The code listing below shows a synopsis of these new APIs.
+
+```c++
+namespace sycl::ext::oneapi::experimental::matrix {
+template <typename T, size_t NumRows, size_t NumCols,
+          matrix_layout Layout = matrix_layout::row_major,
+          typename Group = sycl::sub_group>
+struct joint_matrix {
+  wi_data<T, NumRows, NumCols, Layout, Group> get_wi_data();
+};
+template <typename T, size_t NumRows, size_t NumCols, matrix_layout Layout, typename Group>
+class wi_data {
+  size_t length();
+  wi_element<T, NumRows, NumCols, Layout, Group> operator[](size_t i);
+};
+template <typename T, size_t NumRows, size_t NumCols,
+          matrix_layout Layout = matrix_layout::row_major,
+          typename Group = sycl::sub_group>
+class wi_element {
+  operator T();
+  wi_element &operator=(const T &rhs);
+};
+}
+```
+
+In the following example, `wi_data_c` is a reference to the WI-owned portion of the joint matrix `matC`. As such, `wi_data_c[i] OP rhs` updates the corresponding matrix element in the joint_matrix `matC`.
+Vectorization along the subgroup dimension is enabled automatically to vectorize the contiguous portion of the matrix.
+
+
+```c++
+auto wi_data_c = matC.get_wi_data();
+for (int i = 0; i < wi_data_c.length(); i++)
+  wi_data_c[i] *= alpha; // Note that the index "i" is into the vector owned by a WI, not into the matrix C
+```
+
+IMPORTANT: In the current implementation, only the subgroup scope is supported.
+
+IMPORTANT: The WI data to joint matrix mapping coordinates information is not implemented yet.
+
+IMPORTANT: Since the current tensorcores implementation is AOT, it is possible to know how many elements are owned by each WI at compile time. In this case, `wi_data` can be of type `marray`. An additional interface will be provided for the tensorcores AOT backend.
+
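As a further class 1 sketch (an illustration assuming the `wi_data` API above, with a hypothetical floating-point `matC`), an activation function such as ReLU uses the same converged loop:

```c++
// Sketch: ReLU as a class 1 element-wise operation through get_wi_data().
// wi_element converts to T on reads and supports operator=, so the
// assignment below writes back into matC itself.
auto wi_data_c = matC.get_wi_data();
for (int i = 0; i < wi_data_c.length(); i++) {
  float x = wi_data_c[i];               // read through operator T()
  wi_data_c[i] = (x > 0.0f) ? x : 0.0f; // write through operator=
}
```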
 ## VNNI/Packed Layout
 Intel AMX and DPAS compute assume the register for the B tile (src1) to be in VNNI format, as they need 32 bits of K-data in A and B to be contiguous in memory.
 The VNNI blocking factor is 2 in the case of 16-bit types and 4 in the case of 8-bit types. While the current implementation assumes that the matrix has already been packed by the user for performance reasons, the layout information is needed to inform the implementation about this transform. The following example illustrates how a matrix in `row_major` layout is transformed into the `packed_b` layout for a 16-bit type.
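The packed layout example itself lies outside this hunk; as an illustrative sketch only (the helper `pack_b_vnni` and its buffers are hypothetical, not part of the extension), the `row_major` to `packed_b` transform for a 16-bit type with blocking factor 2 amounts to:

```c++
#include <cstdint>
#include <vector>

// Sketch: pack a K x N row-major B matrix of 16-bit elements into VNNI
// (packed_b) form with blocking factor 2 (K assumed even). Pairs of
// K-consecutive elements become contiguous, giving a (K/2) x (N*2) layout.
std::vector<std::uint16_t> pack_b_vnni(const std::vector<std::uint16_t> &B,
                                       int K, int N) {
  std::vector<std::uint16_t> packed(static_cast<size_t>(K) * N);
  for (int k = 0; k < K; k += 2)
    for (int n = 0; n < N; ++n) {
      packed[(k / 2) * (N * 2) + 2 * n]     = B[k * N + n];
      packed[(k / 2) * (N * 2) + 2 * n + 1] = B[(k + 1) * N + n];
    }
  return packed;
}
```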
@@ -225,12 +309,15 @@ q.parallel_for(nd_range<2>(G, L), [=](nd_item<2> item)
     // users need to specify the packed_b layout
     joint_matrix<int8_t, tK, tN, packed_b> tB(sg);
     joint_matrix<int32_t, tM, tN> tC(sg);
-    joint_matrix_load(sg, tC, memC + sg_startx * tM * N + sg_starty/SG_SIZE*tN, N, matrix_layout::row_major);
+    joint_matrix_fill(sg, tC, 0);
     for (int k = 0; k < K; k += tk) {
       joint_matrix_load(sg, tA, memA + sg_startx * tM * K + k, K, matrix_layout::row_major);
       joint_matrix_load(sg, tB, memB + k * N + sg_starty/SG_SIZE*tN*4, N*4, matrix_layout::packed_b); // VNNI
       tC = joint_matrix_mad(sg, tA, tB, tC);
     }
+    auto wi_data_c = tC.get_wi_data();
+    for (int i = 0; i < wi_data_c.length(); i++)
+      wi_data_c[i] *= alpha; // The index "i" is into the vector owned by a WI, not into the matrix C
     joint_matrix_store(sg, tC, memC + sg_startx * tM * N + sg_starty/SG_SIZE*tN, N, matrix_layout::row_major);
 }).wait();
 ```
@@ -509,71 +596,38 @@ joint_matrix<int, msize, nsize> sub_c(sg);

 ## Future-looking API

-### Matrix Initialization: `joint_matrix_fill`
-The current interface presented above assumes that all the matrices are directly loaded from memory. This new function called `joint_matrix_fill` makes it possible to multiply a matrix which is not directly loaded from memory but rather initialized directly in the register. On Intel AMX, if the initialization constant is zero, this would map to `_tile_zero` intrinsic:
-
-```c++
-namespace sycl::ext::oneapi::experimental::matrix {
-template <typename Group, typename T, size_t NumRows, size_t NumCols,
-          matrix_layout L>
-void joint_matrix_fill(Group sg, joint_matrix<T, NumRows, NumCols, L, Group> &m, const T& v);
-}
-```
-
-### Element Indexing and Element-Wise Operations
-There are multiple options on how to enable this feature.
+### Memory scope
+The current experimental API uses `joint_` semantics to define the memory scope of the matrix. The long term solution is to use the proposed link:../supported/sycl_ext_oneapi_local_memory.asciidoc[`group_local_memory` extension] to allocate the matrix in local memory associated with a SYCL group, as shown in the example below.

-#### Option 1: Non-restrictive element indexing
-Allowing non-restrictive element indexing on the matrix element as shown below would result into slow indexing on the GPU.
-Besides, it will rely heavily on spirv and compiler vectorization:

 ```c++
-matrix<int, 8, 8> C;
-for (int i = 0; i < 8; i++)
-  for (int j = 0; j < 8; j++)
-    C(i,j) *= alpha; //Align with mdspan
-```
-#### Option2: Restrictive fast element indexing
-In the DPC++ context, the expectation is that all element-wise operations will happen in a converged control path by all work items in the group.
-Option 2 proposes a new set of element-wise operations by overloading existing operations to work on `matrix` object. An example is shown below:
-```c++
-joint_matrix<ONEAPI::sub_group, int, 8, 8> C(sg);
-C *= alpha;
+multi_ptr<matrix<T>, address_space::local_space> tA_ptr = group_local_memory<matrix<sub_group, int8_t, tM, tN>>(sg);
 ```
-The problem with this option is that it is restrictive to a very limited set of operations.
-
-#### Option3: Restrictive conversion in the interface from SIMD to SPMD
-Nvidia wmma interface added a new member to `fragment` class to designate the WI owned part of the matrix.
-While this provides fast element indexing on the GPU compared to the non-restrictive option, the user does not know the mapping of the owned data to the original matrix.
-However using the `mma` ptx instructions as opposed to the `wmma` ptx instructions the mapping is known. Knowing this mapping is important for the user to implement new operations like sum of rows of a matrix for quantized algorithms.
+We did not utilize this extension for this matrix API version because sub-group local memory is not yet well defined in {dpcpp}. Moreover, the representation of this notion in LLVM IR and SPIR-V is not clear yet.

-#### proposal: Explicit conversion in the interface from SIMD to SPMD
-We introduce a new function `get_wi_data` that provides any portion of the matrix that the user wants but in a SPMD array object:.
+### WI data to joint matrix mapping coordinates information for piece-wise operations
+The indexing provided inside the `wi_data` class accesses only the portion of the current WI. It is not possible to know the location or coordinates of this portion in the original matrix. This coordinates mapping is implementation defined and changes from one backend to another. For general piece-wise operations, like a sum of the rows of a matrix, the WI data to joint matrix mapping coordinates information is needed to reason about the matrix view.
+With joint matrix, we want to write, as much as possible, one code base that runs on different backends. If backend X states, for instance, that a WI owns exactly one row of the matrix, code written against that assumption will work only on that backend and on that version of the hardware. Hardware and implementations change: the same WI may later own only half of the row because the SG size or the number of hardware units increased.

 ```c++
-namespace sycl::ext::oneapi::experimental::matrix {
-template <typename Group, typename T, size_t NumRows, size_t NumCols, matrix_layout L>
-marray<T, n_rows * n_cols> get_wi_data(joint_matrix<T, NumRows, NumCols, L, Group> &m, size_t row_index,
-                                       size_t col_index, size_t n_rows, size_t n_cols);
+auto data = C.get_wi_data();
+for (int i = 0; i < data.length(); ++i) {
+  sum_of_local_rows[row] += data[i]; // "row" is unknown here: the mapping is implementation defined
 }
 ```

-Example where each WI gets 1 column:
-```c++
-marray<T,msize> wi_C = get_wi_data(C, 0, wi_idx, msize, 1, matrix_layout::row_major);
-for (int i = 0; i < msize; i++)
-  row_sum += wi_C[i];
-```


-### Memory scope
-The current experimental API uses `joint_` semantics to define the memory scope of the matrix. The long term solution is to use the proposed link:../supported/sycl_ext_oneapi_local_memory.asciidoc[`group_local_memory` extension] to allocate the matrix in local memory associated with a SYCL group as shown in the example below.
-
+We want to keep the joint matrix code backward compatible when implementations or hardware change. To that end, instead of hard-coding this mapping, we write code that is general and backend/target-agnostic, especially in the JIT compilation mode of SYCL. This is possible by querying this mapping, so that the code does not have to change from one version to another.

+For the mapping problem, since this mapping is implementation defined, one of the proposals is to add runtime functions like:
 ```c++
-multi_ptr<matrix<T>, address_space::local_space> tA_ptr = group_local_memory<matrix<sub_group, int8_t, tM, tN>>(sg);
+auto data = C.get_wi_data();
+for (int i = 0; i < data.length(); ++i) {
+  auto [row, col] = data[i].get_coord();
+  sum_of_local_rows[row] += data[i];
+}
 ```
-We did not utilize this extension for this matrix API version because sub-group local memory is not yet well defined in {dpcpp}. Moreover, the representation of this notion in LLVM IR and SPIR-V is not clear yet.


 ## Open Questions
@@ -585,7 +639,7 @@ We did not utilize this extension for this matrix API version because sub-group
 - In the future-looking APIs, `get_wi_data` (which is currently under design) returns an owned object. Should this instead return a view object, to make sure the original matrix C is changed after its slices are modified?

 ## TODO List
-- Add support for fill matrix and element-wise operations features
+- Add WI data to joint matrix mapping coordinates information for piece-wise operations. This will be added as part of the query or as new methods of the 'get_wi_data' class.
 - Add a 'matrix_use' parameter to the matrix to distinguish between matrix A, B, and the matrix accumulator. This is necessary for supporting VNNI and transpose transforms
 - Change the names of the default sizes in the query from defaultM, defaultN, defaultK to M, N, K
 - Change the type of `scope` in the query interface to be able to return more than one value. This will be useful in the event we support other scopes, like workgroup, besides subgroups
@@ -599,4 +653,5 @@ We did not utilize this extension for this matrix API version because sub-group
 |Rev |Date |Author |Changes
 |1 |2021-04-13 |Dounia Khaldi |Initial public working draft.
 |2 |2021-10-05 |Dounia Khaldi |JIT implementation on both Intel AMX and DPAS
+|3 |2022-05-16 |Dounia Khaldi |Add matrix fill and piece-wise operations support
 |======================
