
Commit 397a7ce

shoumikhin authored and facebook-github-bot committed
Fix @param doc comments. (#2565)
Summary: .
Reviewed By: kirklandsign
Differential Revision: D55205164
1 parent 12b5324 commit 397a7ce

5 files changed: +26 -26 lines changed


examples/models/llama2/custom_ops/op_sdpa.cpp

Lines changed: 8 additions & 8 deletions
@@ -702,21 +702,21 @@ Tensor& flash_attention_kernel_out(
 
 /*
 Input params
-@params[in]: q_projected: Projected query with query weights.
+@param[in] q_projected Projected query with query weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: k_projected: Projected query with key weights.
+@param[in] k_projected Projected query with key weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: v_projected: Projected query with value weights.
+@param[in] v_projected Projected query with value weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: key_cache: Cache of previous k_projected.
+@param[in] key_cache Cache of previous k_projected.
 Format [n_layers, batch size, max_seq_len, num heads, head dim]
-@params[in]: key_cache: Cache of previous v_projected.
+@param[in] key_cache Cache of previous v_projected.
 Format [n_layers, batch size, max_seq_len, num heads, head dim]
 ....
-@params[in] layer_id: which layer this call belongs to.
+@param[in] layer_id which layer this call belongs to.
 Used to updated appropriate entry of kv cache
-@params[in]: start_pos: sequence position
-@params[in]: seq_len: Seq length. e.g. seq_len dim of q_projected.
+@param[in] start_pos sequence position
+@param[in] seq_len Seq length. e.g. seq_len dim of q_projected.
 */
 Tensor& sdpa_with_kv_cache_out(
     RuntimeContext& ctx,

extension/aten_util/aten_bridge.cpp

Lines changed: 2 additions & 2 deletions
@@ -124,8 +124,8 @@ c10::ScalarType execuTorchtoTorchScalarType(torch::executor::ScalarType type) {
  * assumption , a strong one, that, such memory is arena allocated whose
  * lifetime is tied to model's lifetime, we assume that memory is not leaked as
  * it is freed when arean is freed.
- * @param[in] aten_tensor: Input at::Tensor
- * @param[in/out] mutable_et: ETensor whose underlying memory now will alias to
+ * @param[in] aten_tensor Input at::Tensor
+ * @param[in/out] mutable_et ETensor whose underlying memory now will alias to
  * aten_tensor
  */
 void alias_etensor_to_attensor(

extension/aten_util/aten_bridge.h

Lines changed: 4 additions & 4 deletions
@@ -26,16 +26,16 @@ torch::executor::ScalarType torchToExecuTorchScalarType(caffe2::TypeMeta type);
 c10::ScalarType execuTorchtoTorchScalarType(torch::executor::ScalarType type);
 
 /*
- * @param[in] aten_tensor: Input at::Tensor
- * @param[in/out] mutable_et: ETensor whose underlying memory now will alias to
+ * @param[in] aten_tensor Input at::Tensor
+ * @param[in,out] mutable_et ETensor whose underlying memory now will alias to
  * aten_tensor
  */
 void alias_etensor_to_attensor(at::Tensor& at, torch::executor::Tensor& et);
 
 /*
- * @param[in] et: ETensor whose underlying memory now will alias to returned
+ * @param[in] et ETensor whose underlying memory now will alias to returned
  * output tensor
- * @param[ret] aten_tensor: output at::Tensor
+ * @param[ret] aten_tensor output at::Tensor
  * Notes:
  * It is owned by the caller of alias_attensor_to_etensor.
  * Lifetime of tensor meta must be >= to that of the returned tensor since

kernels/portable/cpu/util/transpose_util.h

Lines changed: 5 additions & 5 deletions
@@ -38,19 +38,19 @@ namespace {
 * Increments an N dimensional index like x[0,0,0] to x[0, 0, 1] to x[0, 0, 2]
 * to x[0, 1, 0] to x[0, 1, 1] etc...
 *
-* @param index: An array of the same size as sizes. This stores the "counter"
+* @param index An array of the same size as sizes. This stores the "counter"
 * being incremented.
 *
-* @param new_sizes: The output tensor dimensions. Allows us to compute the
+* @param new_sizes The output tensor dimensions. Allows us to compute the
 * offset into the input tensor.
 *
-* @param non_one_indices: A list of indices into index that contain non-1
+* @param non_one_indices A list of indices into index that contain non-1
 * dimension values. This allows us to eliminate an O(dim) factor from the
 * runtime in case many dimensions have a value of 1.
 *
-* @param new_strides: Strides corresponding to new_sizes.
+* @param new_strides Strides corresponding to new_sizes.
 *
-* @param offset: The computed offset to index into the input tensor's memory
+* @param offset The computed offset to index into the input tensor's memory
 * array.
 */
 inline void increment_index_and_offset(

runtime/core/portable_type/tensor_impl.h

Lines changed: 7 additions & 7 deletions
@@ -86,16 +86,16 @@ class TensorImpl {
 TensorImpl() = delete;
 
 /**
-* @param type: The type of the data (int, float, bool).
-* @param dim: Number of dimensions, and the length of the `sizes` array.
-* @param sizes: Sizes of the tensor at each dimension. Must contain `dim`
+* @param type The type of the data (int, float, bool).
+* @param dim Number of dimensions, and the length of the `sizes` array.
+* @param sizes Sizes of the tensor at each dimension. Must contain `dim`
 * entries.
-* @param data: Pointer to the data, whose size is determined by `type`,
+* @param data Pointer to the data, whose size is determined by `type`,
 * `dim`, and `sizes`. The tensor will not own this memory.
-* @param dim_order: Order in which dimensions are laid out in memory.
-* @param strides: Strides of the tensor at each dimension. Must contain `dim`
+* @param dim_order Order in which dimensions are laid out in memory.
+* @param strides Strides of the tensor at each dimension. Must contain `dim`
 * entries.
-* @param dynamism: The mutability of the shape of the tensor.
+* @param dynamism The mutability of the shape of the tensor.
 */
 TensorImpl(
     ScalarType type,
