Fix @param doc comments. #2565

Closed · wants to merge 1 commit
16 changes: 8 additions & 8 deletions examples/models/llama2/custom_ops/op_sdpa.cpp
@@ -702,21 +702,21 @@ Tensor& flash_attention_kernel_out(

 /*
 Input params
-@params[in]: q_projected: Projected query with query weights.
+@param[in] q_projected Projected query with query weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: k_projected: Projected query with key weights.
+@param[in] k_projected Projected query with key weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: v_projected: Projected query with value weights.
+@param[in] v_projected Projected query with value weights.
 Format [n_layers, batch size, seq_len, num heads, head dim]
-@params[in]: key_cache: Cache of previous k_projected.
+@param[in] key_cache Cache of previous k_projected.
 Format [n_layers, batch size, max_seq_len, num heads, head dim]
-@params[in]: key_cache: Cache of previous v_projected.
+@param[in] key_cache Cache of previous v_projected.
 Format [n_layers, batch size, max_seq_len, num heads, head dim]
 ....
-@params[in] layer_id: which layer this call belongs to.
+@param[in] layer_id which layer this call belongs to.
 Used to updated appropriate entry of kv cache
-@params[in]: start_pos: sequence position
-@params[in]: seq_len: Seq length. e.g. seq_len dim of q_projected.
+@param[in] start_pos sequence position
+@param[in] seq_len Seq length. e.g. seq_len dim of q_projected.
 */
 Tensor& sdpa_with_kv_cache_out(
     RuntimeContext& ctx,
4 changes: 2 additions & 2 deletions extension/aten_util/aten_bridge.cpp
@@ -124,8 +124,8 @@ c10::ScalarType execuTorchtoTorchScalarType(torch::executor::ScalarType type) {
 * assumption , a strong one, that, such memory is arena allocated whose
 * lifetime is tied to model's lifetime, we assume that memory is not leaked as
 * it is freed when arean is freed.
- * @param[in] aten_tensor: Input at::Tensor
- * @param[in/out] mutable_et: ETensor whose underlying memory now will alias to
+ * @param[in] aten_tensor Input at::Tensor
+ * @param[in/out] mutable_et ETensor whose underlying memory now will alias to
 * aten_tensor
 */
 void alias_etensor_to_attensor(
8 changes: 4 additions & 4 deletions extension/aten_util/aten_bridge.h
@@ -26,16 +26,16 @@ torch::executor::ScalarType torchToExecuTorchScalarType(caffe2::TypeMeta type);
 c10::ScalarType execuTorchtoTorchScalarType(torch::executor::ScalarType type);

 /*
- * @param[in] aten_tensor: Input at::Tensor
- * @param[in/out] mutable_et: ETensor whose underlying memory now will alias to
+ * @param[in] aten_tensor Input at::Tensor
+ * @param[in,out] mutable_et ETensor whose underlying memory now will alias to
 * aten_tensor
 */
 void alias_etensor_to_attensor(at::Tensor& at, torch::executor::Tensor& et);

 /*
- * @param[in] et: ETensor whose underlying memory now will alias to returned
+ * @param[in] et ETensor whose underlying memory now will alias to returned
 * output tensor
- * @param[ret] aten_tensor: output at::Tensor
+ * @param[ret] aten_tensor output at::Tensor
 * Notes:
 * It is owned by the caller of alias_attensor_to_etensor.
 * Lifetime of tensor meta must be >= to that of the returned tensor since
10 changes: 5 additions & 5 deletions kernels/portable/cpu/util/transpose_util.h
@@ -38,19 +38,19 @@ namespace {
 * Increments an N dimensional index like x[0,0,0] to x[0, 0, 1] to x[0, 0, 2]
 * to x[0, 1, 0] to x[0, 1, 1] etc...
 *
- * @param index: An array of the same size as sizes. This stores the "counter"
+ * @param index An array of the same size as sizes. This stores the "counter"
 * being incremented.
 *
- * @param new_sizes: The output tensor dimensions. Allows us to compute the
+ * @param new_sizes The output tensor dimensions. Allows us to compute the
 * offset into the input tensor.
 *
- * @param non_one_indices: A list of indices into index that contain non-1
+ * @param non_one_indices A list of indices into index that contain non-1
 * dimension values. This allows us to eliminate an O(dim) factor from the
 * runtime in case many dimensions have a value of 1.
 *
- * @param new_strides: Strides corresponding to new_sizes.
+ * @param new_strides Strides corresponding to new_sizes.
 *
- * @param offset: The computed offset to index into the input tensor's memory
+ * @param offset The computed offset to index into the input tensor's memory
 * array.
 */
 inline void increment_index_and_offset(
14 changes: 7 additions & 7 deletions runtime/core/portable_type/tensor_impl.h
@@ -86,16 +86,16 @@ class TensorImpl {
 TensorImpl() = delete;

 /**
- * @param type: The type of the data (int, float, bool).
- * @param dim: Number of dimensions, and the length of the `sizes` array.
- * @param sizes: Sizes of the tensor at each dimension. Must contain `dim`
+ * @param type The type of the data (int, float, bool).
+ * @param dim Number of dimensions, and the length of the `sizes` array.
+ * @param sizes Sizes of the tensor at each dimension. Must contain `dim`
 * entries.
- * @param data: Pointer to the data, whose size is determined by `type`,
+ * @param data Pointer to the data, whose size is determined by `type`,
 * `dim`, and `sizes`. The tensor will not own this memory.
- * @param dim_order: Order in which dimensions are laid out in memory.
- * @param strides: Strides of the tensor at each dimension. Must contain `dim`
+ * @param dim_order Order in which dimensions are laid out in memory.
+ * @param strides Strides of the tensor at each dimension. Must contain `dim`
 * entries.
- * @param dynamism: The mutability of the shape of the tensor.
+ * @param dynamism The mutability of the shape of the tensor.
 */
 TensorImpl(
     ScalarType type,