Update and fix docs (namespaces, consistency) #6185


Merged (1 commit) on Oct 17, 2024
docs/source/Doxyfile (3 changes: 2 additions & 1 deletion)

@@ -943,7 +943,8 @@ WARN_LOGFILE =
 # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
 # Note: If this tag is empty the current directory is searched.

-INPUT = ../runtime/executor/memory_manager.h \
+INPUT = ../devtools/bundled_program/bundled_program.h \
+        ../runtime/executor/memory_manager.h \
         ../runtime/executor/method.h \
         ../runtime/executor/method_meta.h \
         ../runtime/executor/program.h \
docs/source/build-run-coreml.md (5 changes: 2 additions & 3 deletions)

@@ -147,11 +147,10 @@ libsqlite3.tbd

 7. Update the code to load the program from the Application's bundle.
 ``` objective-c
-using namespace torch::executor;
-
 NSURL *model_url = [NSBundle.mainBundle URLForResource:@"mv3_coreml_all" withExtension:@"pte"];

-Result<util::FileDataLoader> loader = util::FileDataLoader::from(model_url.path.UTF8String);
+Result<executorch::extension::FileDataLoader> loader =
+    executorch::extension::FileDataLoader::from(model_url.path.UTF8String);
 ```

 8. Use [Xcode](https://developer.apple.com/documentation/xcode/building-and-running-an-app#Build-run-and-debug-your-app) to deploy the application on the device.
docs/source/bundled-io.md (30 changes: 15 additions & 15 deletions)

@@ -201,51 +201,51 @@ This stage mainly focuses on executing the model with the bundled inputs and
 ### Get ExecuTorch Program Pointer from `BundledProgram` Buffer
 We need the pointer to the ExecuTorch program to do the execution. To unify the process of loading and executing `BundledProgram` and Program flatbuffer, we create an API:

-:::{dropdown} `GetProgramData`
+:::{dropdown} `get_program_data`

 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::GetProgramData
+.. doxygenfunction:: ::executorch::bundled_program::get_program_data
 ```
 :::

-Here's an example of how to use the `GetProgramData` API:
+Here's an example of how to use the `get_program_data` API:
 ```c++
 // Assume that the user has read the contents of the file into file_data using
 // whatever method works best for their application. The file could contain
 // either BundledProgram data or Program data.
 void* file_data = ...;
 size_t file_data_len = ...;

-// If file_data contains a BundledProgram, GetProgramData() will return a
+// If file_data contains a BundledProgram, get_program_data() will return a
 // pointer to the Program data embedded inside it. Otherwise it will return
 // file_data, which already pointed to Program data.
 const void* program_ptr;
 size_t program_len;
-status = torch::executor::bundled_program::GetProgramData(
+status = executorch::bundled_program::get_program_data(
     file_data, file_data_len, &program_ptr, &program_len);
 ET_CHECK_MSG(
     status == Error::Ok,
-    "GetProgramData() failed with status 0x%" PRIx32,
+    "get_program_data() failed with status 0x%" PRIx32,
     status);
 ```

 ### Load Bundled Input to Method
-To execute the program on the bundled input, we need to load the bundled input into the method. Here we provide an API called `torch::executor::bundled_program::LoadBundledInput`:
+To execute the program on the bundled input, we need to load the bundled input into the method. Here we provide an API called `executorch::bundled_program::load_bundled_input`:

-:::{dropdown} `LoadBundledInput`
+:::{dropdown} `load_bundled_input`

 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::LoadBundledInput
+.. doxygenfunction:: ::executorch::bundled_program::load_bundled_input
 ```
 :::

 ### Verify the Method's Output.
-We call `torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput` to verify the method's output with bundled expected outputs. Here are the details of this API:
+We call `executorch::bundled_program::verify_method_outputs` to verify the method's output with bundled expected outputs. Here are the details of this API:

-:::{dropdown} `VerifyResultWithBundledExpectedOutput`
+:::{dropdown} `verify_method_outputs`

 ```{eval-rst}
-.. doxygenfunction:: torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput
+.. doxygenfunction:: ::executorch::bundled_program::verify_method_outputs
 ```
 :::

@@ -266,13 +266,13 @@ ET_CHECK_MSG(
     method.error());

 // Load testset_idx-th input in the buffer to plan
-status = torch::executor::bundled_program::LoadBundledInput(
+status = executorch::bundled_program::load_bundled_input(
     *method,
     program_data.bundled_program_data(),
     FLAGS_testset_idx);
 ET_CHECK_MSG(
     status == Error::Ok,
-    "LoadBundledInput failed with status 0x%" PRIx32,
+    "load_bundled_input failed with status 0x%" PRIx32,
     status);

 // Execute the plan
@@ -283,7 +283,7 @@ ET_CHECK_MSG(
     status);

 // Verify the result.
-status = torch::executor::bundled_program::VerifyResultWithBundledExpectedOutput(
+status = executorch::bundled_program::verify_method_outputs(
     *method,
     program_data.bundled_program_data(),
     FLAGS_testset_idx,
docs/source/concepts.md (8 changes: 4 additions & 4 deletions)

@@ -26,7 +26,7 @@ The goal of ATen dialect is to capture users’ programs as faithfully as possible

 ## ATen mode

-ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to portable mode, which uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) and related types, such as `torch::executor::ScalarType`.
+ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related types, such as `ScalarType`, from the PyTorch core. This is in contrast to ETensor mode, which uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) and related types, such as `executorch::runtime::etensor::ScalarType`.
 - ATen kernels that rely on the full `at::Tensor` API are usable in this configuration.
 - ATen kernels tend to do dynamic memory allocation and often have extra flexibility (and thus overhead) to handle cases not needed by mobile/embedded clients, e.g., CUDA support, sparse tensor support, and dtype promotion.
 - Note: ATen mode is currently a WIP.

@@ -244,10 +244,10 @@ Kernels that support a subset of tensor dtypes and/or dim orders.

 Parts of a model may be delegated to run on an optimized backend. The partitioner splits the graph into the appropriate sub-networks and tags them for delegation.

-## Portable mode (lean mode)
+## ETensor mode

-Portable mode uses ExecuTorch’s smaller implementation of tensor (`torch::executor::Tensor`) along with related types (`torch::executor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
-- `torch::executor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
+ETensor mode uses ExecuTorch’s smaller implementation of tensor (`executorch::runtime::etensor::Tensor`) along with related types (`executorch::runtime::etensor::ScalarType`, etc.). This is in contrast to ATen mode, which uses the ATen implementation of Tensor (`at::Tensor`) and related types (`ScalarType`, etc.)
+- `executorch::runtime::etensor::Tensor`, also known as ETensor, is a source-compatible subset of `at::Tensor`. Code written against ETensor can build against `at::Tensor`.
 - ETensor does not own or allocate memory on its own. To support dynamic shapes, kernels can allocate Tensor data using the MemoryAllocator provided by the client.

 ## Portable kernels
docs/source/etdump.md (2 changes: 1 addition & 1 deletion)

@@ -15,7 +15,7 @@ Generating an ETDump is a relatively straightforward process. Users can follow these steps:

 2. ***Create*** an instance of the ETDumpGen class and pass it into the `load_method` call that is invoked in the runtime.

 ```C++
-torch::executor::ETDumpGen etdump_gen = torch::executor::ETDumpGen();
+executorch::etdump::ETDumpGen etdump_gen;
 Result<Method> method =
     program->load_method(method_name, &memory_manager, &etdump_gen);
 ```
docs/source/executorch-runtime-api-reference.rst (16 changes: 8 additions & 8 deletions)

@@ -11,25 +11,25 @@ For detailed information on how APIs evolve and the deprecation process, please
 Model Loading and Execution
 ---------------------------

-.. doxygenclass:: executorch::runtime::DataLoader
+.. doxygenclass:: executorch::runtime::Program
    :members:

-.. doxygenclass:: executorch::runtime::MemoryAllocator
+.. doxygenclass:: executorch::runtime::Method
    :members:

-.. doxygenclass:: executorch::runtime::HierarchicalAllocator
+.. doxygenclass:: executorch::runtime::MethodMeta
    :members:

-.. doxygenclass:: executorch::runtime::MemoryManager
+.. doxygenclass:: executorch::runtime::DataLoader
    :members:

-.. doxygenclass:: executorch::runtime::Program
+.. doxygenclass:: executorch::runtime::MemoryAllocator
    :members:

-.. doxygenclass:: executorch::runtime::Method
+.. doxygenclass:: executorch::runtime::HierarchicalAllocator
    :members:

-.. doxygenclass:: executorch::runtime::MethodMeta
+.. doxygenclass:: executorch::runtime::MemoryManager
    :members:

@@ -38,5 +38,5 @@ Values

 .. doxygenstruct:: executorch::runtime::EValue
    :members:

-.. doxygenclass:: executorch::aten::Tensor
+.. doxygenclass:: executorch::runtime::etensor::Tensor
    :members:
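The `doxygenclass` and `doxygenstruct` directives in this file come from the Breathe Sphinx extension, which bridges Doxygen's XML output (configured by the `Doxyfile` changed earlier in this PR) into reStructuredText. A minimal sketch of the `conf.py` wiring such directives assume; the project name and XML path here are illustrative, not ExecuTorch's actual configuration.

```python
# conf.py sketch (assumed names): register Breathe and point it at the
# XML that `doxygen Doxyfile` generates. With a default project set,
# `.. doxygenclass:: executorch::runtime::Program` resolves without an
# explicit :project: option on every directive.
extensions = ["breathe"]
breathe_projects = {"executorch": "_build/doxygen/xml"}
breathe_default_project = "executorch"
```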