
Commit bff26f3

shoumikhin authored and facebook-github-bot committed
Update runtime tutorial to promote Module APIs in the beginning. (#6198)
Summary:
Pull Request resolved: #6198
Reviewed By: dbort
Differential Revision: D64352860
fbshipit-source-id: 907dbe5438737b1a14b30da94fd0b02510dee542
1 parent 5c8b115 commit bff26f3

2 files changed (+3, -5 lines)

docs/source/extension-module.md

Lines changed: 1 addition & 1 deletion

@@ -240,6 +240,6 @@ if (auto* etdump = dynamic_cast<ETDumpGen*>(module.event_tracer())) {
 }
 ```
 
-# Conclusion
+## Conclusion
 
 The `Module` APIs provide a simplified interface for running ExecuTorch models in C++, closely resembling the experience of PyTorch's eager mode. By abstracting away the complexities of the lower-level runtime APIs, developers can focus on model execution without worrying about the underlying details.
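
For reference, the `Module` workflow that this change promotes can be as short as loading a `.pte` file and calling `forward()`. Below is a minimal sketch of that usage, not taken from the tutorial itself: the header paths, namespaces, and the `from_blob` helper are assumptions based on the ExecuTorch Module and tensor extensions, and the model path and input shape are placeholders.

```cpp
// Hypothetical sketch of Module-based inference; names and paths may
// differ from the tutorial. Error handling is reduced to a single check.
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using executorch::extension::Module;
using executorch::extension::from_blob;

int main() {
  // Load the model; methods are loaded lazily on first use.
  Module module("/path/to/model.pte");

  // Wrap existing data in a tensor (shape {1, 3} is illustrative).
  float input[3] = {1.0f, 2.0f, 3.0f};
  auto tensor = from_blob(input, {1, 3});

  // Run "forward" and read the first output if execution succeeded.
  if (const auto result = module.forward(tensor); result.ok()) {
    const auto output = result->at(0).toTensor();
    (void)output;  // ... use output.const_data_ptr<float>() ...
  }
  return 0;
}
```

Compare this with the lower-level flow in the second file below, which trades this brevity for explicit control over memory and execution.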

docs/source/running-a-model-cpp-tutorial.md

Lines changed: 2 additions & 4 deletions

@@ -2,8 +2,7 @@
 
 **Author:** [Jacob Szwejbka](https://github.com/JacobSzwejbka)
 
-In this tutorial, we will cover the APIs to load an ExecuTorch model,
-prepare the MemoryManager, set inputs, execute the model, and retrieve outputs.
+In this tutorial, we will cover how to run an ExecuTorch model in C++ using the more detailed, lower-level APIs: prepare the `MemoryManager`, set inputs, execute the model, and retrieve outputs. However, if you’re looking for a simpler interface that works out of the box, consider trying the [Module Extension Tutorial](extension-module.md).
 
 For a high level overview of the ExecuTorch Runtime please see [Runtime Overview](runtime-overview.md), and for more in-depth documentation on
 each API please see the [Runtime API Reference](executorch-runtime-api-reference.rst).
@@ -153,5 +152,4 @@ assert(output.isTensor());
 
 ## Conclusion
 
-In this tutorial, we went over the APIs and steps required to load and perform an inference with an ExecuTorch model in C++.
-Also, check out the [Simplified Runtime APIs Tutorial](extension-module.md).
+This tutorial demonstrated how to run an ExecuTorch model using low-level runtime APIs, which offer granular control over memory management and execution. However, for most use cases, we recommend using the Module APIs, which provide a more streamlined experience without sacrificing flexibility. For more details, check out the [Module Extension Tutorial](extension-module.md).
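
The lower-level flow this tutorial keeps documenting (prepare the `MemoryManager`, set inputs, execute, retrieve outputs) corresponds roughly to the sketch below. This is not the tutorial's code: header paths, namespaces, buffer sizes, and the `"forward"` method name are assumptions based on the ExecuTorch runtime APIs and may not match it verbatim.

```cpp
// Hypothetical sketch of the lower-level runtime flow; most error checks
// are omitted for brevity.
#include <executorch/extension/data_loader/file_data_loader.h>
#include <executorch/runtime/executor/method.h>
#include <executorch/runtime/executor/program.h>
#include <executorch/runtime/platform/runtime.h>

#include <memory>
#include <vector>

using executorch::extension::FileDataLoader;
using executorch::runtime::EValue;
using executorch::runtime::HierarchicalAllocator;
using executorch::runtime::MemoryAllocator;
using executorch::runtime::MemoryManager;
using executorch::runtime::Method;
using executorch::runtime::MethodMeta;
using executorch::runtime::Program;
using executorch::runtime::Result;
using executorch::runtime::Span;

int main() {
  executorch::runtime::runtime_init();

  // 1. Load the program from a .pte file (the path is a placeholder).
  Result<FileDataLoader> loader = FileDataLoader::from("/path/to/model.pte");
  Result<Program> program = Program::load(&loader.get());

  // 2. Prepare the MemoryManager from the method's memory-planning metadata.
  Result<MethodMeta> method_meta = program->method_meta("forward");

  static uint8_t method_allocator_pool[4 * 1024U * 1024U];  // 4 MB scratch pool.
  MemoryAllocator method_allocator(sizeof(method_allocator_pool), method_allocator_pool);

  std::vector<std::unique_ptr<uint8_t[]>> planned_buffers;  // backing storage
  std::vector<Span<uint8_t>> planned_spans;                 // views passed to the runtime
  for (size_t id = 0; id < method_meta->num_memory_planned_buffers(); ++id) {
    const size_t size =
        static_cast<size_t>(method_meta->memory_planned_buffer_size(id).get());
    planned_buffers.push_back(std::make_unique<uint8_t[]>(size));
    planned_spans.push_back({planned_buffers.back().get(), size});
  }
  HierarchicalAllocator planned_memory({planned_spans.data(), planned_spans.size()});
  MemoryManager memory_manager(&method_allocator, &planned_memory);

  // 3. Load the method, set inputs, execute, and retrieve outputs.
  Result<Method> method = program->load_method("forward", &memory_manager);
  // ... wrap an input tensor in an EValue and call method->set_input(input, 0) ...
  method->execute();
  EValue output = method->get_output(0);
  (void)output;
  return 0;
}
```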
