
Commit 03184f5

Svetlana Karslioglu authored and facebook-github-bot committed

Fix to the Export ExecuTorch section (#506)

Summary:
Pull Request resolved: #506

* Changed the Export ExecuTorch section to provide entry points to the related sections instead of redirecting, for a better UX.
* Fixed a few typos.

Differential Revision: https://internalfb.com/D49706614
fbshipit-source-id: c74d94d789c11e14e1f50411ed02fa7cd66f6fd7
1 parent 732d92b commit 03184f5

7 files changed: +24 −28 lines

docs/source/conf.py

Lines changed: 2 additions & 0 deletions
@@ -68,6 +68,8 @@
     "colon_fence",
 ]
 
+myst_heading_anchors = 3
+
 sphinx_gallery_conf = {
     "examples_dirs": ["tutorials_source"],
     "gallery_dirs": ["tutorials"],

docs/source/export-overview.md

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+# Exporting to Executorch
+
+One of the important steps in getting your PyTorch programs ready for execution
+on an edge device is exporting them. This is achieved through the use of a
+PyTorch API called `torch.export`.
+
+The `torch.export` documentation, which is part of the PyTorch core library, can
+be found in the Core PyTorch documentation set. Additionally, we provide a
+step-by-step tutorial that takes you through the process of exporting a PyTorch
+program, making it easier for you to understand and implement the process.
+
+To learn more about exporting your model:
+
+* Complete the [Exporting to ExecuTorch tutorial](./tutorials/export-to-executorch-tutorial).
+* Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html).

docs/source/export-overview.rst

Lines changed: 0 additions & 10 deletions
This file was deleted.

docs/source/export-user-guide.rst

Lines changed: 0 additions & 10 deletions
This file was deleted.

docs/source/getting-started-setup.md

Lines changed: 1 addition & 1 deletion
@@ -263,6 +263,6 @@ ExecuTorch program. Now that you have a basic understanding of how ExecuTorch
 works, you can start exploring its advanced features and capabilities. Here
 is a list of sections you might want to read next:
 
-* [Exporting a model](export-overview.rst)
+* [Exporting a model](export-overview.md)
 * Using [EXIR](ir-exir.md) for advanced exports
 * Review more advanced examples in the [executorch/examples](https://github.com/pytorch/executorch/tree/main/examples) directory

docs/source/index.rst

Lines changed: 1 addition & 2 deletions
@@ -92,12 +92,11 @@ Topics in this section will help you get started with ExecuTorch.
    :hidden:
 
    export-overview
-   export-user-guide
 
 .. toctree::
    :glob:
    :maxdepth: 1
-   :caption: Intermediate Representation (IR) Specification
+   :caption: IR Specification
    :hidden:
 
    ir-exir

docs/source/tutorials_source/export-to-executorch-tutorial.py

Lines changed: 5 additions & 5 deletions
@@ -15,7 +15,7 @@
 ######################################################################
 # ExecuTorch is a unified ML stack for lowering PyTorch models to edge devices.
 # It introduces improved entry points to perform model, device, and/or use-case
-# specific optizations such as backend delegation, user-defined compiler
+# specific optimizations such as backend delegation, user-defined compiler
 # transformations, default or user-defined memory planning, and more.
 #
 # At a high level, the workflow looks as follows:
@@ -62,7 +62,7 @@
 # ``torch.export``.
 #
 # Both APIs take in a model (any callable or ``torch.nn.Module``), a tuple of
-# positional arguments, optionally a dictionary of keywork arguments (not shown
+# positional arguments, optionally a dictionary of keyword arguments (not shown
 # in the example), and a list of constraints (covered later).
 
 import torch
@@ -96,7 +96,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
 # The output of ``torch._export.capture_pre_autograd_graph`` is a fully
 # flattened graph (meaning the graph does not contain any module hierarchy,
 # except in the case of control flow operators). Furthermore, the captured graph
-# contains only ATen operators (~3000 ops) which are autograd safe, i.e. safe
+# contains only ATen operators (~3000 ops) which are Autograd safe, for example, safe
 # for eager mode training.
 #
 # The output of ``torch.export`` further compiles the graph to a lower and
@@ -116,7 +116,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
 # Since the result of ``torch.export`` is a graph containing the Core ATen
 # operators, we will call this the ``ATen Dialect``, and since
 # ``torch._export.capture_pre_autograd_graph`` returns a graph containing the
-# set of ATen operators which are autograd safe, we will call it the
+# set of ATen operators which are Autograd safe, we will call it the
 # ``Pre-Autograd ATen Dialect``.
 
 ######################################################################
@@ -231,7 +231,7 @@ def f(x, y):
 # `FX Graph Mode Quantization <https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html>`__,
 # we will need to call two new APIs: ``prepare_pt2e`` and ``compare_pt2e``
 # instead of ``prepare_fx`` and ``convert_fx``. It differs in that
-# ``prepare_pt2e`` takes a backend-specific ``Quantizer`` as an arugument, which
+# ``prepare_pt2e`` takes a backend-specific ``Quantizer`` as an argument, which
 # will annotate the nodes in the graph with information needed to quantize the
 # model properly for a specific backend.
