Fix to the Export ExecuTorch section #506

Closed
wants to merge 1 commit into from
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -68,6 +68,8 @@
"colon_fence",
]

+myst_heading_anchors = 3

sphinx_gallery_conf = {
"examples_dirs": ["tutorials_source"],
"gallery_dirs": ["tutorials"],
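For context on the added line: `myst_heading_anchors = 3` tells MyST-Parser to auto-generate slugified anchors for Markdown headings down to level 3, so a heading such as `## Exporting to ExecuTorch` becomes linkable as `#exporting-to-executorch`. (This describes standard MyST-Parser behavior; the example heading is illustrative, not from the PR.)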
15 changes: 15 additions & 0 deletions docs/source/export-overview.md
@@ -0,0 +1,15 @@
# Exporting to ExecuTorch

One of the important steps in getting your PyTorch programs ready for execution
on an edge device is exporting them. This is done with the PyTorch API
`torch.export`.

The `torch.export` API is part of the PyTorch core library, and its reference
documentation lives in the core PyTorch documentation set. Additionally, we
provide a step-by-step tutorial that walks you through exporting a PyTorch
program.

To learn more about exporting your model:

* Complete the [Exporting to ExecuTorch tutorial](./tutorials/export-to-executorch-tutorial).
* Read the [torch.export documentation](https://pytorch.org/docs/2.1/export.html).
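To make this concrete, here is a minimal sketch of the export step, assuming a
PyTorch 2.1-era `torch.export` API (the module and input shapes are
illustrative, not part of this PR):

```python
import torch
from torch.export import export


class MulAdd(torch.nn.Module):
    """A toy module standing in for the program you want to export."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2 + 1


# export() takes a module and a tuple of example positional inputs, and
# returns an ExportedProgram wrapping a graph of Core ATen operators.
exported_program = export(MulAdd(), (torch.randn(3, 4),))
print(exported_program)
```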
10 changes: 0 additions & 10 deletions docs/source/export-overview.rst

This file was deleted.

10 changes: 0 additions & 10 deletions docs/source/export-user-guide.rst

This file was deleted.

2 changes: 1 addition & 1 deletion docs/source/getting-started-setup.md
@@ -263,6 +263,6 @@ ExecuTorch program. Now that you have a basic understanding of how ExecuTorch
works, you can start exploring its advanced features and capabilities. Here
is a list of sections you might want to read next:

-* [Exporting a model](export-overview.rst)
+* [Exporting a model](export-overview.md)
* Using [EXIR](ir-exir.md) for advanced exports
* Review more advanced examples in the [executorch/examples](https://github.com/pytorch/executorch/tree/main/examples) directory
3 changes: 1 addition & 2 deletions docs/source/index.rst
@@ -92,12 +92,11 @@ Topics in this section will help you get started with ExecuTorch.
:hidden:

export-overview
-export-user-guide

.. toctree::
:glob:
:maxdepth: 1
-:caption: Intermediate Representation (IR) Specification
+:caption: IR Specification
:hidden:

ir-exir
10 changes: 5 additions & 5 deletions docs/source/tutorials_source/export-to-executorch-tutorial.py
@@ -15,7 +15,7 @@
######################################################################
# ExecuTorch is a unified ML stack for lowering PyTorch models to edge devices.
# It introduces improved entry points to perform model, device, and/or use-case
-# specific optizations such as backend delegation, user-defined compiler
+# specific optimizations such as backend delegation, user-defined compiler
# transformations, default or user-defined memory planning, and more.
#
# At a high level, the workflow looks as follows:
@@ -62,7 +62,7 @@
# ``torch.export``.
#
# Both APIs take in a model (any callable or ``torch.nn.Module``), a tuple of
-# positional arguments, optionally a dictionary of keywork arguments (not shown
+# positional arguments, optionally a dictionary of keyword arguments (not shown
# in the example), and a list of constraints (covered later).

import torch
@@ -96,7 +96,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
# The output of ``torch._export.capture_pre_autograd_graph`` is a fully
# flattened graph (meaning the graph does not contain any module hierarchy,
# except in the case of control flow operators). Furthermore, the captured graph
-# contains only ATen operators (~3000 ops) which are autograd safe, i.e. safe
+# contains only ATen operators (~3000 ops) which are Autograd safe, for example, safe
# for eager mode training.
#
# The output of ``torch.export`` further compiles the graph to a lower and
@@ -116,7 +116,7 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
# Since the result of ``torch.export`` is a graph containing the Core ATen
# operators, we will call this the ``ATen Dialect``, and since
# ``torch._export.capture_pre_autograd_graph`` returns a graph containing the
-# set of ATen operators which are autograd safe, we will call it the
+# set of ATen operators which are Autograd safe, we will call it the
# ``Pre-Autograd ATen Dialect``.
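
As a rough sketch of the two entry points and the dialects they produce (APIs
as of the PyTorch 2.1-era tutorial; `capture_pre_autograd_graph` is a private
API that later releases replace, so treat these imports as assumptions):

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.export import export


class MulAdd(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2 + 1


example_args = (torch.randn(3, 4),)

# Pre-Autograd ATen Dialect: an autograd-safe ATen graph, still usable for
# eager-mode training.
pre_autograd_aten_dialect = capture_pre_autograd_graph(MulAdd(), example_args)

# ATen Dialect: a further-lowered, functionalized graph of Core ATen operators.
aten_dialect = export(MulAdd(), example_args)
```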

######################################################################
@@ -231,7 +231,7 @@ def f(x, y):
# `FX Graph Mode Quantization <https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_static.html>`__,
# we will need to call two new APIs: ``prepare_pt2e`` and ``convert_pt2e``
# instead of ``prepare_fx`` and ``convert_fx``. It differs in that
-# ``prepare_pt2e`` takes a backend-specific ``Quantizer`` as an arugument, which
+# ``prepare_pt2e`` takes a backend-specific ``Quantizer`` as an argument, which
# will annotate the nodes in the graph with information needed to quantize the
# model properly for a specific backend.
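
A hedged sketch of that PT2E flow using the XNNPACK quantizer of the same era
(the import paths are assumptions and have moved between PyTorch releases):

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


class TinyConv(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


example_args = (torch.randn(1, 3, 32, 32),)
model = capture_pre_autograd_graph(TinyConv(), example_args)

# The backend-specific Quantizer annotates the graph with how each node
# should be quantized.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(model, quantizer)  # inserts observers
prepared(*example_args)                    # calibrate on representative inputs
quantized = convert_pt2e(prepared)         # rewrite into quantized operators
```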
