Commit b0f72a9

guangy10 authored and facebook-github-bot committed
concepts.md (#896)
Summary: Pull Request resolved: #896
Many fixes
Reviewed By: lucylq
Differential Revision: D50248916
fbshipit-source-id: 98252ab248cc4ae87eb1ae92c7a0bbb4de9e5d52
1 parent c33932c commit b0f72a9

File tree

1 file changed (+14, -12 lines)


docs/source/concepts.md

Lines changed: 14 additions & 12 deletions
@@ -11,9 +11,9 @@ Fundamentally, it is a tensor library on top of which almost all other Python an
 
 ## [ATen Dialect](./ir-exir.md#aten-dialect)
 
-ATen dialect is the result of exporting an eager module to a graph representation. It is the entry point of the ExecuTorch compilation pipeline; after exporting to ATen dialect, subsequent passes can lower to Core ATen dialect and Edge dialect.
+ATen dialect is the immediate result of exporting an eager module to a graph representation. It is the entry point of the ExecuTorch compilation pipeline; after exporting to ATen dialect, subsequent passes can lower to [Core ATen dialect](./concepts.md#core-aten-dialect) and [Edge dialect](./concepts.md#edge-dialect).
 
-ATen dialect is a valid Export IR with additional properties. It consists of functional ATen operators, higher order operators (like control flow operators) and registered custom operators.
+ATen dialect is a valid [EXIR](./concepts.md#exir) with additional properties. It consists of functional ATen operators, higher order operators (like control flow operators) and registered custom operators.
 
 The goal of ATen dialect is to capture users’ programs as faithfully as possible.
 
@@ -26,15 +26,15 @@ ATen mode uses the ATen implementation of Tensor (`at::Tensor`) and related type
 
 ## Autograd safe ATen Dialect
 
-Autograd safe ATen dialect contains the autograd safe ATen operators, along with higher order operators (control flow ops) and registered custom operators.
+Autograd safe ATen dialect includes only differentiable ATen operators, along with higher order operators (control flow ops) and registered custom operators.
 
 ## Backend
 
 A specific hardware (like GPU, NPU) or a software stack (like XNNPACK) that consumes a graph or part of it, with performance and efficiency benefits.
 
 ## [Backend Dialect](./ir-exir.md#backend-dialect)
 
-Backend dialect is the result of exporting Edge dialect to specific backend. It’s target-aware, and may contain operators or submodules that are only meaningful to the target backend. This dialect allows the introduction of target-specific operators that do not conform to the schema defined in the Core ATen Operator Set and are not shown in ATen or Edge Dialect.
+Backend dialect is the immediate result of exporting Edge dialect to a specific backend. It’s target-aware, and may contain operators or submodules that are only meaningful to the target backend. This dialect allows the introduction of target-specific operators that do not conform to the schema defined in the Core ATen Operator Set and are not shown in ATen or Edge Dialect.
 
 ## Backend registry
 
@@ -56,7 +56,7 @@ An open-source, cross-platform family of tools designed to build, test and packa
 
 In ExecuTorch, code generation is used to generate the [kernel registration library](./kernel-library-selective_build.md).
 
-## Core ATen Dialect
+## [Core ATen Dialect](https://pytorch.org/docs/stable/torch.compiler_ir.html#irs)
 
 Core ATen dialect contains the core ATen operators along with higher order operators (control flow) and registered custom operators.
 
@@ -66,15 +66,11 @@ A select subset of the PyTorch ATen operator library. Core ATen operators will n
 
 ## Core ATen Decomposition Table
 
-Decomposing an operator involves expressing it as a combination of other operators. During the export process, a default list of decompositions are used; this is known as the Core ATen Decomposition Table.
-
-## [Core ATen IR](https://pytorch.org/docs/stable/torch.compiler_ir.html#irs)
-
-Contains only the core ATen operators and registered custom operators. Registered custom operators are registered into the current PyTorch eager mode runtime, usually with a `TORCH_LIBRARY` call.
+Decomposing an operator means expressing it as a combination of other operators. During the AOT process, a default list of decompositions is employed, breaking down ATen operators into core ATen operators. This is referred to as the Core ATen Decomposition Table.
 
 ## [Custom operator](https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU/edit?fbclid=IwAR1qLTrChO4wRokhh_wHgdbX1SZwsU-DUv1XE2xFq0tIKsZSdDLAe6prTxg#heading=h.ahugy69p2jmz)
 
-These are operators that aren't part of the ATen library, but which appear in eager mode. They are most likely associated with a specific target model or hardware platform. For example, [torchvision::roi_align](https://pytorch.org/vision/main/generated/torchvision.ops.roi_align.html) is a custom operator widely used by torchvision (doesn't target a specific hardware).
+These are operators that aren't part of the ATen library, but which appear in [eager mode](./concepts.md#eager-mode). Registered custom operators are registered into the current PyTorch eager mode runtime, usually with a `TORCH_LIBRARY` call. They are most likely associated with a specific target model or hardware platform. For example, [torchvision::roi_align](https://pytorch.org/vision/main/generated/torchvision.ops.roi_align.html) is a custom operator widely used by torchvision (doesn't target specific hardware).
 
 ## DataLoader
 
@@ -94,7 +90,7 @@ Data type, the type of data (eg. float, integer, etc.) in a tensor.
 
 ## [Dynamic Quantization](https://pytorch.org/docs/main/quantization.html#general-quantization-flow)
 
-A method of quantizing wherein tensors are quantized on the fly during inference. This is in contrast to static quantization, where tensors are quantized before inference.
+A method of quantizing wherein tensors are quantized on the fly during inference. This is in contrast to [static quantization](./concepts.md#static-quantization), where tensors are quantized before inference.
 
 ## Dynamic shapes
 
@@ -160,6 +156,12 @@ An EXIR Graph is a PyTorch program represented in the form of a DAG (directed ac
 
 In graph mode, operators are first synthesized into a graph, which will then be compiled and executed as a whole. This is in contrast to eager mode, where operators are executed as they are encountered. Graph mode typically delivers higher performance as it allows optimizations such as operator fusion.
 
+## Higher Order Operators
+
+A higher order operator (HOP) is an operator that accepts a Python function as input, returns a Python function as output, or both.
+
+Like all PyTorch operators, higher-order operators have an optional implementation for each backend and functionality. This lets us, for example, register an autograd formula for a higher-order operator, or define how it behaves under ProxyTensor tracing.
+
 ## Hybrid Quantization
 
 A quantization technique where different parts of the model are quantized with different techniques based on computational complexity and sensitivity to accuracy loss. Some parts of the model may not be quantized to retain accuracy.
