docsrc/contributors/fx_converters.rst (29 additions, 91 deletions)

.. _dynamo_conversion:

Dynamo Converters
==================
The dynamo converter library in Torch-TensorRT is located in ``TensorRT/py/torch_tensorrt/dynamo/conversion``.

Steps
==================

Operation Set
-------------------
The converters in dynamo are produced by ``aten_trace`` and fall under ``aten_ops_converters`` (FX earlier had ``acc_ops_converters``, ``aten_ops_converters`` or ``nn_ops_converters`` depending on the trace through which the graph was produced). The converters are registered using the ``dynamo_tensorrt_converter`` decorator. The decorated function
has the arguments ``network, target, args, kwargs, name``, which are common across all operator schemas.
These functions are mapped in the ``aten`` converter registry dictionary (at present a compilation of FX and dynamo converters; FX will be deprecated soon), with the function target name as the key.

* aten_trace is produced by ``torch_tensorrt.dynamo.backend.compile`` or ``torch_tensorrt.dynamo.backend.export``.
  The second round of trace in ``compile`` is produced by ``aot_torch_tensorrt_aten_backend`` by invoking ``aot_module_simplified`` from ``torch._functorch.aot_autograd``.
  Both of these simplify the torch operators to a reduced set of ATen operations.

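
For illustration, the sketch below shows how such an ATen-level trace is typically obtained by compiling a small module through the dynamo backend. The module, input shapes, and call arguments here are placeholders; the exact entry point and keyword arguments may differ between releases.

.. code-block:: python

    import torch
    import torch_tensorrt


    class MyModel(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.leaky_relu(x, 0.05)


    model = MyModel().eval().cuda()
    inputs = [torch.randn(1, 3, 4, 4, device="cuda")]

    # Compiling through the dynamo backend lowers the model to an ATen-level
    # graph; each ATen node is then handled by a registered converter.
    trt_model = torch_tensorrt.dynamo.backend.compile(model, inputs)
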
As mentioned above, if you would like to add a new converter, its implementation will be included in ``TensorRT/py/torch_tensorrt/dynamo/conversion/impl``.
Although there is a corresponding implementation of the converters included in the common implementation library present in ``TensorRT/py/torch_tensorrt/fx/impl`` for FX converters, this documentation focuses on the implementation of the ``aten_ops`` converters in dynamo.

Converter implementation
------------------------

The steps involved are detailed below, each with the help of an example.

* Registration

The converter needs to be registered with the appropriate op code in the ``dynamo_tensorrt_converter``.

* Activation type

Example: ``leaky_relu``

* aten_ops_converters

Define in ``py/torch_tensorrt/dynamo/conversion/aten_ops_converters``. One needs to register the opcode generated in the trace with the ``dynamo_tensorrt_converter`` decorator. The op code to be used for the registration, or the converter registry key, in this case is ``torch.ops.aten.leaky_relu.default``.
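
A minimal registration sketch is shown below. The import paths, the default value of ``alpha``, and the implementation-library call are illustrative assumptions (the decorator lives in the dynamo conversion package and ``SourceIR`` in the converter utilities; exact module names may differ between releases):

.. code-block:: python

    import torch
    from torch.fx.node import Target

    # NOTE: illustrative import paths; they may differ between releases.
    from torch_tensorrt.dynamo.conversion import impl
    from torch_tensorrt.dynamo.conversion.converter_registry import dynamo_tensorrt_converter
    from torch_tensorrt.dynamo.conversion.impl.converter_utils import SourceIR


    @dynamo_tensorrt_converter(torch.ops.aten.leaky_relu.default)  # registry key is the OpOverload
    def aten_ops_leaky_relu(network, target: Target, args, kwargs, name: str):
        # Extract the input tensor and the optional negative slope from the traced
        # call, then delegate to the shared implementation library.
        input_val = args[0]
        alpha = args[1] if len(args) > 1 else 0.01
        return impl.activation.leaky_relu(network, target, SourceIR.ATEN, name, input_val, alpha)
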
The ``tensorrt_converter`` (used for FX registration) and ``dynamo_tensorrt_converter`` are similar decorator functions with some differences.

#. Both register the converters in their registries (python dictionaries) - ``CONVERTERS`` and ``DYNAMO_CONVERTERS`` respectively. These are two dictionaries which are concatenated to form the overall converter registry.
#. The dictionary is keyed on the ``OpOverload``, which is discussed in more detail below with examples.

The function decorated by ``tensorrt_converter`` and ``dynamo_tensorrt_converter`` has the following arguments, which are automatically generated by the trace functions mentioned above.

#. network: Node in the form of ``call_module`` or ``call_function`` having the target as the key
#. target: Target key in the ``call_module`` or ``call_function`` above, e.g. ``torch.ops.aten.leaky_relu.default``. Note that ``torch.ops.aten.leaky_relu`` is the ``OpOverloadPacket`` while ``torch.ops.aten.leaky_relu.default`` is the ``OpOverload``.
#. args: The arguments passed in the ``call_module`` or ``call_function`` above
#. kwargs: The kwargs passed in the ``call_module`` or ``call_function`` above
#. name: String containing the name of the target

As a user writing new converters, one just needs to take care that the appropriate arguments are extracted from the generated trace and passed to the implementation-library function, e.g. ``activation.leaky_relu`` (which we will discuss below in detail).

* Operation type

Example: ``fmod``

It follows the same steps as the above converter. In this case the opcode is ``torch.ops.aten.fmod.Scalar`` or ``torch.ops.aten.fmod.Tensor``.
Hence both opcodes are registered in ``py/torch_tensorrt/dynamo/conversion/aten_ops_converters``.
Note that ``torch.ops.aten.fmod`` is the ``OpOverloadPacket`` while the registry is keyed on ``torch.ops.aten.fmod.Scalar`` or ``torch.ops.aten.fmod.Tensor``, which are ``OpOverload`` objects.
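
The distinction between the two object kinds can be inspected directly in Python; the snippet below is purely illustrative:

.. code-block:: python

    import torch

    packet = torch.ops.aten.fmod            # OpOverloadPacket: groups every overload of aten::fmod
    scalar_ov = torch.ops.aten.fmod.Scalar  # OpOverload: one registry key
    tensor_ov = torch.ops.aten.fmod.Tensor  # OpOverload: another registry key

    # Each overload points back to the packet that owns it.
    print(scalar_ov.overloadpacket is packet)  # True
    print(type(packet), type(tensor_ov))       # OpOverloadPacket vs OpOverload
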
Example: ``embedding``

So if there is a new converter for which certain special cases are not to be supported, then they can be specified in the ``capability_validator``.

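
As a sketch, a validator rejecting unsupported ``embedding`` cases could be attached at registration time. The validator name, the argument positions, the implementation call, and the import paths below are illustrative assumptions, not the exact contents of the repository:

.. code-block:: python

    import torch
    from torch.fx.node import Node

    # NOTE: illustrative import paths; they may differ between releases.
    from torch_tensorrt.dynamo.conversion import impl
    from torch_tensorrt.dynamo.conversion.converter_registry import dynamo_tensorrt_converter
    from torch_tensorrt.dynamo.conversion.impl.converter_utils import SourceIR


    def embedding_param_validator(node: Node) -> bool:
        # Reject configurations this converter does not handle (illustrative checks).
        scale_grad_by_freq = node.args[3] if len(node.args) > 3 else False
        sparse = node.args[4] if len(node.args) > 4 else False
        return not scale_grad_by_freq and not sparse


    @dynamo_tensorrt_converter(
        torch.ops.aten.embedding.default,
        capability_validator=embedding_param_validator,
    )
    def aten_ops_embedding(network, target, args, kwargs, name):
        # In the aten schema, weight is args[0] and the indices tensor is args[1].
        return impl.embedding.embedding(network, target, SourceIR.ATEN, name, args[1], args[0])
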
* Evaluator type
* Implementation Library

The dynamo converters are located in ``py/torch_tensorrt/dynamo/conversion/impl``.

* Activation

Example: ``leaky_relu``

The implementation is to be placed in ``py/torch_tensorrt/dynamo/conversion/impl/activation.py``. This is where all the activation functions are defined and implemented.

.. code-block:: python
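
    # NOTE: illustrative sketch of the implementation function defined in this file;
    # the exact signature and body in the repository may differ (types and imports omitted).
    def leaky_relu(network, target, source_ir, name, input_val, alpha):
        # Delegates to convert_activation, which adds the TensorRT activation layer
        # (trt.ActivationType.LEAKY_RELU) and sets the alpha attribute.
        ...
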
#. network: ``network`` passed from the decorated function registration
#. target: ``target`` passed from the decorated function registration
#. source_ir: Enum attribute. The ``SourceIR`` enum is defined in ``py/torch_tensorrt/dynamo/conversion/impl/converter_utils``
#. name: ``name`` passed from the decorated function registration
#. input_val: Appropriate arguments extracted from the decorated function registration from args or kwargs
#. alpha: Appropriate arguments extracted from the decorated function registration from args or kwargs. If not None, it will set the alpha attribute of the created TensorRT activation layer, e.g. used in leaky_relu, elu, hardtanh
#. beta: Appropriate arguments extracted from the decorated function registration from args or kwargs. If not None, it will set the beta attribute of the created TensorRT activation layer, e.g. used in hardtanh
#. dyn_range_fn: An optional function which takes the dynamic range of a TensorRT Tensor and returns the output dynamic range

The implementation functions call the ``convert_activation`` function in ``py/torch_tensorrt/dynamo/conversion/impl/activation.py``. This function will add the appropriate activation layer via ``network.add_activation``.
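
A condensed sketch of what that helper does is shown below; the real function also performs validation and dynamic-range bookkeeping, and the argument names simply follow the list above:

.. code-block:: python

    # Illustrative sketch only; see impl/activation.py for the real implementation.
    def convert_activation(network, target, source_ir, name, operation_type,
                           input_val, alpha=None, beta=None, dyn_range_fn=None):
        layer = network.add_activation(input_val, operation_type)
        if alpha is not None:
            layer.alpha = alpha   # e.g. negative slope for leaky_relu
        if beta is not None:
            layer.beta = beta     # e.g. bound used by hardtanh
        layer.name = name
        # dyn_range_fn, when supplied, maps the input dynamic range to the output
        # dynamic range for quantized workflows (handling omitted in this sketch).
        return layer.get_output(0)
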
* Operator

The implementation is to be placed in ``py/torch_tensorrt/dynamo/conversion/impl/elementwise/ops.py`` for dynamo. This is where all the elementwise functions are defined and implemented.
For a new operator, one should identify the category to which it belongs. Following are some examples:

#. Elementwise operators like ``fmod`` are present in ``py/torch_tensorrt/dynamo/conversion/impl/elementwise``. ``py/torch_tensorrt/dynamo/conversion/impl/elementwise/base`` contains base functions for elementwise operators.
#. Unary operators like ``sqrt`` are present in ``py/torch_tensorrt/dynamo/conversion/impl/unary``. ``py/torch_tensorrt/dynamo/conversion/impl/unary/base`` contains base functions for unary operators.
#. Normalization operators like ``softmax``, ``layer_norm``, ``batch_norm`` are present in ``py/torch_tensorrt/dynamo/conversion/impl/normalization``. Since there are no base operations common to all, there is no base file. But one can choose to implement a base file, if there are common functions across all normalization operations.
#. Individual operators like ``slice``, ``select``, ``where``, ``embedding`` are present in ``py/torch_tensorrt/dynamo/conversion/impl/*.py``. They have individual operator implementations with the same API structure as above but with different individual arguments.

Note that there are some pre-existing dynamo decompositions in the torch directory, in which case they should be used.
In that case, please enable the decompositions in ``py/torch_tensorrt/dynamo/lowering/_decomposition_groups.py`` in ``torch_enabled_decompositions``.
Similarly, you can choose to disable any in ``torch_disabled_decompositions``. Please note that the ones already defined in the lowering will take precedence over torch lowering ops.

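
As an illustrative sketch of what these groups look like, the two sets are keyed on ATen ``OpOverload`` objects; the specific operators shown here are placeholders, not the actual contents of the file:

.. code-block:: python

    # py/torch_tensorrt/dynamo/lowering/_decomposition_groups.py (illustrative excerpt)
    import torch

    torch_enabled_decompositions = {
        torch.ops.aten.hardswish.default,  # lowered via the stock torch decomposition
        # ...
    }

    torch_disabled_decompositions = {
        torch.ops.aten.linear.default,     # kept intact so a dedicated converter handles it
        # ...
    }
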
Tests
-----

* Dynamo testing:

Dynamo tests are present for the lowering ops in ``py/torch_tensorrt/dynamo/backend/test/test_decompositions.py``. The above converters will soon be ported to dynamo tests.
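
Until then, a converter test typically follows the pattern below. The harness import path, the ``run_test`` signature, and the ``expected_ops`` argument are assumptions modeled on the existing FX-style ``DispatchTestCase`` harness rather than a fixed API:

.. code-block:: python

    import torch
    import torch.nn as nn
    from torch.testing._internal.common_utils import run_tests

    # NOTE: illustrative import; the dynamo test harness location may differ.
    from harness import DispatchTestCase


    class TestLeakyReLUConverter(DispatchTestCase):
        def test_leaky_relu(self):
            class TestModule(nn.Module):
                def forward(self, x):
                    return torch.ops.aten.leaky_relu.default(x, 0.05)

            inputs = [torch.randn(1, 3, 32, 32)]
            # Compares TensorRT output against eager PyTorch and checks that the
            # expected ATen op is consumed by a registered converter.
            self.run_test(
                TestModule(),
                inputs,
                expected_ops={torch.ops.aten.leaky_relu.default},
            )


    if __name__ == "__main__":
        run_tests()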