Commit d890b7d

chore: Nest it to dynamo/fx_ts_compat

Signed-off-by: Dheeraj Peri <[email protected]>

1 parent 48618f4 commit d890b7d
File tree: 128 files changed, +289 −152 lines changed


.circleci/config.yml

Lines changed: 41 additions & 41 deletions
```diff
@@ -711,13 +711,13 @@ commands:
   # =================== FX tests end ======================== #

   # =================== Dynamo tests start ======================== #
-  test-dynamo_core:
+  test-dynamo-fx_ts_core:
     description: "Test the Dynamo core"
     steps:
       - run:
           name: Run Dynamo core tests
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd core/
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/core/test_results.xml
             popd
@@ -727,13 +727,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_converters_acc:
+  test-dynamo-fx_ts_converters_acc:
     description: "Test the Dynamo acc converters"
     steps:
       - run:
           name: Run FX converter tests
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd converters/acc_op/
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/converters/acc_op/test_results.xml
             popd
@@ -743,13 +743,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_converters_aten:
+  test-dynamo-fx_ts_converters_aten:
     description: "Test the dynamo aten converters"
     steps:
       - run:
           name: Run dynamo converter tests
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd converters/aten_op/
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/converters/aten_op/test_results.xml
             popd
@@ -759,13 +759,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_converters_vanilla:
+  test-dynamo-fx_ts_converters_vanilla:
     description: "Test the dynamo vanilla converters"
     steps:
       - run:
           name: Run dynamo converter tests
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd converters/vanilla/
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/converters/vanilla/test_results.xml
             popd
@@ -775,13 +775,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_passes:
+  test-dynamo-fx_ts_passes:
     description: "Test the dynamo passes"
     steps:
       - run:
           name: Run dynamo passes
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd passes
             list_passes=$(ls | grep -v test_setitem*)
             pytest $list_passes --junitxml=/tmp/artifacts/test_results/dynamo/passes/test_results.xml
@@ -791,13 +791,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_tools:
+  test-dynamo-fx_ts_tools:
     description: "Test the dynamo tools"
     steps:
       - run:
           name: Run dynamo tools
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd tools
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/tools/test_results.xml
             popd
@@ -806,13 +806,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_trt_lower:
+  test-dynamo-fx_ts_trt_lower:
     description: "Test the dynamo TRT lowering"
     steps:
       - run:
           name: Run dynamo TRT lowering
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd trt_lower
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/trt_lower/test_results.xml
             popd
@@ -821,13 +821,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_tracer:
+  test-dynamo-fx_ts_tracer:
     description: "Test all dynamo tracers"
     steps:
       - run:
           name: Run dynamo tracer
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd tracer
             list_tracer=$(ls | grep -v test_dispatch_*)
             pytest $list_tracer --junitxml=/tmp/artifacts/test_results/fx/tracer/test_results.xml
@@ -837,13 +837,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_tracer_acc:
+  test-dynamo-fx_ts_tracer_acc:
     description: "Test the dynamo acc tracer only"
     steps:
       - run:
           name: Run dynamo tracer
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd tracer
             list_tracer=$(ls | grep test_acc)
             pytest $list_tracer --junitxml=/tmp/artifacts/test_results/dynamo/tracer/test_results.xml
@@ -853,13 +853,13 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo_quant:
+  test-dynamo-fx_ts_quant:
     description: "Test the dynamo quant"
     steps:
       - run:
           name: Run dynamo quant tests
           command: |
-            cd py/torch_tensorrt/dynamo/test
+            cd py/torch_tensorrt/dynamo/fx_ts_compat/test
             pushd quant/
             pytest --junitxml=/tmp/artifacts/test_results/dynamo/quant/test_results.xml
             popd
@@ -869,42 +869,42 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo:
+  test-dynamo-fx_ts:
     description: "Test the dynamo backend"
     steps:
       - run:
           name: Run dynamo tests
           command: |
             mkdir -p /tmp/artifacts/test_results
-      - test-dynamo_converters_acc
-      - test-dynamo_converters_aten
-      - test-dynamo_converters_vanilla
-      - test-dynamo_passes
-      - test-dynamo_tools
-      - test-dynamo_trt_lower
-      - test-dynamo_tracer
-      - test-dynamo_core
-      - test-dynamo_quant
+      - test-dynamo-fx_ts_converters_acc
+      - test-dynamo-fx_ts_converters_aten
+      - test-dynamo-fx_ts_converters_vanilla
+      - test-dynamo-fx_ts_passes
+      - test-dynamo-fx_ts_tools
+      - test-dynamo-fx_ts_trt_lower
+      - test-dynamo-fx_ts_tracer
+      - test-dynamo-fx_ts_core
+      - test-dynamo-fx_ts_quant
       - store_test_results:
           path: /tmp/artifacts
       - store_artifacts:
           path: /tmp/testlogs

-  test-dynamo-no-aten:
+  test-dynamo-fx_ts-no-aten:
     description: "Test the dynamo backend without aten operators"
     steps:
       - run:
           name: Run dynamo tests without aten ops
           command: |
             mkdir -p /tmp/artifacts/test_results
-      - test-dynamo_converters_acc
-      - test-dynamo_converters_vanilla
-      - test-dynamo_passes
-      - test-dynamo_tools
-      - test-dynamo_trt_lower
-      - test-dynamo_tracer_acc
-      - test-dynamo_core
-      - test-dynamo_quant
+      - test-dynamo-fx_ts_converters_acc
+      - test-dynamo-fx_ts_converters_vanilla
+      - test-dynamo-fx_ts_passes
+      - test-dynamo-fx_ts_tools
+      - test-dynamo-fx_ts_trt_lower
+      - test-dynamo-fx_ts_tracer_acc
+      - test-dynamo-fx_ts_core
+      - test-dynamo-fx_ts_quant
       - store_test_results:
           path: /tmp/artifacts
       - store_artifacts:
@@ -1117,7 +1117,7 @@ jobs:
         command: pip3 install --pre /tmp/dist/x86_64-linux/*cp39-cp39*.whl
       # We install torch after torch-trt because pip automatically enforces the version constraint otherwise
       - dump-test-env
-      - test-dynamo
+      - test-dynamo-fx_ts

   test-py-dynamo-x86_64-linux-no-aten:
     parameters:
@@ -1148,7 +1148,7 @@ jobs:
         command: pip3 install --pre /tmp/dist/x86_64-linux/*cp39-cp39*.whl
       # We install torch after torch-trt because pip automatically enforces the version constraint otherwise
       - dump-test-env
-      - test-dynamo-no-aten
+      - test-dynamo-fx_ts-no-aten

   package-x86_64-linux:
     parameters:
```
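Several of these commands build their pytest target list by filtering a directory listing, e.g. `list_passes=$(ls | grep -v test_setitem*)`. A minimal sketch of that pattern follows; the file names here are made up for illustration, and the exclusion pattern is quoted-safe (`test_setitem`) rather than the CI's unquoted glob:

```shell
# Scratch directory with hypothetical test files (names are made up)
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch test_fuse.py test_setitem_basic.py test_remove.py

# Build the pytest target list, excluding the test_setitem family,
# mirroring the CI's `list_passes=$(ls | grep -v test_setitem*)`
list_passes=$(ls | grep -v test_setitem)
echo "$list_passes"
```

Note that in the CI version the unquoted `test_setitem*` is expanded by the shell against the current directory before grep runs; if it matches more than one file, grep treats the extra names as input files, so quoting the pattern is the more robust form.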
Lines changed: 137 additions & 0 deletions (new file)
# PyTorch Operations Dynamic Shape Support Summary


| Operation | Test Method | Supports Dynamic Shape | Shape | Num of dimensions | Reason |
| --- | --- | --- | --- | --- | --- |
| adaptive_avgpool | | partially | (-1, -1, 256, 256) | 2 | AdaptiveAvgPool2d and AdaptiveAvgPool3d currently don't support dynamic shapes for the last two dims. |
| any | | no | | | torch.zeros(tuple(\[*input_t.shape\])). Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| as_strided | | no | | | RuntimeError: setStorage: sizes \[2, 3\], strides \[1, 2\], storage offset 0, and itemsize 8 requiring a storage size of 48 are out of bounds for storage of size 16 |
| avg_pool | avg_pool2d | yes | (-1, -1, -1, -1) | 4 | |
| | avg_pool1d | partially | (-1, 3, 3) | 1 | |
| batchnorm | | partially | (-1, 3, -1, -1) | 3 | "Channel dim can't be dynamic for batch norm." |
| binary_ops | | yes | (-1, -1, -1, -1) | 4 | |
| cat | | yes | (-1, -1, -1, -1) | 4 | |
| chunk | | partially | (-1, 1, 3, -1) | any (not chunk dim) | AssertionError: Can't chunk on dynamic shape dimension! |
| clamp | | yes | (-1, -1, -1, -1) | | |
| convolution | conv2d | partially | (-1, 3, -1, -1) | 3 | AssertionError: Channel dim can't be dynamic for convolution. |
| | conv1d | partially | (-1, 3, 3) | 1 | |
| | conv3d | partially | (-1, -1, -1, -1) | 4 | AssertionError: Channel dim can't be dynamic for convolution. |
| dequantize | | yes | (-1, -1, -1, -1) | 4 | |
| einsum | | yes | (-1, -1, -1, -1) | 4 | |
| elu | | yes | (-1, -1, -1, -1) | 4 | |
| embedding | | yes | (-1, -1, -1, -1) | 4 | |
| eq | SimpleConverter | yes | (-1, -1, -1, -1) | 4 | |
| | ConstInputConverter | yes | (-1, -1, -1, -1) | 4 | |
| | EqMethodConverter | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| | EqOperatorConverter | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| | EqOperatorConstant | partially | (3, -1) | 1 | |
| | EqConverter | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| expand | | no | | | Dynamic shape is not suitable for the expand operation. |
| flatten | | yes | (-1, -1, -1, -1, -1) | 5 | |
| gelu | | yes | (-1, -1, -1, -1) | 4 | |
| getitem | | yes | (-1, -1, -1, -1) | 4 | |
| gt | EqOperatorSimpleConverter | yes | (-1, -1, -1, -1) | 4 | |
| | ConstInputConverter | yes | (-1, -1, -1, -1) | 4 | |
| | GtConverter | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| | GtMethodConverter | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| | GtOperator | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| | EqOperator | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| hardsigmoid | | yes | (-1, -1, -1, -1) | 4 | |
| hardtanh | | yes | (-1, -1, -1, -1) | 4 | |
| interpolate | | yes | (-1, -1, -1, -1) | 4 | |
| isinf | | yes | (-1, -1, -1, -1) | 4 | |
| leaky_relu | | yes | (-1, -1, -1, -1) | 4 | |
| linear | | partially | (-1, 3, 5) | 1 | AssertionError: Currently we only support one dynmaic dim for linear and it can't be the last dim. |
| logical_and | | yes | (-1, -1, -1, -1) | 4 | |
| logical_or | | yes | (-1, -1, -1, -1) | 4 | |
| logical_xor | | yes | (-1, -1, -1, -1) | 4 | |
| lt | | yes | (-1, -1, -1, -1) | 4 | |
| masked_fill | | no | limitation in converter | | RuntimeError: Trying to create tensor with negative dimension -1: \[-1, -1, -1, -1\] |
| mat_mul | | yes | batch dim | | |
| max | MaxFullReduce | yes | (-1, -1, -1, -1) | 4 | |
| | MaxDimReduce | yes | (-1, -1, -1, -1) | 4 | |
| | MaxMethod | yes | (-1, -1, -1, -1) | 4 | |
| maximum | | yes | (-1, -1, -1, -1) | 4 | |
| maxpool | max_pool1d | partially | (1, 1, -1) | 1 | shape is not set to (-1, -1, -1) since a reshape dimension with more than one -1 wildcard is not allowed when adding the unsqueeze layer |
| | max_pool2d | yes | (-1, -1, -1, -1) | 4 | |
| | max_pool3d | yes | (-1, -1, -1, -1, -1) | 5 | |
| min | MinFullReduce | yes | (-1, -1, -1, -1) | 4 | |
| | MinDimReduce | yes | (-1, -1, -1, -1) | 4 | |
| | MinMethod | yes | (-1, -1, -1, -1) | 4 | |
| minimum | | yes | (-1, -1, -1, -1) | 4 | |
| narrow | | partially | (-1, 3, -1, -1) | 3 | AssertionError: Can't chunk on dynamic shape dimension! |
| ne | NeFunctionConverter | yes | (-1, -1, -1, -1) | 4 | |
| | NeMethodConverter | yes | (-1, -1, -1, -1) | 4 | |
| | NeOperatorConverter | yes | (-1, -1, -1, -1) | 4 | |
| | ConstInputConverter | yes | (-1, -1, -1, -1) | 4 | |
| | NeOperatorConstantConverter | partially | (3, -1) | 1 | |
| new_ones | | yes | (-1, -1, -1, -1) | 4 | |
| numel | | no | limitation in converter | | RuntimeError: numel does not support dynamic shapes. |
| pad | | no | limitation in converter | | test\_pad\_with\_dynamic\_shape\_four\_dimensions\_0\_2d (deeplearning.trt.torch\_tensorrt.py.torch\_tensorrt.fx.test.converters.acc\_op.test\_pad.TestPadConverter) ... \[07/15/2022-09:23:18\] \[TRT\] \[E\] 2: \[intInterval.cpp::max::26\] Error Code 2: Internal Error (Assertion !empty() failed. |
| permute | | yes | (-1, -1, -1, -1) | 4 | |
| prod | | yes | (-1, -1, -1, -1) | 4 | |
| quantize\_per\_tensor | | yes | (-1, -1, -1, -1) | 4 | |
| reduce op | | yes | (-1, -1, -1, -1) | 4 | |
| relu | | yes | (-1, -1, -1, -1) | 4 | |
| repeat interleave | | partially | (-1, 3, 2) | 1 | AssertionError: Currently we don't support unsqueeze with more than one dynamic dims. |
| reshape | | yes | (-1, -1, -1, -1) | 4 | |
| selu | | yes | (-1, -1, -1, -1) | 4 | |
| sigmoid | | yes | (-1, -1, -1, -1) | 4 | |
| silu | | yes | (-1, -1, -1, -1) | 4 | |
| size | | yes | (-1, -1, -1, -1) | 4 | |
| softmax | | yes | (-1, -1, -1, -1) | 4 | |
| softsign | | yes | (-1, -1, -1, -1) | 4 | |
| split | | partially | (-1, 10, -1) | 2 | AssertionError: Can't chunk on dynamic shape dimension! |
| squeeze | | partially | (1, -1, 2) | 1 | AssertionError: Currently more than one dynamic dim for input to squeeze is not supported. |
| std | | yes | (-1, -1, -1, -1) | 4 | |
| tanh | | yes | (-1, -1, -1, -1) | 4 | |
| tile | | yes | (-1, -1, -1, -1) | 4 | |
| to_dtype | int | yes | (-1, -1, -1, -1) | 4 | |
| | float | yes | (-1, -1, -1, -1) | 4 | |
| topk | | yes | (-1, -1, -1, -1) | 4 | |
| transpose_convolution | conv_transpose2d | partially | (-1, 3, -1, -1) | 3 | |
| | conv_transpose3d | partially | (-1, 3, -1, -1, -1) | 4 | |
| type_as | | yes | (-1, -1, -1, -1) | 4 | RuntimeError: ShapeProp error for: node=%type\_1 : \[#users=1\] = call\_method\[target=type\](args = (%input_1,), kwargs = {dtype: torch.float32}) with meta={} |
| unary ops | | yes | (-1, -1, -1, -1) | 4 | |
| unsqueeze | | partially | (-1, 2, 3) | 1 | AssertionError: Currently we don't support unsqueeze with more than one dynamic dims. |
| where | | no | limitation in converter | | torch.broadcast_shape can not handle -1 dimension in shape \[-1, 2, 2\] |


Binary ops include the following operations:

| Binary Ops |
| ---------- |
| add |
| sub |
| div |
| mul |
| floor_div |
| fmod |
| floor_divide |
| pow |

Unary ops include the following operations:

| Unary Ops |
| --------- |
| rsqrt |
| sin |
| cos |
| tan |
| sinh |
| cosh |
| asin |
| acos |
| atan |
| abs |
| neg |
| reciprocal |
| sqrt |
| log |
| exp |
| floor |
| ceil |
| sign |

Note: For more information about the test method, please refer to the operation test files. Additionally, test files include information about errors encountered during dynamic shape testing.
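The table above marks dynamic dimensions with a -1 wildcard, the same convention consumed by `get_dynamic_dims`, which the renamed modules below import from the `fx_ts_compat` utils. A simplified, hypothetical sketch of such a helper (the real implementation may differ) is:

```python
def get_dynamic_dims(shape):
    """Return the indices of wildcard (-1) dimensions in a shape.

    Hypothetical sketch of the helper imported from the fx_ts_compat
    utils module; the actual implementation may differ.
    """
    return [i for i, s in enumerate(shape) if s == -1]


# A conv2d-style spec from the table: batch and spatial dims dynamic,
# the channel dim fixed at 3
print(get_dynamic_dims((-1, 3, -1, -1)))  # → [0, 2, 3]
```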

py/torch_tensorrt/dynamo/fx2trt.py renamed to py/torch_tensorrt/dynamo/fx_ts_compat/fx2trt.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -13,7 +13,7 @@
 from torch.fx.node import _get_qualified_name
 from torch.fx.passes.shape_prop import TensorMetadata

-from torch_tensorrt.dynamo import CONVERTERS
+from torch_tensorrt.dynamo.fx_ts_compat import CONVERTERS
 from .input_tensor_spec import InputTensorSpec
 from torch_tensorrt.fx.observer import Observer
 from .utils import get_dynamic_dims, LowerPrecision, torch_dtype_to_trt
```

py/torch_tensorrt/dynamo/passes/lower_pass_manager_builder.py renamed to py/torch_tensorrt/dynamo/fx_ts_compat/passes/lower_pass_manager_builder.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -8,7 +8,7 @@
 from torch.fx.passes.pass_manager import inplace_wrapper, PassManager
 from torch.fx.passes.shape_prop import ShapeProp
 from torch.fx.passes.splitter_base import generate_inputs_for_submodules, SplitResult
-from torch_tensorrt.dynamo.utils import LowerPrecision
+from torch_tensorrt.dynamo.fx_ts_compat.utils import LowerPrecision
 from torch_tensorrt import _Input
 from ..input_tensor_spec import InputTensorSpec
```

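A module move like this touches every import site in the tree. One mechanical way to apply such a rename (this is illustrative, not the commit's actual tooling; the file name is hypothetical and GNU sed's in-place mode is assumed) is a grep-and-sed pass:

```shell
# Scratch tree with one hypothetical file importing the old module path
demo_dir=$(mktemp -d)
printf 'from torch_tensorrt.dynamo.utils import LowerPrecision\n' \
  > "$demo_dir/lower_pass_manager_builder.py"

# Rewrite the old path to the nested fx_ts_compat location across the tree
grep -rl 'torch_tensorrt\.dynamo\.utils' "$demo_dir" \
  | xargs sed -i 's/torch_tensorrt\.dynamo\.utils/torch_tensorrt.dynamo.fx_ts_compat.utils/g'

cat "$demo_dir/lower_pass_manager_builder.py"
```

`grep -rl` narrows the edit to files that actually contain the old path, so sed never rewrites (or even opens) unrelated files.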