Commit e7267a1

narendasangs-olive authored and committed
Combination: 16 commits with aten improvements
- refactor: Moving elementwise and unary core to impl (Signed-off-by: Naren Dasan <[email protected]>; new file: ../converters/impl/unary/base.py)
- Moving elementwise core to impl - rsqrt (FX Converter Refactor [9/N]) <Target: converter_reorg_elementwise> (#1905)
- Converter reorg fmod
- Converter reorg and rsub
- Rsub error fixes and linting error fixed
- Rsub test case to include different inputs
- Converter reorg batch norm
- Batch norm error fix and linting issue fix
- layer_norm converter
- Layer norm linting correction
- ops file correction
- Fixing lint
- Acc_ops layer_norm correction
- Converter reorg and softmax operation
- Softmax linting error fix
- Converter reorg and gelu
- Linting error fix
- Converter reorg and squeeze operator
- Correcting squeeze operator implementation, linting error, and acc squeeze test
- Adding the condition to convert dim to int and removing the comment
- Converter reorg and select operation
- Select operation correction and linting changes
- Converter reorg and slice
- Converter reorg slice op
- Correcting linting error and slice changes
- Correcting the slice operation
- Converter reorg and matmul
- Matmul issue fixes and lint error check
- Moving matmul to individual file
- Converter reorg and where operator
- Adding where aten op
- aten::where correction and linting error changes
- aten::unsqueeze impl refactor (Signed-off-by: Boris Fomitchev <[email protected]>)
- Moved clamp to impl (Signed-off-by: Boris Fomitchev <[email protected]>)
- Fixed method name (Signed-off-by: Boris Fomitchev <[email protected]>)
- fix: Add automatic type promotion for FX ops: implement functionality to cast tensors to alternative types, add type promotion and the necessary casts to elementwise ops, address issues in FX ops where mixed-precision computations can cause errors, and add test cases to validate the fix (a minimal sketch of this promote-then-cast pattern follows the commit metadata below)
- Revert all changes to py/torch_tensorrt/fx
- Revert "fix: Add automatic type promotion for FX ops" (reverts commit f1f3716)
- Revert "Moved clamp to impl" (reverts commit df401dd)
- Revert "aten::unsqueeze impl refactor" (reverts commit b424735)
- Revert "Converter reorg and where operator" (reverts commit b4da15e)
- Revert "converter reorg and matmul" (reverts commit 7551eee)
- Revert "converter reorg and slice" (reverts commit 9bbdc9e)
- Revert "Converter reorg and select operation" (reverts commit fb70253)
- Revert "Converter reorg and squeeze operator" (reverts commit 294545c)
- Revert "Converter reorg and gelu" (reverts commit 37d1168)
- Revert "Converter reorg and softmax operation" (reverts commit 1ba6d13)
- Revert "layer_norm converter" (reverts commit e0b34b1)
- Revert "Converter reorg batch norm" (reverts commit 59354e5)
- Revert "Converter reorg and rsub" (reverts commit db15d27)
- Revert "Converter reorg fmod" (reverts commit ce3fa67)
- Revert "Moving elementwise core to impl - rsqrt (FX Converter Refactor [9/N]) <Target: converter_reorg_elementwise> (#1905)" (reverts commit 7158ca5)
- Revert "refactor: Moving elementwise and unary core to impl" (reverts commit 45e43ca)
- fix: Replay all FX changes in Dynamo: add multiple fixes to make the FX changes appear in the Dynamo directory using the Dynamo registry; all converters with open PRs are linked and shown; update references, imports, code, merges, and rebases accordingly; add new test cases to Dynamo for the converters
- Temporarily removing rsub pending fix
- Fixing clamp to not use Torch (Signed-off-by: Boris Fomitchev <[email protected]>)
- Fixing select to not use torch
- fix: Reorganize folders in latest implementation; update test references and imports accordingly
- Embedding operator in dynamo
- reciprocal lowering pass
- fix: Fix for Dynamic Shape Tests + Input class
- feat: Add permute operation implementation
- chore: Move converter registry, update imports
1 parent 9c24ba3 · commit e7267a1
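The automatic type promotion described above (added for FX, then reverted and replayed in Dynamo) follows a standard promote-then-cast pattern for elementwise ops. The sketch below is a minimal illustration in plain PyTorch, not the converter code from this commit; `promote_binary_inputs` is a hypothetical helper name.

```python
import torch


def promote_binary_inputs(lhs: torch.Tensor, rhs: torch.Tensor):
    # Hypothetical helper illustrating the promote-then-cast pattern the
    # commit message describes; not the converter implementation itself.
    promoted = torch.promote_types(lhs.dtype, rhs.dtype)
    return lhs.to(promoted), rhs.to(promoted)


# Mixed-precision operands that can trip up elementwise converters:
a = torch.ones(2, 2, dtype=torch.float16)
b = torch.ones(2, 2, dtype=torch.float32)
a_p, b_p = promote_binary_inputs(a, b)
assert a_p.dtype == b_p.dtype == torch.float32  # both cast to the common type
```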


66 files changed: +4827 additions, −17 deletions

.circleci/config.yml

Lines changed: 16 additions & 0 deletions
@@ -780,6 +780,21 @@ commands:
       - store_artifacts:
           path: /tmp/testlogs
 
+  test-dynamo-converters:
+    description: "Test the Dynamo aten converters"
+    steps:
+      - run:
+          name: Run Dynamo converter tests
+          command: |
+            cd tests/py/dynamo/converters
+            TESTS_TO_RUN=$(circleci tests glob "test_*.py" | circleci tests split --split-by=timings)
+            pytest --junitxml=/tmp/artifacts/test_results/dynamo/converters/test_results.xml $TESTS_TO_RUN
+
+      - store_test_results:
+          path: /tmp/artifacts
+      - store_artifacts:
+          path: /tmp/testlogs
+
 # =================== Dynamo tests end ======================== #
 
 # Define a job to be invoked later in a workflow.
@@ -1036,6 +1051,7 @@ jobs:
           command: pip3 install --pre /tmp/dist/x86_64-linux/*cp39-cp39*.whl
       # We install torch after torch-trt because pip automatically enforces the version constraint otherwise
       - dump-test-env
+      - test-dynamo-converters
       - test-dynamo-torch_compile
       - test-dynamo-models_torch_compile
       - test-dynamo-models_torch_export
py/torch_tensorrt/_Input.py

Lines changed: 11 additions & 0 deletions
@@ -68,6 +68,17 @@ def __init__(self, *args, **kwargs):
             - Input(shape=(1,3,32,32), dtype=torch_tensorrt.dtype.int32, format=torch_tensorrt.TensorFormat.NCHW)
             - Input(min_shape=(1,3,32,32), opt_shape=[2,3,32,32], max_shape=(3,3,32,32)) #Implicitly dtype=torch_tensorrt.dtype.float32, format=torch_tensorrt.TensorFormat.NCHW
         """
+        # Compatibility code for switching over from InputTensorSpec
+        if "shape" in kwargs and "shape_ranges" in kwargs:
+            assert (
+                len(kwargs["shape_ranges"]) == 1 and len(kwargs["shape_ranges"][0]) == 3
+            )
+            del kwargs["shape"]
+
+            kwargs["min_shape"] = kwargs["shape_ranges"][0][0]
+            kwargs["opt_shape"] = kwargs["shape_ranges"][0][1]
+            kwargs["max_shape"] = kwargs["shape_ranges"][0][2]
+
         if len(args) == 1:
             if not Input._supported_input_size_type(args[0]):
                 raise TypeError(
py/torch_tensorrt/dynamo/__init__.py

Lines changed: 2 additions & 1 deletion
@@ -3,8 +3,9 @@
 
 if version.parse(sanitized_torch_version()) >= version.parse("2.1.dev"):
     from ._settings import *
+    from .conversion import *
     from .aten_tracer import trace
-    from .converter_registry import (
+    from .conversion.converter_registry import (
         DYNAMO_CONVERTERS,
         dynamo_tensorrt_converter,
     )
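Assuming the re-exports behave as this diff suggests, callers can keep importing the registry names from the top-level package even though the module moved into the conversion subpackage; a short usage sketch:

```python
# Registry names re-exported at the package top level after the move;
# assumes the torch >= 2.1.dev version gate above is satisfied.
from torch_tensorrt.dynamo import DYNAMO_CONVERTERS, dynamo_tensorrt_converter
```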
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+from enum import Enum, auto
+
+
+class SourceIR(Enum):
+    NN = auto()
+    ACC = auto()
+    ATEN = auto()
+    PRIM = auto()
+    TORCHTRT_LOWERED = auto()
+    UNKNOWN = auto()
+
+    def __str__(self):
+        if self == SourceIR.NN:
+            return "nn"
+        elif self == SourceIR.ACC:
+            return "acc"
+        elif self == SourceIR.ATEN:
+            return "aten"
+        elif self == SourceIR.PRIM:
+            return "prim"
+        elif self == SourceIR.TORCHTRT_LOWERED:
+            return "torchtrt_lowered"
+        else:
+            return "unknown_ir"
Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,5 @@
+from .SourceIR import SourceIR
+from .aten_ops_converters import *
 from .trt_interpreter import *
 from .conversion import *
 from .truncate_long_and_double import repair_long_or_double_inputs
