Feat 865 #905
Conversation
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
@peri044 what do you think of the names here? I think in a vacuum they make more sense, but it might be confusing transitioning from the old tools. Ideally this becomes what people use 99% of the time, though.
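For context, a rough sketch of how the proposed names read at a call site, based on the context managers added in this PR; `model_torchtrt`, `inputs`, `model`, and `spec` are placeholder names for illustration:

```python
import torch_tensorrt

# Only show errors and above while running inference
# (model_torchtrt and inputs are placeholder names).
with torch_tensorrt.logging.errors():
    outputs = model_torchtrt(inputs)

# Show the results of intermediate lowering passes during compilation
# (model and spec are placeholder names).
with torch_tensorrt.logging.graphs():
    model_trt = torch_tensorrt.compile(model, **spec)
```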
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- /workspace/py/torch_tensorrt/logging.py (original)
+++ /workspace/py/torch_tensorrt/logging.py (reformatted)
@@ -99,7 +99,6 @@
"""
_log(Level._to_internal_level(level), msg)
-
InternalError = LogLevel.INTERNAL_ERROR
Error = LogLevel.ERROR
Warning = LogLevel.WARNING
@@ -107,7 +106,9 @@
Debug = LogLevel.DEBUG
Graph = LogLevel.GRAPH
+
class internal_errors:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.InternalError)
@@ -115,7 +116,9 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class errors:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Error)
@@ -123,7 +126,9 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class warnings:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Warning)
@@ -131,7 +136,9 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class info:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Info)
@@ -139,7 +146,9 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class debug:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Debug)
@@ -147,7 +156,9 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class graphs:
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Graph)
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
ERROR: Some files do not conform to style guidelines
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Reminder for myself to squash and merge this when it's ready
There are some changes that do not conform to Python style guidelines:
--- /workspace/py/torch_tensorrt/logging.py (original)
+++ /workspace/py/torch_tensorrt/logging.py (reformatted)
@@ -99,7 +99,6 @@
"""
_log(Level._to_internal_level(level), msg)
-
InternalError = LogLevel.INTERNAL_ERROR
Error = LogLevel.ERROR
Warning = LogLevel.WARNING
@@ -107,6 +106,7 @@
Debug = LogLevel.DEBUG
Graph = LogLevel.GRAPH
+
class internal_errors:
"""Context-manager to limit displayed log messages to just internal errors
@@ -115,12 +115,14 @@
with torch_tensorrt.logging.internal_errors():
outputs = model_torchtrt(inputs)
"""
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.InternalError)
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class errors:
"""Context-manager to limit displayed log messages to just errors and above
@@ -130,12 +132,14 @@
with torch_tensorrt.logging.errors():
outputs = model_torchtrt(inputs)
"""
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Error)
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class warnings:
"""Context-manager to limit displayed log messages to just warnings and above
@@ -153,6 +157,7 @@
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class info:
"""Context-manager to display all info and greater severity messages
@@ -161,12 +166,14 @@
with torch_tensorrt.logging.info():
model_trt = torch_tensorrt.compile(model, **spec)
"""
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Info)
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class debug:
"""Context-manager to display full debug information through the logger
@@ -176,12 +183,14 @@
with torch_tensorrt.logging.debug():
model_trt = torch_tensorrt.compile(model, **spec)
"""
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Debug)
def __exit__(self, exc_type, exc_value, exc_tb):
set_reportable_log_level(self.external_lvl)
+
class graphs:
"""Context-manager to display the results of intermediate lowering passes
@@ -192,6 +201,7 @@
with torch_tensorrt.logging.graphs():
model_trt = torch_tensorrt.compile(model, **spec)
"""
+
def __enter__(self):
self.external_lvl = get_reportable_log_level()
set_reportable_log_level(Level.Graph)
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
ERROR: Some files do not conform to style guidelines
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
Reformatting /workspace/py/torch_tensorrt/ptq.py
Reformatting /workspace/py/torch_tensorrt/_Device.py
Reformatting /workspace/py/torch_tensorrt/logging.py
Reformatting /workspace/py/torch_tensorrt/_compile.py
Reformatting /workspace/py/torch_tensorrt/_enums.py
Reformatting /workspace/py/torch_tensorrt/_Input.py
Reformatting /workspace/py/torch_tensorrt/_util.py
Reformatting /workspace/py/torch_tensorrt/__init__.py
Reformatting /workspace/py/torch_tensorrt/ts/_compiler.py
Reformatting /workspace/py/torch_tensorrt/ts/_compile_spec.py
Reformatting /workspace/py/torch_tensorrt/ts/__init__.py
Reformatting /workspace/py/setup.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_to_backend_api.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/modules/hub.py
--- /workspace/tests/py/test_api.py (original)
+++ /workspace/tests/py/test_api.py (reformatted)
@@ -366,6 +366,7 @@
lvl = torchtrt.logging.get_reportable_log_level()
self.assertEqual(base_lvl, lvl)
+
class TestDevice(unittest.TestCase):
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_trt_intercompatability.py
Reformatting /workspace/tests/py/test_api.py
ERROR: Some files do not conform to style guidelines
Code conforms to C++ style guidelines
LGTM. One question: are we doing `with torch_tensorrt.logging.debug():` or `with torch_tensorrt.debug():`? The latter is described in the feature request #865. Any preference here?
I think logging debug is better
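For reference, the two spellings from the question above would read as follows in user code. The namespaced form is the one documented in the docstrings added by this PR; the shorter `torch_tensorrt.debug()` form from #865 is shown only for comparison and is not implemented here (`model` and `spec` are placeholders):

```python
import torch_tensorrt

# Namespaced form added in this PR (model and spec are placeholder names):
with torch_tensorrt.logging.debug():
    model_trt = torch_tensorrt.compile(model, **spec)

# Shorter form proposed in #865, shown for comparison only; not part of this change:
# with torch_tensorrt.debug():
#     model_trt = torch_tensorrt.compile(model, **spec)
```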
Signed-off-by: Naren Dasan <[email protected]> Signed-off-by: Naren Dasan <[email protected]>
Description
Adds context managers to quickly change the log level for a subset of code
Fixes #865
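The mechanism is the standard save/switch/restore context-manager pattern visible in the diffs above: each manager records the current reportable log level on entry, switches to its own level, and restores the saved level on exit, so the change only applies inside the `with` block. A minimal, self-contained sketch of that pattern (the names here are illustrative, not the torch_tensorrt API):

```python
# Illustrative sketch of the save/switch/restore pattern used by the logging
# context managers in this PR; scoped_log_level and the callables passed to it
# are placeholders, not part of the torch_tensorrt API.
class scoped_log_level:

    def __init__(self, level, get_level, set_level):
        self.level = level          # level to apply inside the block
        self.get_level = get_level  # callable that returns the current level
        self.set_level = set_level  # callable that applies a level

    def __enter__(self):
        self.external_lvl = self.get_level()  # remember the caller's level
        self.set_level(self.level)            # apply the scoped level

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.set_level(self.external_lvl)     # restore the previous level, even on error
```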