feat: Add DrawGraph tool for graph visualization #7172
Conversation

Nick-Wei commented on Dec 4, 2024:
- Implemented the DrawGraph tool to visualize graphs.
- Added test cases to validate the correctness of the generated graphs.
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7172
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure. As of commit 938d8f2 with merge base 8861b9a, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @haowhsu-quic, this PR enables drawing graphs in the C++ backend. During preprocessing, we can obtain the op wrapper list, which can then be used to generate the corresponding graphs. Hopefully we can have this merged, thanks.
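The flow described here can be sketched roughly as follows. This is a minimal, self-contained illustration: the dicts stand in for the real `OpWrapper.GetOpConfig()` payload, and the key names (`name`, `inputs`) are illustrative only, not the actual QNN backend API.

```python
def to_dot(op_wrappers):
    # Build DOT source from an op-wrapper-like list: one graph node per
    # op, one edge per input dependency.
    lines = ["digraph qnn {"]
    for op in op_wrappers:
        node = op["name"]
        lines.append(f'  "{node}";')
        for src in op["inputs"]:
            lines.append(f'  "{src}" -> "{node}";')
    lines.append("}")
    return "\n".join(lines)

# Stand-in op wrapper list obtained "during preprocessing".
ops = [
    {"name": "conv", "inputs": ["x", "w"]},
    {"name": "relu", "inputs": ["conv"]},
]
print(to_dot(ops))
```

The resulting DOT text can be rendered with any graphviz frontend.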
.def(
    "GetInputTensors",
    &OpWrapper::GetInputTensors,
    "A function which get input tensors")
`gets`. Ditto for the following ones.
);

py::enum_<Qnn_Definition_t>(m, "QnnDefinition")
    .value("IMPL_GENERATED", Qnn_Definition_t::QNN_DEFINITION_IMPL_GENERATED)
"IMPL_GENERATED" -> "QNN_DEFINITION_IMPL_GENERATED" to align the style. Ditto for the following ones.
backends/qualcomm/tests/models.py (Outdated)

@@ -1079,3 +1079,19 @@ def forward(self, x, y):
        x = x.view(new_shape)
        x = x.permute(0, 2, 1, 3)
        return torch.matmul(x, y.transpose(-1, -2))

class draw_graph_model(torch.nn.Module):
CamelCase for the class name would be better; please also put it in alphabetical order.
backends/qualcomm/utils/utils.py (Outdated)

@@ -989,3 +995,135 @@ def tag_quant_io(gm: torch.fx.GraphModule, get_quant_io_dtype_fn: Callable):
    for node in gm.graph.nodes:
        if dtype := get_quant_io_dtype_fn(node):
            node.meta[QCOM_QUANTIZED_IO] = dtype

class DrawGraph:
Could you put the class definition below `_AnnotationSkipper`? Thanks.
backends/qualcomm/utils/utils.py
Outdated
if input_node_name not in node_list: | ||
node_list[input_node_name] = {"node" : input_node, "input_list" : []} | ||
input_list.append(input_node_name) | ||
# ToDo: tensor v2 |
#TODO
backends/qualcomm/utils/utils.py (Outdated)

input_list.append(input_node_name)
# ToDo: tensor v2
elif (op_wrapper.GetOpConfig()["outputTensors"][j].version == 2):
    pass
Could we raise an exception here to prevent drawing the wrong graph?
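One way to act on this suggestion might look like the sketch below. The exception name, the `check_tensor_version` helper, and the `FakeTensor` stand-in are all hypothetical; the real check would live inside `DrawGraph` and operate on the QNN tensor wrappers.

```python
class UnsupportedTensorVersion(Exception):
    """Raised when a tensor version has no drawing support yet."""

def check_tensor_version(tensor):
    # Only version-1 tensors are handled; fail loudly on anything else
    # instead of silently producing an incomplete or wrong graph.
    if tensor.version != 1:
        raise UnsupportedTensorVersion(
            f"tensor v{tensor.version} is not supported by DrawGraph yet"
        )

class FakeTensor:  # stand-in for the real QNN tensor wrapper
    def __init__(self, version):
        self.version = version

check_tensor_version(FakeTensor(1))  # passes silently
try:
    check_tensor_version(FakeTensor(2))
except UnsupportedTensorVersion as e:
    print(e)
```

Raising instead of `pass`-ing makes the unsupported case visible as soon as a v2 tensor appears.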
backends/qualcomm/utils/utils.py
Outdated
node_list[node_name]["input_list"] = input_list | ||
# TODO: tensor v2 | ||
elif (op_wrapper.GetOpConfig()["outputTensors"][i].version == 2): | ||
pass |
Ditto.
Thank you for the contribution, I think this is definitely a great tool for us to debug the real graph in QNN.
backends/qualcomm/utils/utils.py
Outdated
node_name = node.name | ||
input_list = [] | ||
for j in range(op_wrapper.GetOpConfig()["numOfInputs"]): | ||
if(op_wrapper.GetOpConfig()["inputTensors"][j].version == 1): |
Looks like wrong indentation here.
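With the indentation fixed, the loop body would sit under the `for` and the version check under the loop, roughly like the sketch below. The `op_config` dict simulates what `OpWrapper.GetOpConfig()` returns; its key names here are illustrative.

```python
def collect_input_names(op_config):
    # Loop body indented under `for`, version check indented under the
    # loop: this is the structure the review comment asks for.
    input_list = []
    for j in range(op_config["numOfInputs"]):
        tensor = op_config["inputTensors"][j]
        if tensor["version"] == 1:
            input_list.append(tensor["name"])
    return input_list

config = {
    "numOfInputs": 2,
    "inputTensors": [
        {"name": "x", "version": 1},
        {"name": "w", "version": 1},
    ],
}
print(collect_input_names(config))  # -> ['x', 'w']
```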
@pytorchbot label "feature"
backends/qualcomm/utils/utils.py (Outdated)

@@ -93,6 +93,12 @@
from torch.fx import passes
from torch.fx.passes.operator_support import OperatorSupportBase
from torch.library import Library
from graphviz import Digraph
Could you create a `debugger` folder under `backends/qualcomm` and move this implementation there? Then we could get rid of the Python package dependency in `utils.py`.
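The split the reviewer suggests might look like this sketch of a hypothetical `backends/qualcomm/debugger` module. The graphviz `Digraph`, `node`, and `edge` calls are the real graphviz Python API, but the function name, module layout, and the op-wrapper dict shape are illustrative assumptions. Importing graphviz lazily inside the function keeps `utils.py` (and anyone not drawing graphs) free of the dependency.

```python
# Hypothetical backends/qualcomm/debugger/utils.py sketch: the graphviz
# dependency lives here, not in the shared utils module.
def draw(op_wrappers, filename="qnn_graph"):
    try:
        # Imported lazily, only when a graph is actually drawn.
        from graphviz import Digraph
    except ImportError as e:
        raise RuntimeError("graphviz is required for DrawGraph") from e

    dot = Digraph(name=filename)
    for op in op_wrappers:
        dot.node(op["name"])          # one graph node per op
        for src in op["inputs"]:
            dot.edge(src, op["name"]) # one edge per input dependency
    return dot
```

Callers that never invoke `draw` pay no import cost, which is exactly the dependency isolation the review asks for.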
Nick-Wei force-pushed from eb5c334 to 938d8f2:
- Implemented the DrawGraph tool to visualize graphs.
- Added test cases to validate the correctness of the generated graphs.
@pytorchbot label "feature"
@haowhsu-quic The above modifications have been completed. Please help confirm, thanks.
Thank you for taking the time to review it.
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thank you for contributing!