# Debugging Models in ExecuTorch

With the ExecuTorch Developer Tools, users can debug their models for numerical inaccuracies and extract model outputs from their device to perform quality analysis (such as signal-to-noise ratio, mean squared error, etc.).
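
For a concrete sense of the metrics mentioned above, the sketch below computes MSE and SNR between a runtime output and a reference output. It is illustrative only: the tensors are made up for this example, and the `compare_results` utility shown later on this page provides this kind of analysis out of the box.

```python
import torch

# Hypothetical tensors for illustration: an output captured from the device
# and the corresponding output from the eager (reference) model.
runtime_output = torch.randn(1, 10)
reference_output = runtime_output + 0.01 * torch.randn(1, 10)

# Mean squared error between the runtime output and the reference output.
mse = torch.mean((runtime_output - reference_output) ** 2)

# Signal-to-noise ratio in dB, treating the reference as the signal and the
# difference from it as the noise.
snr_db = 10 * torch.log10(torch.mean(reference_output ** 2) / mse)

print(f"MSE: {mse.item():.6f}, SNR: {snr_db.item():.2f} dB")
```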

Currently, ExecuTorch supports the following debugging flows:
- Extraction of model level outputs via ETDump.
- Extraction of intermediate outputs (outside of delegates) via ETDump:
  - Linking of these intermediate outputs back to the eager model Python code.


## Steps to debug a model in ExecuTorch

### Runtime
For a real example reflecting the steps below, please refer to [example_runner.cpp](https://github.com/pytorch/executorch/blob/main/examples/devtools/example_runner/example_runner.cpp).

1. [Optional] Generate an [ETRecord](./etrecord.rst) while exporting your model (see the Python sketch after this list). When provided, this enables users to link profiling information back to the eager model source code (with stack traces and module hierarchy).
2. Integrate [ETDump generation](./sdk-etdump.md) into the runtime and set the debugging level by configuring the `ETDumpGen` object. Then, provide an additional buffer to which intermediate outputs and program outputs will be written. Currently we support two levels of debugging:
   - Program level outputs
     ```C++
     Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
     etdump_gen.set_debug_buffer(buffer);
     etdump_gen.set_event_tracer_debug_level(
         EventTracerDebugLogLevel::kProgramOutputs);
     ```

   - Intermediate outputs of executed (non-delegated) operations (this will include the program level outputs too)
     ```C++
     Span<uint8_t> buffer((uint8_t*)debug_buffer, debug_buffer_size);
     etdump_gen.set_debug_buffer(buffer);
     etdump_gen.set_event_tracer_debug_level(
         EventTracerDebugLogLevel::kIntermediateOutputs);
     ```
3. Build the runtime with the pre-processor flag that enables tracking of debug events. Instructions are in the [ETDump documentation](./sdk-etdump.md).
4. Run your model and dump out the ETDump buffer as described [here](./sdk-etdump.md). (Do the same for the debug buffer if it was configured above.)
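
As a companion to step 1, here is a minimal sketch of generating an ETRecord at export time. The toy model, example inputs, and file path are made up for illustration; the overall flow (export, `to_edge`, deep-copying the edge program before `to_executorch()`, then calling `generate_etrecord`) follows the [ETRecord](./etrecord.rst) documentation, so adapt it to your own export script.

```python
import copy

import torch
from executorch.devtools import generate_etrecord
from executorch.exir import to_edge
from torch.export import export


class ToyModel(torch.nn.Module):
    """Toy model used only to make this sketch self-contained."""

    def forward(self, x):
        return torch.nn.functional.relu(x)


model = ToyModel().eval()
example_inputs = (torch.randn(1, 10),)

# Export to ATen dialect, then lower to the edge dialect.
aten_program = export(model, example_inputs)
edge_program = to_edge(aten_program)

# Keep a copy of the edge program; to_executorch() mutates it in place.
edge_program_copy = copy.deepcopy(edge_program)
et_program = edge_program.to_executorch()

# Write out the ETRecord so the Inspector can later link runtime events
# back to the eager/edge program source.
generate_etrecord("etrecord.bin", edge_program_copy, et_program)
```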


### Accessing the debug outputs post run using the Inspector APIs
Once a model has been run, users can leverage the [Inspector APIs](./sdk-inspector.rst), along with the generated ETDump and debug buffers, to inspect these debug outputs.

```python
from executorch.devtools import Inspector

# Create an Inspector instance with the ETDump and the debug buffer.
inspector = Inspector(etdump_path=etdump_path,
                      buffer_path=buffer_path,
                      # etrecord is optional; if provided, it links the runtime
                      # events back to the eager model Python source code.
                      etrecord=etrecord_path)

# Accessing program outputs is as simple as this:
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        print(event_block.run_output)

# Accessing intermediate outputs from each event (an event here is essentially
# an instruction that executed in the runtime).
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        for event in event_block.events:
            print(event.debug_data)
            # If an ETRecord was provided during Inspector initialization,
            # the stack traces and module hierarchy of these events can also
            # be printed.
            print(event.stack_traces)
            print(event.module_hierarchy)
```

We've also provided a simple set of utilities that let users perform quality analysis of their model outputs with respect to a set of reference outputs (possibly from the eager mode model).

```python
from executorch.devtools.inspector import compare_results

# Run a simple quality analysis between the model outputs sourced from the
# runtime and a set of reference outputs.
#
# Setting plot to True will result in the quality metrics being graphed
# and displayed (when run from a notebook), and they will also be written out
# to the filesystem. A dictionary containing the results is always returned.
for event_block in inspector.event_blocks:
    if event_block.name == "Execute":
        compare_results(event_block.run_output, ref_outputs, plot=True)
```
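
How the reference outputs are produced is up to the user; one natural source, as the paragraph above suggests, is the eager PyTorch model run on the same inputs that were fed to the ExecuTorch runtime. The sketch below is illustrative only: the toy model and inputs are hypothetical, and wrapping the output in a list (to mirror the list-of-tensors shape of `run_output`) is an assumption you may need to adjust for your model.

```python
import torch


class ToyModel(torch.nn.Module):
    """Toy model standing in for the eager version of the deployed model."""

    def forward(self, x):
        return torch.nn.functional.relu(x)


# The same example inputs that were passed to the ExecuTorch runtime when
# the ETDump and debug buffer were generated.
model = ToyModel().eval()
example_inputs = (torch.randn(1, 10),)

# Collect reference outputs from the eager model without tracking gradients.
with torch.no_grad():
    eager_output = model(*example_inputs)

# Mirror the list-of-tensors shape used for run_output above (assumption).
ref_outputs = [eager_output] if isinstance(eager_output, torch.Tensor) else list(eager_output)
```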

Please update your link to <https://pytorch.org/executorch/main/model-debugging.html>. This URL will be deleted after v0.4.0.