
Add option to disable operator profiling #5720


Closed
wants to merge 1 commit

Conversation

tarun292
Contributor

Summary:
For smaller models, the overhead of profiling operators can be prohibitively large, distorting the measured inference execution time significantly. This change gives users an option to disable operator profiling so that only the important events, such as overall inference execution time, are profiled.

To disable operator profiling, users call:

```
etdump_gen.set_event_tracer_profiling_level(executorch::runtime::EventTracerProfilingLevel::kNoOperatorProfiling);
```

Differential Revision: D61883224


pytorch-bot bot commented Sep 27, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5720

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 0062774 with merge base 3a25651:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Sep 27, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D61883224

facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Sep 27, 2024
Summary:
X-link: pytorch/executorch#5720

For smaller models the overhead of profiling ops might be prohibitively large (distorting the inference execution time significantly) so we provide users an option to disable op profiling and essentially only profile the important events such as inference execution time.

To disable operator profiling users need to do:
```
etdump_gen.set_event_tracer_profiling_level(executorch::runtime::EventTracerProfilingLevel::kNoOperatorProfiling);
```

Test Plan: Added test case.

Differential Revision: D61883224
facebook-github-bot pushed a commit that referenced this pull request Sep 27, 2024
Reviewed By: Vysarat
@facebook-github-bot
Copy link
Contributor

This pull request was exported from Phabricator. Differential Revision: D61883224

facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Sep 27, 2024
facebook-github-bot pushed a commit that referenced this pull request Sep 28, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Sep 28, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 1, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 1, 2024
tarun292 added a commit that referenced this pull request Oct 2, 2024
tarun292 added a commit to pytorch/pytorch that referenced this pull request Oct 2, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 2, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 2, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 2, 2024
Reviewed By: dbort
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 2, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 3, 2024
tarun292 added a commit to pytorch/pytorch that referenced this pull request Oct 3, 2024
tarun292 added a commit to pytorch/pytorch that referenced this pull request Oct 3, 2024
tarun292 added a commit that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 3, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Oct 4, 2024
@facebook-github-bot
Contributor

This pull request has been merged in acfcdd5.

pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Oct 4, 2024
Pull Request resolved: #136838
Approved by: dbort
Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported Merged
3 participants