Consolidate EXECUTORCH_BUILD_CUSTOM option #2935
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2935
Note: links to docs will display an error until the docs builds have completed.
As of commit adc04ba with merge base 554cd27: 3 new failures, 2 unrelated failures.
NEW FAILURES - the following jobs have failed.
BROKEN TRUNK - the following jobs failed but were present on the merge base. Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D55907750
Summary: Currently `EXECUTORCH_BUILD_CUSTOM` is not being respected properly. If this option is false, we should not build `llama2/custom_ops` anywhere. If this option is true, we should build `llama2/custom_ops` into both the llama runner binary and the pybind extension. This PR consolidates it. Reviewed By: lucylq. Differential Revision: D55907750
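The gating described in the summary can be sketched as a single CMake guard. This is an illustrative sketch only: `EXECUTORCH_BUILD_CUSTOM` is the real option from the PR, but the directory path and the `llama_runner`/`portable_lib` target names are assumptions, not necessarily the actual targets in the ExecuTorch tree.

```cmake
# Sketch (hypothetical target names): guard the custom-ops build behind
# one option so the runner binary and the pybind module stay consistent.
option(EXECUTORCH_BUILD_CUSTOM "Build llama2/custom_ops" OFF)

if(EXECUTORCH_BUILD_CUSTOM)
  # Build custom ops exactly once...
  add_subdirectory(examples/models/llama2/custom_ops)
  # ...and link the same library into every consumer that needs it.
  target_link_libraries(llama_runner PRIVATE custom_ops)
  target_link_libraries(portable_lib PRIVATE custom_ops)
endif()
```

With the option consolidated this way, flipping a single flag either builds and links the ops everywhere or omits them entirely, instead of each consumer checking (or ignoring) the option on its own.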
Force-pushed from e313a29 to f634ed8.
Force-pushed from f634ed8 to 6ef104a.
Force-pushed from 6ef104a to 5ed091c.
Force-pushed from 5ed091c to 323324a.
Force-pushed from 323324a to b440523.
Summary: Fix the way we use `at::from_blob()` and add a proper namespace to `CompileTimeFunctionPointer` so it is not confused with `at::CompileTimeFunctionPointer`. bypass-github-pytorch-ci-checks bypass-export-ci-checks. Reviewed By: lucylq. Differential Revision: D55907751
Force-pushed from b440523 to adc04ba.
This pull request has been merged in d209e41.