embed extended header inside .ptd flatbuffer section #7965
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7965
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure. As of commit b73baeb with merge base cd51da4:
NEW FAILURE - The following job has failed:
BROKEN TRUNK - The following job failed but was present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D68578075
Summary: Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively. Differential Revision: D68578075
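For readers unfamiliar with the trick: a flatbuffer file starts with a 4-byte root-table offset and a 4-byte file identifier, and the bytes between the identifier and the root table are otherwise unused padding, so a fixed-layout header can be tucked in at byte 8 without breaking standard flatbuffer parsers. Below is a minimal sketch of reading such an embedded header; the magic value, offset, and field layout are illustrative assumptions, not the actual FlatTensor header definition.

```python
import struct

# Hypothetical reader for a header embedded inside a flatbuffer file.
# Assumed layout (illustrative only, not the real FlatTensor header):
#   bytes 0-3 : flatbuffer root-table offset
#   bytes 4-7 : flatbuffer file identifier
#   bytes 8+  : embedded header: 4-byte magic, u32 header length,
#               u64 flatbuffer size, u64 segment base offset
ASSUMED_HEADER_OFFSET = 8
ASSUMED_MAGIC = b"FH01"  # placeholder magic; an assumption

def read_embedded_header(path: str):
    with open(path, "rb") as f:
        data = f.read(ASSUMED_HEADER_OFFSET + 24)
    if len(data) < ASSUMED_HEADER_OFFSET + 24:
        return None  # file too small to hold the assumed header
    magic = data[8:12]
    if magic != ASSUMED_MAGIC:
        return None  # no embedded header present
    header_len, fb_size, seg_base = struct.unpack_from("<IQQ", data, 12)
    return {
        "header_len": header_len,
        "flatbuffer_size": fb_size,
        "segment_base_offset": seg_base,
    }
```

Because the header lives inside the flatbuffer's padding rather than before it, a plain flatbuffer parser (or flatc itself) can still open the file directly, which is what makes the existing .pte tooling reusable here.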
Force-pushed from fc72e99 to a052347.
Force-pushed from a052347 to f2987eb.
LGTM. Nit: rename extended_header to flat_tensor_header for flat_tensor?
Looks good! Thanks for making this change.
A good double-check would be to try parsing a .ptd file with the flatc
command-line tool, like
flatc --json --defaults-json --strict-json -o /tmp extension/flat_tensor/serialize/flat_tensor.fbs -- input.ptd
and see whether it creates /tmp/input.json.
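If you want to repeat that check, here is a small sketch wrapping the same flatc invocation, assuming flatc is on PATH and the schema path matches the command above; check_ptd is a hypothetical helper name, not part of the repo.

```python
import pathlib
import subprocess

# Decode a .ptd file to JSON with flatc, mirroring the suggested command.
# Assumes flatc is installed and is run from an executorch checkout.
def check_ptd(
    ptd_path: str,
    schema: str = "extension/flat_tensor/serialize/flat_tensor.fbs",
) -> bool:
    subprocess.run(
        ["flatc", "--json", "--defaults-json", "--strict-json",
         "-o", "/tmp", schema, "--", ptd_path],
        check=True,  # raises if flatc rejects the file
    )
    # flatc names the output after the input file, with a .json suffix,
    # e.g. input.ptd -> /tmp/input.json.
    out = pathlib.Path("/tmp") / (pathlib.Path(ptd_path).stem + ".json")
    return out.exists()
```

A successful run confirms that the header embedding leaves the file directly parseable by stock flatbuffer tooling.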
Summary: Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively. Reviewed By: lucylq Differential Revision: D68578075
Force-pushed from f2987eb to b73baeb.
Ran the command Dave suggested on a local update to the train xor stack that makes it use .ptd, and it worked. (I'll land that change later once the runtime can load .ptds.)
Differential Revision: D68578075 Pull Request resolved: #7965
Summary: Embed the header inside the flatbuffer. We do this for .pte and it lets us reuse a lot of flatbuffer tools natively.
Differential Revision: D68578075