Default to file load mode in module #10827
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10827
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 585f0c7 with merge base 27e159e.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D74607072
How about changing it to File?
That's good with me - I'll swap it to File. I initially went with Mmap because peak memory is reduced by quite a bit, but I'm okay to leave that as an opt-in optimization.
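For reference, a minimal sketch of what the opt-in looks like from the caller's side, assuming the existing executorch::extension::Module constructor and its Module::LoadMode enum (not code from this PR):

```cpp
#include <executorch/extension/module/module.h>

using executorch::extension::Module;

int main() {
  // Default after this change: plain file load (no mmap, no mlock).
  Module default_module("model.pte");

  // Opt-in: memory-map the program to reduce peak resident memory.
  Module mmap_module("model.pte", Module::LoadMode::Mmap);

  // Opt-in: mmap + mlock, the previous default, for callers that have the
  // required permissions and can afford the locked memory.
  Module mlock_module("model.pte", Module::LoadMode::MmapUseMlock);

  return 0;
}
```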
Summary: We've seen a number of people hit issues with mlock as the default load mode. In particular, mlock will fail above a certain PTE size and may require elevated permissions on some systems. We should not default to using mlock and can leave it as an opt-in optimization. We could use file or mmap mode. I'm leaning towards mmap, as it can lead to lower peak resident memory size, which can be significant for larger models.
Differential Revision: D74607072
Force-pushed 89ac873 to 585f0c7
Summary:
We've seen a number of people hit issues with mlock as the default load mode. In particular, mlock will fail above a certain PTE size and may require elevated permissions on some systems. We should not default to using mlock and can leave it as an opt-in optimization.
Differential Revision: D74607072