[Reflection] Sanity-check metadata sizes in MetadataReader before reading. #37863
Conversation
@swift-ci please test
Build failed
Build failed
@swift-ci please test
Build failed
Force-pushed from d2a52ab to ebd9c21
@swift-ci please test
Build failed
@swift-ci please test macos platform
Build failed
@swift-ci please smoke test macos platform
1 similar comment
@swift-ci please smoke test macos platform
Looks good to me.
Looks good, thanks!
MetadataReader can be given corrupt or garbage data, and we need to handle it gracefully. When reading metadata, the full size to read is calculated from partial data. When we're given bad data, these calculated sizes can be enormous, up to 4GB. Trying to read that much data can exhaust the address space, which leads to unpleasantness.
To avoid this, set a limit of 1MB on metadata sizes and fail early when the computed size is larger. No real-world metadata should ever be that large.
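As a rough illustration, the check amounts to the following sketch. The names and the `readBytes` signature here are assumptions for exposition, not the actual MetadataReader internals:

```cpp
// A minimal sketch of the early size check; illustrative names, not the
// real MetadataReader code.
#include <cstdint>
#include <memory>

// The cap from this patch: no real-world metadata should approach 1MB.
constexpr uint64_t MaxMetadataSize = 1024 * 1024;

// `computedSize` is derived from partial, possibly corrupt data, so it can
// be absurdly large (up to 4GB). Bail out before attempting the read and
// exhausting the address space.
template <typename Reader>
std::unique_ptr<const uint8_t[]> readMetadata(Reader &reader,
                                              uint64_t address,
                                              uint64_t computedSize) {
  if (computedSize > MaxMetadataSize)
    return nullptr; // treat oversized metadata as a read failure
  return reader.readBytes(address, computedSize);
}
```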
We also switch these potentially large reads to the readBytes variant that returns a unique_ptr, rather than allocating a buffer and reading into it. Our clients typically implement that variant as the primitive, so this avoids an unnecessary extra data copy and extra address space usage for them. Clients that implement reading into a provided buffer as the primitive should see the same performance as before.
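To show where the extra copy comes from, here is a hedged sketch of the two reader shapes, using an in-memory stand-in for the inferior process; the signatures are illustrative and not the exact MemoryReader interface:

```cpp
// Sketch of the two reader primitives described above; illustrative
// signatures, not the real MemoryReader API.
#include <cstdint>
#include <cstring>
#include <memory>
#include <vector>

struct InMemoryReader {
  std::vector<uint8_t> storage; // stand-in for the remote address space

  // Owned-buffer variant: when this is the primitive, a caller that wants
  // ownership of the bytes pays no extra copy.
  std::unique_ptr<uint8_t[]> readBytes(uint64_t address, uint64_t size) {
    if (size > storage.size() || address > storage.size() - size)
      return nullptr; // out-of-range read
    auto buffer = std::make_unique<uint8_t[]>(size);
    std::memcpy(buffer.get(), storage.data() + address, size);
    return buffer;
  }

  // Read-into-caller-buffer variant. When built on top of the owned-buffer
  // primitive, it costs an extra allocation plus a second copy, which is
  // the overhead this patch avoids for potentially large metadata reads.
  bool readBytes(uint64_t address, uint8_t *dest, uint64_t size) {
    auto owned = readBytes(address, size);
    if (!owned)
      return false;
    std::memcpy(dest, owned.get(), size); // the copy we want to skip
    return true;
  }
};
```

Either shape works; the point is just that the copy-free direction depends on which variant a given client implements as the primitive.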
rdar://78621784