
Rewrite read_u32v_be() and write_u32_be() to make sure that all memory access is properly aligned #16191


Merged: 1 commit merged into rust-lang:master on Aug 3, 2014

Conversation

@DaGenix DaGenix commented Aug 2, 2014

When I originally wrote the read_u32v_be() and write_u32_be() functions, I didn't consider the memory alignment requirements of various architectures. Unfortunately, the current implementations may result in unaligned reads and writes. This doesn't impact x86 / x86_64, but it can cause a compiler crash on ARM. This pull request rewrites those functions to make sure that all memory access is always correctly aligned.
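
For illustration, here is a minimal sketch of the kind of pointer-cast pattern that produces the unaligned accesses described above. It is not the original rustc code; the function name and signature are hypothetical, and it is written against current Rust rather than the 2014-era code in this pull request:

```rust
use std::ptr;

// Hypothetical example of the hazard: reinterpreting a byte slice as a sequence
// of u32 values. `ptr::read` requires `p` to be 4-byte aligned, and nothing
// about `src` guarantees that, so on strict-alignment targets such as ARM this
// is undefined behavior (a correct unsafe version would need
// `ptr::read_unaligned`).
fn read_u32v_be_by_cast(dst: &mut [u32], src: &[u8]) {
    assert_eq!(dst.len() * 4, src.len());
    let mut p = src.as_ptr() as *const u32;
    for out in dst.iter_mut() {
        unsafe {
            // Input is big-endian: swap to native byte order after the raw read.
            *out = u32::from_be(ptr::read(p));
            p = p.add(1);
        }
    }
}
```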

This fix is a little bit academic: due to the way that LLVM aligns the structures passed as arguments to these functions, I believe the end result is that all memory access happens to be aligned anyway. However, nothing in that code actually enforces that, at least not explicitly. The new implementations are definitely slower than the existing ones, but I don't believe these functions are all that significant to the overall performance of the compiler. I think getting rid of some unsafe code and removing a potential portability landmine justifies a very slight decrease in raw performance.
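
A minimal sketch of the byte-at-a-time approach described here, again assuming modern Rust and illustrative signatures rather than reproducing the code actually merged in this pull request:

```rust
// Assemble each u32 from individual bytes so every memory access is a plain,
// always-aligned byte access; no unsafe code or alignment assumption is needed.
fn read_u32v_be(dst: &mut [u32], src: &[u8]) {
    assert_eq!(dst.len() * 4, src.len());
    for (out, chunk) in dst.iter_mut().zip(src.chunks_exact(4)) {
        *out = (u32::from(chunk[0]) << 24)
            | (u32::from(chunk[1]) << 16)
            | (u32::from(chunk[2]) << 8)
            | u32::from(chunk[3]);
    }
}

// Split a u32 into bytes and store them one at a time, likewise making no
// assumption about the alignment of the destination buffer.
fn write_u32_be(dst: &mut [u8], value: u32) {
    assert_eq!(dst.len(), 4);
    dst[0] = (value >> 24) as u8;
    dst[1] = (value >> 16) as u8;
    dst[2] = (value >> 8) as u8;
    dst[3] = value as u8;
}
```

On current Rust the same effect can be had with u32::from_be_bytes and u32::to_be_bytes, which handle unaligned data safely; the explicit byte-wise form above simply mirrors the approach the pull request describes.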

bors added a commit that referenced this pull request Aug 3, 2014
@bors closed this on Aug 3, 2014
@bors merged commit fd69365 into rust-lang:master on Aug 3, 2014