Block RNGs: remove unaligned memory cast #783
Conversation
These specialisations relied on casting a `u8` byte slice to a `u32` or `u64` slice, which is UB due to alignment requirements.
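Roughly, the removed specialisations followed this pattern (a simplified sketch for illustration, not the code actually removed by this PR):

```rust
// Simplified sketch (assumed, not the actual rand code) of the removed
// pattern: reinterpreting a byte slice as a u32 slice ignores alignment.
fn fill_bytes_via_cast(results: &[u32], dest: &mut [u8]) {
    let words = dest.len() / 4;
    // UB whenever `dest` is not 4-byte aligned: `u32` requires 4-byte
    // alignment, but a `&mut [u8]` may start at any address.
    let dest_u32: &mut [u32] = unsafe {
        core::slice::from_raw_parts_mut(dest.as_mut_ptr() as *mut u32, words)
    };
    for (out, &word) in dest_u32.iter_mut().zip(results.iter()) {
        *out = word.to_le(); // an unaligned write masquerading as an aligned one
    }
}
```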
It's not worth it to use …
This isn't about unaligned reads; it's about unaligned writes masquerading as aligned ones (because of the cast to …).
Finding the aligned subset of …

I'll leave this until tomorrow for any further review, then we can merge the PRs.
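For reference, the standard library can split a byte slice into its aligned middle and unaligned head/tail; a sketch of what a run-time alignment check could look like (not something this PR does):

```rust
// Sketch only: finding the aligned subset of `dest` via the standard
// `align_to_mut`. The PR deliberately avoids this extra complexity and
// copies through a byte buffer instead.
fn split_into_aligned(dest: &mut [u8]) -> usize {
    // Safety: any initialized byte pattern is a valid u32, and `align_to_mut`
    // guarantees the middle slice is correctly aligned for u32.
    let (head, middle, tail) = unsafe { dest.align_to_mut::<u32>() };
    // `middle` could be filled word-by-word; `head` and `tail` would still
    // need a byte-wise fallback path.
    head.len() + middle.len() * 4 + tail.len()
}
```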
Maybe we should add an API that requires aligned slices?
As in …

What I don't understand is why everyone keeps suggesting complicated ways to keep this (very) small, highly specific optimisation.
It's not worth complicating the trait over this. lol
Would it be possible to backport this bugfix to …? But I can also understand if this is too much work. I just wanted to ask ^_^
Sure, I guess it's possible.
Fix #779. Review please @RalfJung.
The point of the removed specialisations was to avoid one copy. Since the output type may not have the correct alignment and we wish to copy bytes in the same order, we have no choice but to use a buffer anyway. (We could complicate things by checking the alignment at run-time, but I don't think it's worthwhile given the very small performance cost of the extra copy.)
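The retained buffer-based path looks roughly like this (names are illustrative, not the actual rand internals):

```rust
// Sketch of the buffer-based approach: each u32 word is serialized into a
// small byte array and copied out, so `dest` needs no particular alignment.
fn fill_via_u32_words(results: &[u32], dest: &mut [u8]) {
    for (chunk, &word) in dest.chunks_mut(4).zip(results.iter()) {
        let bytes = word.to_le_bytes();               // the extra copy
        chunk.copy_from_slice(&bytes[..chunk.len()]); // handles a short tail
    }
}
```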
Benchmarking with a 10*1024-byte buffer does show a small impact:
The second commit cleans up some warnings in the benches.