fix(interrupts): add compiler fences to enable and disable #436
Conversation
Without this, the compiler may reorder memory accesses over the `sti` and `cli` calls.
Signed-off-by: Martin Kröning <[email protected]>
Force-pushed from 71375b1 to 659c871 (compare)
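For context, a minimal sketch of the idea behind the change (x86-64 shown; the exact `asm!` options, fence orderings, and placement here are illustrative assumptions, not necessarily the crate's verbatim code):

```rust
use core::arch::asm;
use core::sync::atomic::{compiler_fence, Ordering};

/// Enable interrupts (`sti`).
pub fn enable() {
    // Keep memory accesses of the preceding critical section from being
    // moved below the `sti`. With `nomem`, the asm itself is assumed not to
    // touch memory, so without a fence the compiler is free to reorder
    // accesses across it.
    compiler_fence(Ordering::SeqCst);
    unsafe {
        asm!("sti", options(nomem, nostack));
    }
}

/// Disable interrupts (`cli`).
pub fn disable() {
    unsafe {
        asm!("cli", options(nomem, nostack));
    }
    // Keep memory accesses of the following critical section from being
    // moved above the `cli`.
    compiler_fence(Ordering::SeqCst);
}
```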
I'm not yet sure that this is a bug: AFAIK the general consensus is that interrupt handlers are modeled as second threads (even though they execute on the same CPU thread). Under that model, sharing a `RefCell` with an interrupt handler would be questionable regardless. Can you provide an example of code where reordering with the `asm!` calls causes a problem?
My understanding of compiler fences was based on their documentation. It's unfortunate that they are not properly specified yet.
Coming from the embedded space, I was thinking that synchronization through compiler fences together with interrupt masking would be sound on the same hardware thread, which normal execution and interrupt handlers share. AFAIK, the whole embedded Rust ecosystem is built on this assumption: see the Concurrency chapter of The Embedded Rust Book. The pattern I have in mind is roughly the sketch below.
I took that wording straight from the `compiler_fence` documentation.
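To make that concrete, the caller-side pattern I mean looks roughly like this (a sketch; the `static mut` counter is made up for illustration, and `interrupts::disable`/`enable` are the functions this PR touches, imports omitted):

```rust
use core::sync::atomic::{compiler_fence, Ordering};

// Shared only between normal execution and an interrupt handler running on
// the same hardware thread; made up for illustration.
static mut TICKS: u64 = 0;

fn bump_ticks() {
    interrupts::disable();
    // Keep the access below from being hoisted above the `cli`.
    compiler_fence(Ordering::SeqCst);

    // Sound only because interrupts are masked on this hardware thread.
    unsafe { TICKS += 1 };

    // Keep the access above from being sunk below the `sti`.
    compiler_fence(Ordering::SeqCst);
    interrupts::enable();
}
```

This is essentially what this PR folds into `enable` and `disable`, so callers don't have to remember the fences themselves.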
Assuming that this may not be used to synchronize …
Related issues: I just found rust-lang/rust#30003, which describes the same issue, specifically to …
Finally, it is probably an opsem question whether using compiler fences for synchronization like this is fine. On one hand, it seems difficult to ensure the …
I am a bit lost where to track this. Is this a new opsem issue? Does it fit one that already exists? Maybe @RalfJung can help?
Leaving opsem aside for the moment, would you consider merging this even if the opsem question is unresolved? It does make …
Alternatively, it might even be necessary to introduce atomic fences instead. If we handle interrupt handlers as different threads, the main thread might unlock a spinlock and enable interrupts, and the interrupt handler may then lock the spinlock. Without an atomic fence, the unlocking may be reordered to after interrupts are enabled (see the sketch at the end of this comment).
Sorry for such a big comment, this is quite a big topic. 👀
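Here is the spinlock scenario, sketched (using the `spin` crate's `Mutex` as a stand-in for the actual lock type; handler registration and the `interrupts` import are omitted):

```rust
use core::sync::atomic::{fence, Ordering};
use spin::Mutex;

static COUNTER: Mutex<u32> = Mutex::new(0);

fn main_thread() {
    {
        let mut guard = COUNTER.lock();
        *guard += 1;
    } // The spinlock is released here.

    // If the interrupt handler counts as a *different* thread, the worry is
    // that a compiler fence alone is not enough: an atomic fence would be
    // needed so the release of the lock above cannot be reordered past
    // enabling interrupts below.
    fence(Ordering::SeqCst);
    interrupts::enable();
}

// May run as soon as interrupts are enabled and takes the same lock.
fn interrupt_handler() {
    *COUNTER.lock() += 1;
}
```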
Oops, I wasn't aware the …
Yup, this one seems close to the problem presented here.
rust-lang/rust#30003 seems like the closest match to me, but it's old and hasn't received much attention. rust-lang/unsafe-code-guidelines#444 and rust-lang/unsafe-code-guidelines#347 are also relevant, though neither of them talks about behavior around …
I agree that in practice this might improve things with how people are writing code, and even though it's not guaranteed to be a perfect solution, I don't think there are significant downsides to this either.
Another alternative would be to remove the …
No worries, it's also an important topic. I imagine this issue must have been hell to debug.
Yeah, rust-lang/unsafe-code-guidelines#347 still mostly summarizes what I know about compiler fences. Pretty much the only situation where I can make sense of them is for signal handlers, which in C++ are kind of other threads: they are other threads, except that compiler fences are sufficient for synchronization. Arguably, on a single-core CPU in kernel space with preemption disabled, interrupts are like signal handlers, so it could actually make sense to use them here. I do lack all the context to evaluate the details, though.
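For reference, the signal-handler pattern being referred to is roughly the example from the `compiler_fence` documentation (sketched here with an extra acquire fence on the reader side):

```rust
use core::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
static IS_READY: AtomicBool = AtomicBool::new(false);

fn main_thread() {
    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
    // Without this, the compiler may reorder the two relaxed stores, and the
    // handler could observe `IS_READY` set before the data is written.
    compiler_fence(Ordering::Release);
    IS_READY.store(true, Ordering::Relaxed);
}

// Runs on the same thread, interrupting `main_thread` at an arbitrary point.
fn signal_handler() {
    if IS_READY.load(Ordering::Relaxed) {
        compiler_fence(Ordering::Acquire);
        assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
    }
}
```

The question in this thread is essentially whether an interrupt handler on the same hardware thread, with interrupts otherwise masked, may be treated the same way.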
Thanks so much for clarifying!
That's precisely what I was hoping for! 🥳
That's mostly our use case. We might also have multiple cores, but each core has independent interrupts and is only expected to synchronize with those. So that should be fine then.
Yes, that sounds about right. But this is a bit outside my comfort zone, so I suggest asking on Zulip. :)
This actually bit me on AArch64, but this should apply to x86-64 as well: without this PR, the compiler may reorder memory accesses over these `asm!` calls. While `interrupts::enable` and `interrupts::disable` don't make claims about synchronization, the docs strongly imply synchronization of the calling thread with itself.
Essentially, without compiler fences, code after `interrupts::disable` may still execute while interrupts are enabled. Conversely, code before `interrupts::enable` may end up executing only after interrupts have been enabled. This also translates to `without_interrupts`: code inside the closure may actually execute with interrupts enabled.
This bug may require very specific conditions to show up. In my case (the Hermit kernel), some aggressive inlining resulted in improper access to a `RefCell` from an interrupt handler.
What do you think? :)
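For a sense of how this can bite, a contrived sketch (the `RacyCell` wrapper, the static, and the handler are made up so the example is self-contained, and the `interrupts` import is omitted; in reality it took aggressive inlining for the accesses to end up next to the `asm!`):

```rust
use core::cell::RefCell;

// Made-up minimal wrapper so the `static` compiles; real code would use a
// proper per-CPU or critical-section abstraction.
struct RacyCell<T>(RefCell<T>);
unsafe impl<T> Sync for RacyCell<T> {}

static STATE: RacyCell<u64> = RacyCell(RefCell::new(0));

fn tick() {
    interrupts::without_interrupts(|| {
        // Without compiler fences next to the underlying `cli`/`sti`, the
        // compiler may move this non-atomic access across the `asm!` once
        // everything is inlined, so it can end up running with interrupts
        // enabled. The handler below can then find the `RefCell` already
        // borrowed and panic.
        *STATE.0.borrow_mut() += 1;
    });
}

// The interrupt handler (registration omitted) borrows the same cell.
fn timer_interrupt() {
    *STATE.0.borrow_mut() += 1;
}
```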