[BEAM] The meaning of Send and Sync in the context of interrupt handlers #116
Conversation
**Claim**: this program is well-defined / sound if and only if `Type` implements the `Sync` trait.
This LGTM.
- Can these programs be misoptimized by the compiler?
- Should we redefine `Send` and `Sync` (as documented in doc.rust-lang.org/std) in terms of preemption?
I don't see why this would be necessary. Interrupts are just a concurrency mechanism, and whether they are executed in the same thread or in a different one matters for the exact synchronization primitive that you need to avoid data races (e.g. atomics vs `compiler_fence`, `Arc` vs some compiler-fenced `Rc` variant, etc.), but I don't think it matters for `Sync` or `Send`.
For instance, consider: `main` starts on thread0, thread0 gets paused, thread1 executes interrupt0, thread1 gets paused, thread2 executes interrupt1, ..., thread1 terminates, thread0 is resumed. Whether this is how execution happens (using N concurrent threads), or whether it happens in a single thread, does not matter AFAICT for how the code running on each "thread" will be optimized. If accesses are not synchronized, the optimizer will optimize under the assumption that data races cannot happen. If accesses are synchronized (e.g. using `compiler_fence`), the optimizer will optimize under the assumption that they are properly synchronized.
Or in other words, we could define that interrupts are executed in a different "thread of execution", and just note that this thread happens to be scheduled in the same OS thread on all currently supported platforms, such that everything that applies to threads would apply to interrupts as well.
@RalfJung have you thought about how to model interrupts in miri? (e.g. under the assumption that we could make them deterministic, say via a CLI option that deterministically raises a signal at some fixed point in the program execution?)
No I have not.^^
I also agree with @gnzlbg that it is probably easier to define preempted interrupts in terms of threads than vice versa. So all interrupts with the same priority would run in the same thread, and as such do not have to worry about `Send`/`Sync` between themselves. Interrupts are the niche case here, so if we can avoid it I'd rather not fit the definition to that -- fitting it to the common case is much more likely to be understood correctly by crate authors.
Also note that once you have multiple cores, we have true parallelism and not just preemption to deal with.
That said -- what kind of formulation in terms of preemption did you have in mind?
Closing due to inactivity, see #169 for tracking.
Please read #111 first, if you haven't already.
Rendered discussion document