The current behavior for a task on stack exhaustion is to attempt to unwind. This "works" because the task allocates itself 10k more stack in order to begin unwinding, and then prays that destructors don't re-overflow the stack.
Right now, re-overflowing aborts the runtime, but this is not the best behavior. There are a few routes we could take:
- We have "unlimited stacks" in theory (good ol' 64-bit address spaces), so perhaps just allocating an extra megabyte for this use case would be "enough".
- Continue to fail everything.
- "Taint" the current task on second overflow, immediately context-switch away, and then leak the entire task (no destructors are run).
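The first option leans on the fact that reserving a large stack on a 64-bit system is cheap, since pages are only committed when touched. A minimal sketch of that idea using today's `std::thread::Builder::stack_size` (not the old runtime's internals, just an illustration of the principle):

```rust
use std::thread;

fn main() {
    // Reserve a generous 8 MiB stack up front; on 64-bit this costs
    // address space, not physical memory, until pages are touched.
    let handle = thread::Builder::new()
        .stack_size(8 * 1024 * 1024)
        .spawn(|| {
            // Recursion deep enough to be risky on a tiny stack,
            // but comfortably within the reserved 8 MiB.
            fn depth(n: u32) -> u32 {
                if n == 0 { 0 } else { 1 + depth(n - 1) }
            }
            depth(10_000)
        })
        .expect("failed to spawn thread");

    assert_eq!(handle.join().unwrap(), 10_000);
}
```

The same reasoning applies to the unwinding reserve: over-provisioning the headroom makes a destructor re-overflow far less likely without paying for the memory unless it is actually used.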
… r=xFrednet
Extend `needless_borrowed_reference` to structs and tuples, ignore _
changelog: [`needless_borrowed_reference`]: Lint struct and tuple patterns, and patterns containing `_`
Now lints patterns like:
```rust
&(ref a, ref b)
&Tuple(ref a, ref b)
&Struct { ref a, ref b }
&(ref a, _)
```
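The suggested fix in each case is to drop the outer `&` and the `ref` bindings, since match ergonomics (default binding modes) produce the same reference bindings from the simpler pattern. A small sketch of the equivalence (this is illustrative code, not taken from the PR):

```rust
fn main() {
    let pair = (1, 2);
    let r = &pair;

    // Pattern the lint fires on: explicit `&` plus `ref` bindings.
    let &(ref a, ref b) = r;

    // Equivalent simpler pattern: matching a `&(_, _)` with `(c, d)`
    // binds `c` and `d` as `&i32` via default binding modes.
    let (c, d) = r;

    assert_eq!((a, b), (c, d));
}
```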