Defer future execution to the next event loop tick #954
Conversation
Sorry for the review delay!
crates/futures/src/lib.rs (outdated):

    };

    // Use `setTimeout` to place our execution onto the next turn of the
    // event loop, enqueueing our poll operation. We don't currently …
I don't think we want to use `setTimeout` -- that will incur larger-than-necessary latency. Instead, we should do the equivalent of `Promise.resolve().then(() => executeOurStuff())`, which will execute on the microtask queue and not bounce all the way out to the browser's main event loop.
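For reference, a minimal sketch of that approach from the Rust side using `js-sys` (the `queue_microtask` helper name and its exact wiring are illustrative, not this PR's actual code):

```rust
use js_sys::Promise;
use wasm_bindgen::prelude::*;

/// Schedule `f` on the microtask queue via `Promise.resolve().then(...)`,
/// so it runs once the current call stack unwinds but before control
/// bounces back out to the browser's main event loop (unlike `setTimeout`).
pub fn queue_microtask<F: FnOnce() + 'static>(f: F) {
    let mut f = Some(f);
    let closure = Closure::wrap(Box::new(move |_: JsValue| {
        if let Some(f) = f.take() {
            f();
        }
    }) as Box<dyn FnMut(JsValue)>);

    let _ = Promise::resolve(&JsValue::UNDEFINED).then(&closure);

    // Keep the closure alive until the microtask has run. Leaking it is
    // the simplest option for a sketch; a real implementation would
    // manage the closure's lifetime more carefully.
    closure.forget();
}
```

Unlike `setTimeout`, the callback fires as soon as the current call stack unwinds, with no timer clamping in between.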
In stdweb we use `MutationObserver` (falling back to `Promise.prototype.then` on Node.js). The reason we do this is that `Promise.prototype.then` allocates (to create the new Promise), whereas with `MutationObserver` we can schedule the next microtask tick without any allocation (well, except the allocation for the array of `MutationRecord`s, I guess).
We also handle everything in Rust as much as possible:

- When there are multiple pending tasks, it ensures that it only schedules a single microtask tick.
- When the microtask tick finishes, it then loops over the pending tasks.

This ensures that no matter how many tasks are pending, it only needs to schedule one microtask tick, which made a huge difference for performance (the old implementation was similar to wasm-bindgen and spun up a new microtask tick for each task).
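A minimal sketch of that single-tick pattern (illustrative only: it reuses the hypothetical `queue_microtask` helper from the sketch above rather than a `MutationObserver`, and names like `Task`, `PENDING`, and `TICK_SCHEDULED` are not stdweb's real internals):

```rust
use std::cell::{Cell, RefCell};
use std::collections::VecDeque;

// Illustrative stand-in for a unit of work (e.g. polling one future).
type Task = Box<dyn FnOnce()>;

thread_local! {
    // All tasks waiting to run on the next microtask tick.
    static PENDING: RefCell<VecDeque<Task>> = RefCell::new(VecDeque::new());
    // Whether a tick is already scheduled; guarantees at most one
    // scheduled microtask per batch of wake-ups.
    static TICK_SCHEDULED: Cell<bool> = Cell::new(false);
}

pub fn enqueue(task: Task) {
    PENDING.with(|q| q.borrow_mut().push_back(task));

    // Only the first task of a batch pays for crossing into JS; the rest
    // just land in the Rust-side queue.
    if !TICK_SCHEDULED.with(|s| s.replace(true)) {
        // `queue_microtask` is the Promise-based helper sketched earlier.
        queue_microtask(run_tick);
    }
}

fn run_tick() {
    TICK_SCHEDULED.with(|s| s.set(false));

    // Take the batch that was pending when this tick fired; tasks that
    // enqueue more work will schedule a fresh tick through `enqueue`.
    let batch = PENDING.with(|q| std::mem::take(&mut *q.borrow_mut()));
    for task in batch {
        task();
    }
}
```

The key point is that `enqueue` only crosses into JS when no tick is pending; everything else stays in the Rust-side `VecDeque`.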
There are a few reasons for the big performance boost:

- By keeping everything in Rust as much as possible, it can make use of Rust's great data structures and optimizations.
- Calling between wasm and JS is slow (at least in stdweb), so this minimizes the amount of wasm<->JS marshalling (because it only schedules one microtask tick per cycle, rather than one microtask tick per task).
- It cuts back massively on the amount of garbage that JS produced (which was causing a lot of garbage collector thrashing).
We're talking huge differences here: with my animation demo the old executor got ~10 FPS (and generated a lot of garbage), but when I changed it to use a `VecDeque` it jumped up to ~60 FPS (and generated almost no garbage).
In any case, I agree that it should be using the microtask queue (`MutationObserver` or Promises).
Force-pushed from 1a3f6bb to a1da85a:
Previously whenever a future readiness notification came in we would immediately start polling a future. This ends up having two downsides, however:

* First, the stack depth may run a risk of getting blown. There's no recursion limit to defer execution to later, which means that if futures are always ready we'll keep making the stack deeper.
* Second, and more worrisome in the near term, apparently future adapters in the `futures` crate (namely the unsync oneshot channel) don't actually work if you immediately poll on readiness. This may or may not be a bug in the `futures` crate, but it's good to fix it here anyway.

As a result, whenever a future is ready to get polled again we defer its polling to the next turn of the event loop. This should ensure that the current call stack is always drained and we're effectively enqueueing the future to be polled in the near future.
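As a rough illustration of the stack-depth point (a self-contained toy, not the crate's executor; `poll_step`, `notify_immediate`, `notify_deferred`, and `run_pending` are hypothetical names): polling immediately on readiness recurses when a future is always ready, while deferring only records work and drains it iteratively after the current stack unwinds.

```rust
use std::cell::RefCell;
use std::collections::VecDeque;

thread_local! {
    // Work recorded by deferred notifications; drained iteratively later.
    static PENDING: RefCell<VecDeque<usize>> = RefCell::new(VecDeque::new());
}

// Stand-in for driving a future one step; returns true if it signalled
// readiness again right away (the problematic "always ready" case).
fn poll_step(id: usize) -> bool {
    id < 100_000
}

// Old behaviour: poll as soon as the notification arrives. If the future
// is immediately ready again this recurses, so the stack keeps growing.
#[allow(dead_code)]
fn notify_immediate(id: usize) {
    if poll_step(id) {
        notify_immediate(id + 1);
    }
}

// New behaviour: only record that the task needs polling; the current
// call stack always unwinds before the task is polled again.
fn notify_deferred(id: usize) {
    PENDING.with(|q| q.borrow_mut().push_back(id));
}

// Drain the queue iteratively, so stack depth stays constant no matter
// how many times tasks re-signal readiness.
fn run_pending() {
    while let Some(id) = PENDING.with(|q| q.borrow_mut().pop_front()) {
        if poll_step(id) {
            notify_deferred(id + 1);
        }
    }
}

fn main() {
    notify_deferred(0);
    run_pending(); // flat stack, even for 100_000 back-to-back wake-ups
    // notify_immediate(0) would instead recurse ~100_000 frames deep.
}
```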