
Defer future execution to the next event loop tick #954

Merged
alexcrichton merged 1 commit into rustwasm:master on Oct 23, 2018

Conversation

alexcrichton (Contributor)

Previously whenever a future readiness notification came in we would
immediately start polling a future. This ends up having two downsides,
however:

  • First, the stack depth runs a risk of getting blown. There's no
    recursion limit that defers execution to a later tick, which means
    that if futures are always immediately ready we'll keep making the
    stack deeper.

  • Second, and more worrisome in the near term, apparently future
    adapters in the `futures` crate (namely the unsync oneshot channel)
    don't actually work if you poll immediately on readiness. This may
    or may not be a bug in the `futures` crate, but it's good to fix it
    here anyway.

As a result whenever a future is ready to get polled again we defer its
polling to the next turn of the event loop. This should ensure that the
current call stack is always drained and we're effectively enqueueing
the future to be polled in the near future.
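The stack-depth hazard in the first bullet can be sketched in plain JavaScript (invented names; the PR itself changes wasm-bindgen's Rust executor, not JS):

```javascript
// Sketch only, with invented names. An always-ready future whose wake
// notification polls again immediately adds one stack frame per poll,
// so a long-running ready future eventually overflows the stack.
function notifyAndPollImmediately(pollsLeft) {
  if (pollsLeft === 0) return "done";
  // Ready again -> poll again right away: recursion, deeper stack.
  return notifyAndPollImmediately(pollsLeft - 1);
}

// Deferring each poll to the next event-loop turn drains the current
// call stack between polls, so the stack depth stays constant.
function notifyAndPollDeferred(pollsLeft, onDone) {
  if (pollsLeft === 0) return onDone("done");
  setTimeout(() => notifyAndPollDeferred(pollsLeft - 1, onDone), 0);
}
```

The immediate version throws `RangeError: Maximum call stack size exceeded` for a large enough poll count; the deferred version completes regardless of count, at the cost of one event-loop turn per poll.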

@fitzgen (Member) left a comment

Sorry for the review delay!

```rust
};

// Use `setTimeout` to place our execution onto the next turn of the
// event loop, enqueueing our poll operation. We don't currently
```
@fitzgen (Member) commented on this diff:
I don't think we want to use `setTimeout` -- that will incur larger-than-necessary latency.

Instead, we should do the equivalent of `Promise.resolve().then(() => executeOurStuff())`, which will execute on the micro-task queue and not bounce all the way out to the browser's main event loop.
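The latency difference is observable in a few lines of Node-runnable JavaScript (a sketch, not wasm-bindgen code): a promise callback scheduled *after* a zero-delay `setTimeout` still runs first, because the microtask queue drains before control returns to the event loop.

```javascript
// Sketch: microtasks (promise callbacks) drain before the event loop
// runs the next macrotask, even a setTimeout(..., 0) scheduled earlier.
const order = [];

setTimeout(() => order.push("setTimeout"), 0);       // macrotask
Promise.resolve().then(() => order.push("promise")); // microtask

// By the time the setTimeout callback runs, the promise callback has
// already fired: order is ["promise", "setTimeout"].
setTimeout(() => console.log(order.join(" -> ")), 5);
```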

@Pauan (Contributor) left a comment

In stdweb we use `MutationObserver` (falling back to `Promise.prototype.then` on Node.js).

The reason we do this is because `Promise.prototype.then` allocates (to create the new `Promise`), whereas with `MutationObserver` we can schedule the next microtask tick without any allocation (well, except the allocation for the array of `MutationRecord`s, I guess).

We also handle everything in Rust as much as possible:

  1. It maintains a `VecDeque` of pending tasks.

  2. When there are multiple pending tasks, it ensures that it only schedules a single microtask tick.

  3. When the microtask tick finishes, it then loops over the pending tasks.

This ensures that no matter how many tasks are pending, it only needs to schedule one microtask tick, which made a huge difference for performance (the old implementation was similar to wasm-bindgen and spun up a new microtask tick for each task).
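A hedged JavaScript translation of that batching scheme (stdweb's real implementation is in Rust; `schedule`, `pending`, and `ticks` are invented names here, and `queueMicrotask` stands in for the zero-allocation `MutationObserver` trick):

```javascript
// Sketch of the batching idea: one queue of pending tasks, at most
// one microtask tick in flight, and the tick drains the whole queue.
const pending = [];        // plays the role of the Rust VecDeque
let tickScheduled = false; // at most one microtask tick scheduled
let ticks = 0;             // counts how many ticks actually ran

function schedule(task) {
  pending.push(task);
  if (!tickScheduled) {
    tickScheduled = true;
    queueMicrotask(() => {
      ticks += 1;
      // One tick drains every task queued so far, including tasks
      // queued while draining -- one tick per cycle, not per task.
      while (pending.length > 0) {
        pending.shift()();
      }
      tickScheduled = false;
    });
  }
}
```

Scheduling several tasks back to back still costs a single microtask tick, which is where the reduction in wasm<->JS crossings described above comes from.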

There's a few reasons for the big performance boost:

  • By keeping everything in Rust as much as possible, it can make use of Rust's great data structures and optimizations.

  • Calling between wasm and JS is slow (at least in stdweb), so this minimizes the amount of wasm<->JS marshalling (because it only schedules one microtask tick per cycle, rather than one microtask tick per task).

  • It cuts back massively on the amount of garbage that JS produced (which was causing a lot of garbage collector thrashing).

    We're talking huge differences here: with my animation demo the old executor got ~10 FPS (and generated a lot of garbage), but when I changed it to use a VecDeque it jumped up to ~60 FPS (and generated almost no garbage).

In any case, I agree that it should be using the microtask queue (`MutationObserver` or Promises).

@alexcrichton (Contributor, Author) left a comment

Oh good idea, thanks for the info @fitzgen! @Pauan I think we can eventually go towards a more complicated world like that, but I'd prefer to have a benchmark and/or example which motivates it to avoid too much premature complexity.

@alexcrichton alexcrichton merged commit c8f2f77 into rustwasm:master Oct 23, 2018
@alexcrichton alexcrichton deleted the fix-futures branch October 23, 2018 14:40