Convert map_async from being async to being callback based #2698
Conversation
This one is a review party, as we're all stakeholders here :)
I think this is a great idea and definitely makes the async control flow around `poll` a lot more explicit. That's really important, because the control flow around buffer mapping regularly causes confusion, especially when trying to wait for one of the futures to resolve inside an existing event loop.
@jimblandy already mentioned it on Matrix but it might be nice if we could consistently use callbacks or futures across the API. Using callbacks everywhere could be interesting because it would probably allow a lot of projects to avoid async/executors entirely (i.e. assuming they don't use futures anywhere besides wgpu).
The biggest downsides to me are that it feels like we'd be another step removed from the web API, so it's slightly harder to cross-reference JS examples/tutorials, and that `poll` is still required for now (i.e. unless we address the async reactor issue somehow).
Another minor downside is that it seems like a lot of the Rust ecosystem is moving towards futures (from callback-based APIs), so moving towards callbacks might be a little surprising to people.
Overall I slightly prefer going ahead with this approach and addressing the downsides however we can (e.g. great documentation explaining how it maps to the equivalent web API calls, still considering ways to eliminate `poll`, etc.).
> It is a non-trivial task to poll a future once, because you need to set up all the noop waker stuff to be able to call `poll`.
Agreed, the current indirection through wakers feels a bit heavy. It's really nice to be able to `await` to wait for mapped buffers in general though, so using channels to accomplish approximately the same thing seems reasonable.
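For illustration, here is roughly what "poll it once yourself" involves today (a minimal sketch; the `noop_raw_waker` and `poll_once` helpers are illustrative, not part of wgpu):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing; it only exists to satisfy `Future::poll`'s signature.
fn noop_raw_waker() -> RawWaker {
    unsafe fn no_op(_: *const ()) {}
    unsafe fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

/// Polls `fut` exactly once, returning its output if it is already ready.
fn poll_once<T>(fut: &mut Pin<Box<dyn Future<Output = T>>>) -> Option<T> {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(value) => Some(value),
        Poll::Pending => None,
    }
}
```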
> These functions being async implies, because of how the async ecosystem generally works, that there is a reactor somewhere to make the future resolve. This is not the case in wgpu, and making it a callback makes it clearer that this is called by other code.
I was hoping we'd eventually have some kind of reactor/integration to drive polling, so we could remove `poll` from the majority of user-code (except when really granular control is necessary). I like that approach because most people wouldn't have to worry about this difference between native/web (even if automatic polling had some kind of opt-out for the overhead of a thread or similar).
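To make that concrete, the reactor could be as simple as a dedicated thread (a hypothetical sketch, not something wgpu provides; `spawn_poll_thread` is an illustrative name and it assumes the 0.13-era `Device::poll(Maintain::Poll)` signature):

```rust
use std::sync::Arc;
use std::time::Duration;

// Hypothetical helper: keep calling `Device::poll` in the background so mapping
// callbacks fire without user code driving them. The opt-out mentioned above
// would simply be "don't spawn this thread".
fn spawn_poll_thread(device: Arc<wgpu::Device>) {
    std::thread::spawn(move || loop {
        // Process completed GPU work and run any pending callbacks.
        device.poll(wgpu::Maintain::Poll);
        std::thread::sleep(Duration::from_millis(1));
    });
}
```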
> The underlying APIs for both wgpu-core and wasm are callback based, so we just expose this directly without getting some fairly complicated infrastructure involved.
The promise<->future mapping seems pretty common in Rust code working with web APIs that return `Promise`s. Mapping promises to Rust callbacks is probably not typical, but might be appropriate here.
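For reference, the common promise-to-future direction looks like this on the web (a minimal sketch using `wasm_bindgen_futures`; the function name is illustrative):

```rust
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::JsFuture;

// Wrap a js_sys::Promise so Rust code can `.await` it.
async fn await_promise(promise: js_sys::Promise) -> Result<JsValue, JsValue> {
    JsFuture::from(promise).await
}
```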
I love this change. It's such a relief to see the Rust code able to just pass a closure naturally.
I have some concerns about the `unsafe` handling.
Alright, I think I've applied all the direct code comments, thank you for the thorough review, both of you! As for the more abstract stuff: (cc @grovesNL)
I've always pushed really hard against this, as it makes it way too easy to write really terrible code without knowing what you're doing:

```rust
let (sender, receiver) = flume::bounded(1);
let mapping = buffer.slice(..).map_async(.., |_| sender.send(()));
receiver.recv().unwrap();
```

With any kind of automatic polling this will work, and we've just invented a slightly more verbose glReadBuffer. I'm kinda unhappy that the web forces the runtime to poll for you, but I get why that's necessary there. It's not that much more code to do the obviously-anti-pattern thing:

```rust
let (sender, receiver) = flume::bounded(1);
let mapping = buffer.slice(..).map_async(.., |_| sender.send(()));
device.poll(Maintain::Wait(None)); // very clearly a hard wait
receiver.recv().unwrap();
```

I don't want to stop people from committing code sins, I just want them to know which sins they are committing. There's absolutely a balance to be struck here: I don't want to make this artificially hard, but I want users to understand the consequences of the code they write, and GPU -> CPU communication has big consequences for performance if done wrong. I think the route forward is just stellar documentation. I'm planning to write a "so you want to read data from the GPU" article on the wiki that we can link from the docs once it's written.
This is an interesting question, because it's basically "how much do we want to warp our API to allow the default native-first code to work on wasm", and there are good arguments in both directions, but that's probably outside the scope of this PR.
Yeah definitely, I can appreciate wanting to make the performance trade-off more explicit. I think we could achieve that through documentation and examples instead of requiring extra function calls, though. People get confused about when/how to call `poll` as it is. My concern is mostly that […]. Either way, both approaches should work fine with callbacks instead of futures.
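For example, getting a future back from the callback-based API only takes a small shim (a sketch assuming `futures::channel::oneshot` and that the callback receives a `Result<(), BufferAsyncError>`; the function name and hard-wait are illustrative):

```rust
use futures::channel::oneshot;

// Wrap the callback-based `map_async` back into something awaitable.
async fn map_read(
    device: &wgpu::Device,
    buffer: &wgpu::Buffer,
) -> Result<(), wgpu::BufferAsyncError> {
    let (tx, rx) = oneshot::channel();
    buffer.slice(..).map_async(wgpu::MapMode::Read, move |result| {
        // Forward the mapping result to whoever is awaiting.
        let _ = tx.send(result);
    });
    // Something still has to drive the device on native; here we hard-wait.
    device.poll(wgpu::Maintain::Wait);
    rx.await.expect("map_async callback was dropped without running")
}
```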
Minor changes are needed to the `mapAsync` implementation due to: gfx-rs/wgpu#2698 Differential Revision: https://phabricator.services.mozilla.com/D147805
Connections
This is vaguely related to all sorts of polling issues.
Description
This removes the async from the `map_async` and (future) `on_submitted_work_done` APIs. We removed this for a couple of reasons.
This is the first in a couple PRs improving our async processing and backpressure story.
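For illustration, roughly how calling code changes (a hedged sketch: `read_old` targets the previous future-returning signature, `read_new` the new callback-taking one; function names and error handling are illustrative):

```rust
// Old shape (before this PR): `map_async` returned a future to await.
async fn read_old(buffer: &wgpu::Buffer) -> Result<(), wgpu::BufferAsyncError> {
    buffer.slice(..).map_async(wgpu::MapMode::Read).await
}

// New shape: `map_async` takes a callback that runs once the mapping resolves
// (during `Device::poll` on native, or driven by the browser on the web).
fn read_new(buffer: &wgpu::Buffer) {
    buffer.slice(..).map_async(wgpu::MapMode::Read, |result| {
        if let Err(e) = result {
            eprintln!("buffer mapping failed: {e}");
        }
    });
}
```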
Testing
Ran test cases.