Executing futures in wasm #1126
I'm not sure it's actually possible to execute futures in wasm without the support of JS? Executing futures involves some degree of blocking but JS/wasm can't block right now I think? Do you have a particular API in mind though for this? |
I'm playing with creating an indexeddb library. Eventually, I'd like to make it integrate into a virtual DOM library so an action like a click is processed into an immediate DOM update (loading), and a delayed DOM update (once the promises have resolved).
You should be able to get this behavior by specifying your state, and functions describing how the state changes when the future is ready or rejected. This applies equally to ... | I'm just struggling to get my head around how JS and wasm interact here. Do we need an executor in wasm, using something like ...? I always find the lack of determinism in JS a bit frustrating (w.r.t. the order that things will happen in), and I feel like futures would help me get a handle on it, but the big picture is just eluding me. |
I'm thinking I probably have to pass the function I want to run into the promise, so that JS knows what code to run. Maybe this means I can't use futures (at least not the standard rust ones). |
An example of the code I'd like to be able to write: (pseudo code) #[derive(Default, Debug, ...)]
struct State {
name: Option<String>,
age: Option<u32>,
loading: bool,
}
enum Action {
LoadData,
LoadFailed { reason: String },
LoadSucceeded {
name: String,
age: u32
}
}
fn merge(action: Action, state: &mut State) {
match action {
Action::LoadData => {
state.loading = true;
}
Action::LoadFailed { reason } => {
eprintln!("loading failed: {}", reason);
}
Action::LoadSucceeded { name, age } => {
state.name = Some(name);
state.age = Some(age);
state.loading = false;
}
}
}
fn on_click() {
state.merge(Action::LoadData);
indexeddb::open("my_db", 1)
.then(|db| db.start_transaction())
.then(|(db, trans)| db.object_store("people"))
.then(|store| store.get(0))
.then(|record| {
state.merge(Action::LoadSucceeded {
name: record.name,
age: record.age
});
}); // todo pass this future somewhere so it gets run.
} |
The way that this is handled in stdweb is that it provides a spawn_local function: spawn_local(some_rust_future); You can then write your code like this: fn on_click() {
state.merge(Action::LoadData);
spawn_local(indexeddb::open("my_db", 1)
.then(|db| db.start_transaction())
.then(|(db, trans)| db.object_store("people"))
.then(|store| store.get(0))
.then(|record| {
state.merge(Action::LoadSucceeded {
name: record.name,
age: record.age
});
}));
} Or even easier with async/await: fn on_click() {
state.merge(Action::LoadData);
spawn_local(async {
let db = await!(indexeddb::open("my_db", 1))?;
let trans = await!(db.start_transaction())?;
let store = await!(db.object_store("people"))?;
let record = await!(store.get(0))?;
state.merge(Action::LoadSucceeded {
name: record.name,
age: record.age
});
});
} Internally it schedules the Future onto the JS microtask queue (using Promises). The same sort of function should be implementable in wasm-bindgen as well. |
@Pauan do you have to be running an event loop for that to work? |
@derekdreery In asynchronous code, an event queue is mandatory (though it's hidden from the user). JavaScript provides two built-in event queues: macrotask and microtask (Promises always use the microtask event queue). You can read more here: https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/ So the question is whether spawning Futures directly uses the JS microtask queue, or whether it uses a Rust deque. Using the JS microtask queue is a lot simpler, but using a Rust deque is multiple orders of magnitude faster. Here is a very old stdweb Executor which doesn't use a Rust deque, it spawns Futures directly using the JS Promise microtask queue. It was written for Futures 0.1, it's quite old, probably buggy, and quite slow, but it is also simple and easy to understand, so hopefully it's educational. |
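To make that concrete, here is a minimal sketch (my own illustration, not the stdweb or wasm-bindgen code) of the core trick: scheduling a Rust callback on the JS microtask queue by attaching it to an already-resolved Promise:
use js_sys::Promise;
use wasm_bindgen::prelude::*;

fn queue_microtask<F: FnOnce() + 'static>(f: F) {
    // Wrap the Rust callback in a JS closure.
    let mut f = Some(f);
    let closure = Closure::wrap(Box::new(move |_: JsValue| {
        if let Some(f) = f.take() {
            f();
        }
    }) as Box<dyn FnMut(JsValue)>);

    // Attach it to an already-resolved Promise, so the JS engine runs it
    // on the microtask queue as soon as the current JS/wasm code returns.
    let _ = Promise::resolve(&JsValue::UNDEFINED).then(&closure);

    // Leak the closure so it stays alive until the microtask has run;
    // a real executor would manage this memory instead of leaking it.
    closure.forget();
}
An executor would then call something like queue_microtask(|| poll_the_future()) (hypothetical name) whenever a Future needs to be woken up.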
All of this event queue stuff is an implementation detail: the user just calls spawn_local(future), and the Future will then run at some unknown point in the future. I say "unknown point", but it's usually very fast (a few milliseconds at most). The "unknown" part just means you can't rely upon it running at a deterministic point in time, since it's asynchronous. |
Thanks for the microtask reading - I feel like I understand all this stuff much better now! |
I am currently writing a small HttpRequest crate and ended up creating a wrapper struct with a custom
Now the browser is doing most of the work. Usage:
|
@lcnr Running an asynchronous Future in the ... If the goal is to cancel the fetch when the Future is dropped, you can use an AbortController, something like this: pub struct Request<'a> {
url: &'a str,
init: RequestInit,
}
impl<'a> Request<'a> {
pub fn new(url: &'a str) -> Self {
Self {
url,
init: RequestInit::new(),
}
}
pub fn send(mut self) -> RequestFuture {
let controller = AbortController::new().unwrap();
let init = self.init.signal(Some(&controller.signal()));
let future = window().unwrap().fetch_with_str_and_init(self.url, &init).into();
RequestFuture {
controller,
future,
}
}
}
pub struct RequestFuture {
controller: AbortController,
future: JsFuture,
}
impl Drop for RequestFuture {
#[inline]
fn drop(&mut self) {
self.controller.abort();
}
}
impl Future for RequestFuture {
type Output = Result<JsValue, JsValue>;
#[inline]
fn poll(self: Pin<&mut Self>, waker: &LocalWaker) -> Poll<Self::Output> {
self.future.poll_unpin(waker)
}
} Untested, but it should be close to correct. |
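Hypothetical usage of that sketch (assuming the Request type above): dropping the returned future before it completes aborts the underlying fetch via the AbortController in Drop.
fn example() {
    let pending = Request::new("https://example.org/data.json").send();
    // ...decide we no longer need the response:
    drop(pending); // Drop::drop calls controller.abort(), cancelling the fetch
}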
If your goal is instead to work around the lack of a way to spawn Futures from Rust, then the correct solution is for us to add a spawn_local-style function in wasm-bindgen-futures. |
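Until such a function exists, here is a rough sketch (my own, not the eventual wasm-bindgen-futures API) of a spawn_local built on top of today's future_to_promise, in futures 0.1 style:
use futures::Future;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::future_to_promise;

pub fn spawn_local<F>(future: F)
where
    F: Future<Item = (), Error = ()> + 'static,
{
    // Convert the future into a Promise and simply discard the Promise;
    // the JS microtask queue keeps it alive until it resolves.
    let _ = future_to_promise(
        future
            .map(|()| JsValue::UNDEFINED)
            .map_err(|()| JsValue::UNDEFINED),
    );
}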
@Pauan I am not even sure what exactly I am doing. 😆 I want to make requests during which I return control of the main thread back to the browser/JavaScript and don't have to store the future anywhere. This means I am currently using a global state.
Yup.
That happens naturally as part of the Promises/Futures system, you don't need to do anything special for that. The best thing to do is to have your entry point run the top-level Future, e.g.: #[wasm_bindgen(start)]
pub fn main() {
future_to_promise(
Request::new(Method::Get, "example.org/test")
.header("Accept", "text/plain").send()
.and_then(|resp_value: JsValue| {
let resp: Response = resp_value.dyn_into().unwrap();
resp.text()
})
.and_then(|text: Promise| {
JsFuture::from(text)
})
.and_then(|body| {
println!("Response: {}", body.as_string().unwrap());
future::ok(JsValue::UNDEFINED)
})
);
} No global state needed. And then when It's even nicer with async/await: #[wasm_bindgen(start)]
pub fn main() {
future_to_promise(async {
let resp_value = await!(
Request::new(Method::Get, "example.org/test")
.header("Accept", "text/plain").send()
)?;
let resp: Response = resp_value.dyn_into().unwrap();
let body = await!(JsFuture::from(resp.text()?))?;
println!("Response: {}", body.as_string().unwrap());
Ok(JsValue::UNDEFINED)
});
} Naturally you don't want to put everything into your main function, so you can split it up into separate async fns: async fn get_text(url: &str) -> Result<String, JsValue> {
let resp_value = await!(
Request::new(Method::Get, url).header("Accept", "text/plain").send()
)?;
let resp: Response = resp_value.dyn_into().unwrap();
let body = await!(JsFuture::from(resp.text()?))?;
Ok(body.as_string().unwrap())
}
async fn do_something() -> Result<(), JsValue> {
let body = await!(get_text("example.org/test"))?;
println!("Response: {}", body);
Ok(())
}
#[wasm_bindgen(start)]
pub fn main() {
future_to_promise(async {
await!(do_something())?;
Ok(JsValue::UNDEFINED)
});
} When you use So in the above example,
Using async/await in Rust is similar to async/await in JavaScript. |
P.S. async/await support will require #1105 to be fixed first (or I guess you can use the 0.1 to 0.3 Futures compatibility shim to make it work?). Even without async/await, my point about using future_to_promise from your entry point still stands. |
FWIW JS at the fundamental level can't block, so it's almost always queueing up callbacks to execute at some later date. If you do blocking work at the base level there's likely some callback that gets invoked when the operation is finished (either successfully or not). In that sense we can queue up callbacks to run on events, and those callbacks could drive another event queue in Rust (much like futures work today with tokio and such). Some of this may belong in the wasm-bindgen-futures crate, but otherwise much of this is largely stock futures and other crates which in theory already work. Are there still points though that want to be clarified before closing this? |
@alexcrichton what do you think about @Pauan's suggestion to add a spawn_local function to wasm-bindgen-futures? |
It may be a good idea! I'll admit though that I don't fully understand the motivation after skimming over this issue again. Could you remind me the motivation though for adding a function like that? |
(er also good to mention the context that |
@alexcrichton It's just a way to spawn a Future. So it is indeed almost identical to future_to_promise. In particular, it would have this signature: pub fn spawn_local<F>(future: F) where F: Future<Output = ()> + 'static (I'm assuming Futures 0.3; it'll be a bit different with Futures 0.1.) This makes it clear to any readers what it is doing, compared to future_to_promise. |
Ok, just wanted to confirm. That seems reasonable to me to add to wasm-bindgen-futures! |
I'll have a go at implementing the Queue and see what it looks like. |
Very naive implementation in #1148. |
Caveat emptor: I'm not a genius, this may be incorrect:
impl Future for IdbOpenDbRequest {
type Item = Db;
type Error = JsValue;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
use web_sys::IdbRequestReadyState as ReadyState;
match self.inner.ready_state() {
ReadyState::Pending => {
let success_notifier = task::current();
let error_notifier = success_notifier.clone();
// If we're not ready set up onsuccess and onerror callbacks to notify the
// executor.
let onsuccess = Closure::wrap(Box::new(move || {
success_notifier.notify();
}) as Box<FnMut()>);
self.inner
.set_onsuccess(Some(onsuccess.as_ref().unchecked_ref()));
self.onsuccess.replace(onsuccess); // drop the old closure if there was one
let onerror = Closure::wrap(Box::new(move || {
error_notifier.notify();
}) as Box<FnMut()>);
self.inner
.set_onerror(Some(onerror.as_ref().unchecked_ref()));
self.onerror.replace(onerror); // drop the old closure if there was one
Ok(Async::NotReady)
}
ReadyState::Done => match self.inner.result() {
Ok(val) => Ok(Async::Ready(Db {
inner: val.unchecked_into(),
})),
Err(_) => match self.inner.error() {
Ok(Some(e)) => Err(e.into()),
Ok(None) => unreachable!("internal error polling open db request"),
Err(e) => Err(e),
},
},
_ => panic!("unexpected ready state"),
}
}
} in my case I have a
|
Currently no, and executing Futures requires scheduling them on the JS microtask event loop, so it is necessary for it to internally use Promises (or another technique like MutationObserver). However, once queueMicrotask is standardized and widely supported, it could use that instead.
Normally you would use the oneshot channels for this. First, let's make it easier to create event listeners: use web_sys::EventTarget;
use wasm_bindgen::JsCast;
use wasm_bindgen::convert::FromWasmAbi;
pub struct EventListener<'a, A> {
node: EventTarget,
kind: &'a str,
callback: Closure<FnMut(A)>,
}
impl<'a, A> EventListener<'a, A> where A: FromWasmAbi + 'static {
#[inline]
pub fn new<F>(node: &EventTarget, kind: &'a str, f: F) -> Self where F: FnMut(A) + 'static {
let callback = Closure::wrap(Box::new(f) as Box<FnMut(A)>);
node.add_event_listener_with_callback(kind, callback.as_ref().unchecked_ref()).unwrap();
Self {
node: node.clone(),
kind,
callback,
}
}
}
impl<'a, A> Drop for EventListener<'a, A> {
#[inline]
fn drop(&mut self) {
self.node.remove_event_listener_with_callback(self.kind, self.callback.as_ref().unchecked_ref()).unwrap();
}
} Now you can use EventListener like this: use web_sys::{HtmlImageElement, UiEvent};
use futures::Poll;
use futures::sync::oneshot::{Receiver, channel};
pub struct Image {
img: Option<HtmlImageElement>,
_on_load: EventListener<'static, UiEvent>,
receiver: Receiver<HtmlImageElement>,
}
impl Image {
pub fn new(width: u32, height: u32, url: &str) -> Self {
let (sender, receiver) = channel();
let img = HtmlImageElement::new_with_width_and_height(width, height).unwrap();
img.set_src(url);
let _on_load = EventListener::new(&img, "load", {
let mut sender = Some(sender);
let img = img.clone();
move |_| {
sender.take().unwrap().send(img.clone()).unwrap();
}
});
Self { img: Some(img), _on_load, receiver }
}
}
impl Drop for Image {
#[inline]
fn drop(&mut self) {
if let Some(ref img) = self.img {
// Cancels the image download
img.set_src("");
}
}
}
impl Future for Image {
type Item = HtmlImageElement;
type Error = JsValue;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
self.receiver.poll().map(|x| {
if x.is_ready() {
// Prevents the image from being cancelled
self.img = None;
}
x
}).map_err(|_| unreachable!())
}
} However, in your case you need to send from two different callbacks, so that doesn't work. So instead, like @derekdreery mentioned, you need to manually use the task system. But rather than mucking around with it directly, we can wrap it up in a small reusable channel: use std::sync::{Arc, Mutex};
use futures::task::{Task, current};
use futures::{Async, Poll};
// TODO should use oneshot::Inner
#[derive(Debug)]
struct Inner<T, E> {
completed: bool,
value: Option<Result<T, E>>,
task: Option<Task>,
}
impl<T, E> Inner<T, E> {
#[inline]
fn new() -> Self {
Self {
completed: false,
value: None,
task: None,
}
}
}
pub fn result_channel<T, E>() -> (ResultSender<T, E>, ResultReceiver<T, E>) {
let inner = Arc::new(Mutex::new(Inner::new()));
(
ResultSender {
inner: inner.clone(),
},
ResultReceiver {
inner: inner,
},
)
}
#[derive(Debug, Clone)]
pub struct ResultSender<T, E> {
inner: Arc<Mutex<Inner<T, E>>>,
}
impl<T, E> ResultSender<T, E> {
fn send(&self, value: Result<T, E>) {
let mut lock = self.inner.lock().unwrap();
if !lock.completed {
lock.completed = true;
lock.value = Some(value);
if let Some(task) = lock.task.take() {
drop(lock);
task.notify();
}
}
}
#[inline]
pub fn ok(&self, value: T) {
self.send(Ok(value));
}
#[inline]
pub fn err(&self, value: E) {
self.send(Err(value));
}
}
#[derive(Debug)]
pub struct ResultReceiver<T, E> {
inner: Arc<Mutex<Inner<T, E>>>,
}
impl<T, E> Future for ResultReceiver<T, E> {
type Item = T;
type Error = E;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
let mut lock = self.inner.lock().unwrap();
if lock.completed {
lock.value.take().unwrap().map(Async::Ready)
} else {
lock.task = Some(current());
Ok(Async::NotReady)
}
}
} Now finally we can define the Image Future: use web_sys::{HtmlImageElement, UiEvent};
use js_sys::Error;
enum ImageState<'a> {
Initial {
width: u32,
height: u32,
url: &'a str,
},
Pending {
img: HtmlImageElement,
receiver: ResultReceiver<HtmlImageElement, JsValue>,
_on_load: EventListener<'static, UiEvent>,
_on_error: EventListener<'static, UiEvent>,
},
Complete,
}
pub struct Image<'a> {
state: ImageState<'a>,
}
impl<'a> Image<'a> {
#[inline]
pub fn new(width: u32, height: u32, url: &'a str) -> Self {
Self { state: ImageState::Initial { width, height, url } }
}
}
impl<'a> Drop for Image<'a> {
#[inline]
fn drop(&mut self) {
if let ImageState::Pending { ref img, .. } = self.state {
// Cancels the image download
img.set_src("");
}
}
}
impl<'a> Future for Image<'a> {
type Item = HtmlImageElement;
type Error = JsValue;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
match self.state {
ImageState::Initial { width, height, url } => {
let (sender, receiver) = result_channel();
let img = HtmlImageElement::new_with_width_and_height(width, height).unwrap();
img.set_src(url);
let _on_load = EventListener::new(&img, "load", {
let sender = sender.clone();
let img = img.clone();
move |_| {
sender.ok(img.clone());
}
});
let _on_error = EventListener::new(&img, "error", move |_| {
sender.err(Error::new("Failed to load image").into());
});
self.state = ImageState::Pending { img, receiver, _on_load, _on_error };
Ok(Async::NotReady)
},
ImageState::Pending { ref mut receiver, .. } => {
let output = receiver.poll();
match output {
Ok(Async::Ready(_)) | Err(_) => {
self.state = ImageState::Complete;
},
_ => {},
}
output
},
ImageState::Complete => {
panic!("Image polled after completion");
},
}
}
}
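Hypothetical usage of the Image future sketched above (futures 0.1 style), assuming a spawn_local-like helper such as stdweb's:
use futures::Future;

fn load_cat() {
    spawn_local(
        Image::new(100, 100, "https://example.org/cat.png")
            .map(|img| {
                web_sys::console::log_1(&img);
            })
            .map_err(|err| {
                web_sys::console::error_1(&err);
            }),
    );
}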
I'm not sure. The interactions between Rust panics and JS exceptions are currently pretty weird, so it may just always panic. I suggest not relying upon the panic handling, since it will almost certainly change in the future.
There are no silent errors. When you use future_to_promise, any error becomes a rejected Promise, which the browser will report. In order to run the Future using future_to_promise, its error type has to be convertible into a JsValue. If you instead use something like spawn_local, the Future can't have an error at all, so you are forced to handle errors yourself. Unlike some other languages, Rust and JS don't have silent errors (which is a wonderful thing). |
In stdweb there is an unwrap_future function. You use it like this: spawn_local(unwrap_future(some_future)); In other words, all you need to do is slap unwrap_future around your Future. So it's not hard at all to handle errors correctly, we just need a helper function like that in wasm-bindgen.
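For reference, a sketch of what such a helper could look like in futures 0.1 style (an assumption about its shape, not stdweb's actual code):
use futures::Future;
use std::fmt::Debug;

pub fn unwrap_future<F>(future: F) -> impl Future<Item = (), Error = ()>
where
    F: Future<Item = ()>,
    F::Error: Debug,
{
    // Turn an error into a panic so it is loudly reported in the console
    // (e.g. via console_error_panic_hook) instead of being silently dropped.
    future.map_err(|e| panic!("{:?}", e))
}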
My understanding is that there is one of two possibilities:
So the only way it can be silently ignored is if you intentionally ignore it by using |
I don't know the difference between a micro- and macrotask, so maybe that's just part of my misunderstanding. Isn't the current impl creating a new Promise every time? edit: a microtask seems to be a task which is added to the current queue and executed directly after the current task finishes(?)
There's some rather deep and subtle reasons for this. The Futures system is designed to be asynchronous, so when you call notify the Future is not polled immediately; instead it is scheduled to be polled later. This isn't specific to wasm, all Future Executors on all platforms must be that way (including tokio). It's a part of the Executor contract, and various Futures rely upon that contract (so they would break on non-compliant Executors). So since we are in wasm, we need some way to schedule a wakeup in the future. There are only two ways to do that: microtask and macrotask. We could use macrotasks (e.g. setTimeout), but macrotasks have fairly high latency. In addition, the browser rendering is based on macrotasks, so we might end up racing with the renderer! In other words, the browser might render the page before the Future has run. But we don't want that: we want to guarantee that the Future runs before rendering, so we can avoid the dreaded "flash of unstyled content" (and similar issues). If we use microtasks (e.g. Promises), then all those problems go away: microtasks have zero latency (they are delayed, but run immediately after the JS code), and they are guaranteed to run before rendering. This ensures the maximum performance and the minimum issues. It isn't very surprising that a mechanism specifically designed for asynchronous values (Promises) would also be a good match for Futures. There's another reason why we need asynchronous scheduling. Consider this code: let x = Arc::new(Mutex::new(0));
let y = x.clone();
spawn_local(some_future.map(move |_| {
*y.lock().unwrap() = 5;
}));
println!("{}", *x.lock().unwrap()); The question is: does it print 0 or 5? |
Promises execute long before RAF (RAF has very high latency, Promises have zero latency). The difference between microtasks and macrotasks is rather subtle, this page probably explains it best.
Yes, basically. Microtasks have priority over macrotasks, so they run first. |
Async is sometimes called cooperative multitasking. It's called this because it requires the futures to play by the rules. With threads, a bad thread will still only get its share of the CPU. With futures, we can block everything up if we want. |
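A toy illustration (futures 0.1 style) of that point: a single badly-behaved future can starve a cooperative executor, because poll never yields control back.
use futures::{Future, Poll};

struct BlockEverything;

impl Future for BlockEverything {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<(), ()> {
        // Spinning here never returns to the executor, so no other future
        // (and, in the browser, no rendering or event handling) can run.
        loop {}
    }
}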
This thread should probably be turned into a blogpost. There is loads of great information here! |
According to the Future contract, it is valid to immediately call It might still synchronously call You can read more here: |
@dakom yeah, but that's a broken future impl, it would break pretty much every executor, i guess. |
Yes, however, it won't stop other Futures from running. That is the key difference between an event loop and running Even if we used the macrotask queue, it would still livelock, which isn't much better than deadlock. Interestingly, it's possible to create a combinator which forces other Futures to run on the macrotask queue, thus converting deadlock into livelock.
Yeah, it's the same. The microtask queue behaves the same as most event loop implementations (including Rust event loops). The macrotask queue is... different. |
this could throw an exception on the js side if the rust struct is dropped? also i think, that it's not guaranteed that a edit: yeah, i deleted that comment, i thought you're storing the closure inside of the |
@dakom That's very cool, and exactly what we need to remove the |
@Pauan Today it's not only standardized but also well supported: https://caniuse.com/?search=queueMicrotask
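For reference, a minimal sketch of importing queueMicrotask with wasm-bindgen (how wasm-bindgen-futures actually schedules its work internally may differ):
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_name = queueMicrotask)]
    fn queue_microtask(callback: &js_sys::Function);
}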
The wasm-bindgen-futures crate provides for consuming promises as futures, and returning futures to JS as promises, but not executing futures in wasm. I'm currently experimenting with borrowing the strategy for passing futures to JS for my use-case.