Use tasks to manipulate streams #1270
base: main
Conversation
Helps with #1246. The main problem with this PR (which is why I marked it WIP) is that the stream itself is created in parallel and I don't see a good way to avoid that. If we had low-level streams this would be different.
Is the stream created in parallel? It's created in the first portion of HTTP-network fetch, but maybe HTTP-network fetch itself was called in parallel?
Another architecture which you could explore is to do everything in the main thread except for the "wait for bytes or finished signal from the network" step, and then queue a task whenever you get bytes (regardless of whether it's bytes or a finished signal). I.e., spend the minimum amount of time possible in parallel.
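A toy sketch of that alternate model, with loud assumptions: the array-based `taskQueue` and `queueATask` stand in for the HTML event loop's "queue a task", and `inParallelNetworkRead` stands in for the single "wait for bytes or finished signal" step. Nothing here is the spec's actual machinery; a real engine would run the read on another thread.

```javascript
// Toy model of "spend the minimum amount of time possible in parallel".
// Assumption: an array-based task queue stands in for the HTML event loop.
const taskQueue = [];
const queueATask = (cb) => taskQueue.push(cb);

// The only "in parallel" step: wait for bytes or the finished signal,
// then immediately queue a task back to the main thread.
function inParallelNetworkRead(networkChunks, onBytes, onFinished) {
  for (const chunk of networkChunks) {
    queueATask(() => onBytes(chunk)); // bytes cross back via a task
  }
  queueATask(onFinished); // the finished signal crosses back the same way
}

// Main-thread side: all stream manipulation happens inside queued tasks.
function runExample() {
  const received = [];
  let done = false;
  inParallelNetworkRead(
    [new Uint8Array([1, 2]), new Uint8Array([3])],
    (bytes) => received.push(...bytes),
    () => { done = true; }
  );
  while (taskQueue.length > 0) taskQueue.shift()(); // drain the event loop
  return { received, done };
}
```

In this model the main thread never blocks on the network; it only ever observes bytes and the finished signal from inside tasks.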
<a lt=terminated for=fetch>terminate</a> the ongoing fetch.
<ol>
<li><p><a for=ReadableStream>Enqueue</a> a {{Uint8Array}} wrapping an {{ArrayBuffer}}
containing <var>bytes</var> into <var>stream</var>.
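For illustration only, the enqueue step above could be written as the following helper. `enqueueBytes` and the controller shape are assumptions for the sketch, not spec text:

```javascript
// Hypothetical helper mirroring the step above: build "a Uint8Array
// wrapping an ArrayBuffer containing bytes", then enqueue it.
function enqueueBytes(controller, bytes) {
  const buffer = new ArrayBuffer(bytes.length); // ArrayBuffer containing bytes
  const view = new Uint8Array(buffer);          // the wrapping Uint8Array
  view.set(bytes);
  controller.enqueue(view); // enqueue into the stream via its controller
}
```

In a real implementation the controller would be the stream's ReadableStreamDefaultController; the open question in this thread is from which thread, and from which task, this call happens.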
FWIW this can be made rigorous these days with https://heycam.github.io/webidl/#arraybufferview-create ([=ArrayBufferView/new=] {{Uint8Array}} given |bytes| in some realm).
<li><p>Otherwise, if the bytes transmission for <var>response</var>'s message body is done
normally and <var>stream</var> is <a for=ReadableStream>readable</a>, then
<a for=ReadableStream>close</a> <var>stream</var>, <a for=/>finalize response</a> for
<var>fetchParams</var> and <var>response</var>.
Closing should also be done in a task, right?
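If so, the queued variant might look like the following rough sketch. Here `queueATask` models "queue a task", and `finalizeResponse` is a hypothetical stand-in for the "finalize response" step; neither is spec machinery:

```javascript
// Sketch: perform the close step inside a queued task rather than directly
// in parallel. queueATask models "queue a task"; finalizeResponse is a
// hypothetical stand-in for "finalize response".
function queueCloseTask(queueATask, controller, finalizeResponse) {
  queueATask(() => {
    controller.close();  // close the stream on the main thread...
    finalizeResponse();  // ...then finalize response for fetchParams/response
  });
}
```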
Yeah, after main fetch everything is in parallel. I think it really shows that JavaScript streams are just not that suitable for this. And even if we used an implementation-defined realm we'd still have to do everything in the same in-parallel thread somehow, which we don't really have a mechanism for.
Ok. What do you think of my alternate model suggestion to address that?
I'm not sure I understand it. How could we stay in the main thread?
Only go in parallel for the steps that actually receive bytes from the network, since those are the only steps that need to block on some external action. Then immediately queue a task back to the main thread after you're done getting those bytes.
That would mean you'd always have to invoke fetch from an event loop thread, right? I'm not sure that works for Background Fetch. And it would also require restructuring other existing callers, though that is needed either way to be fair. And it would mean you'd have to queue a task to resume the fetch algorithm (to deal with redirects and such), which I guess could work, but it seems like a pretty large undertaking.
Yeah, it's definitely a large undertaking. I agree it doesn't work for background fetch; you'd need some sort of special casing for that. (Or a fake event loop, eugh.) It might be best to take a step back and ask how implementations handle this. Maybe we could model that more closely. Presumably they create any ReadableStream objects only (?) for fetch() calls or service worker Request/Response construction, on the main thread. And then queue tasks to manipulate them as appropriate.