[WIP] Putting a chunk back in the readable stream queue #275
Conversation
```diff
@@ -159,6 +159,42 @@ export function IsReadableStream(x) {
   return true;
 }
 
+export function PutBackIntoReadableStream(stream, chunk) {
+  if (stream._state === 'closed') {
+    throw new TypeError('stream is closed');
```
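The hunk shown here is truncated. As a reading aid only, here is a minimal sketch of what the rest of such an operation might look like; the internal `{ chunk, size }` queue records and the exact checks are assumptions modeled on the reference implementation's enqueue path, not the actual patch:

```js
// Hypothetical sketch -- NOT the actual patch. Assumes the reference
// implementation's internal queue of { chunk, size } records used by
// the enqueue path.
export function PutBackIntoReadableStream(stream, chunk) {
  if (stream._state === 'closed') {
    throw new TypeError('stream is closed');
  }
  if (stream._state === 'errored') {
    throw stream._storedError;
  }

  // The essential difference from enqueuing: the chunk goes to the *front*
  // of the queue, so the very next read() returns it.
  stream._queue.unshift({ chunk, size: 1 });
}
```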
I'm a bit worried that not supporting this corner case could be problematic for some use case.
Maybe it happens in MSE's case, too: all the data has been read, but we have remaining data we want to put back.
I think you're right. I'm not sure what the right solution would be though :(. Maybe, after reading the last chunk from the queue, delay for a turn before deciding to close the stream? That gives you a one-turn grace period.
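For concreteness, a minimal sketch of that grace-period idea, assuming hypothetical internals (`_queue`, `_draining`, and `CloseReadableStream` are all made-up names here):

```js
// Illustrative only: delay the close decision by one turn so a consumer
// gets a chance to put a chunk back after reading the last one.
function maybeCloseAfterGracePeriod(stream) {
  Promise.resolve().then(() => {
    // Only close if the queue is still empty after the grace turn,
    // i.e. nobody put a chunk back in the meantime.
    if (stream._queue.length === 0 && stream._draining) {
      CloseReadableStream(stream); // hypothetical internal operation
    }
  });
}
```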
Hmm, but that kind of sucks, because what if you want to make the decision to put it back (or not) asynchronously? Maybe we need `.read({ preventClose: true })`?? But then how do you close it later? `.cancel()`, I guess??? Icky.
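As a usage sketch of that hypothetical option (neither `preventClose` nor `putBack` is settled API; `shouldKeep` is an app-specific stand-in):

```js
// Hypothetical option -- not part of any spec. read({ preventClose: true })
// suppresses auto-close so the consumer can decide asynchronously whether
// to put the chunk back.
async function readWithAsyncPutBack(rs, shouldKeep) {
  const chunk = rs.read({ preventClose: true });

  if (await shouldKeep(chunk)) {
    return chunk;
  }

  rs.putBack(chunk); // the operation this PR prototypes

  // The icky part: with auto-close suppressed, a consumer that is truly
  // done cannot let the stream close naturally -- apparently it would
  // have to call rs.cancel() itself.
  return undefined;
}
```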
It seems the MSE issue should be addressed by ReadableByteStream, i.e. by specifying the `size` argument based on `maxSize`?
Is there a reason we can't use the "peek" concept instead of "read and put back"?
@wanderview I was initially hoping to do that, but the MSE case, and any like it, don't work. (MSE wants to put back a portion of the chunk.) Maybe @tyoshino is right and ReadableByteStream works better for that particular case... I'm worried about cases like it, though.
@domenic Can't it "peek", see how much data it would actually consume, and then "read" just that many bytes? As far as I can tell, it's not changing the order of any bytes here. I guess this is more conducive to the ReadableByteStream concept. Alternatively, we could just force corner cases like this to be implemented as a wrapper. The wrapper doesn't put the data back on the underlying source stream, but locally buffers the remaining data. It then consults this local buffer first on its next pass through its algorithm.
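A minimal sketch of that wrapper idea, assuming a reader-style `read()` that returns a chunk synchronously (the names and shape are illustrative):

```js
// Illustrative wrapper: keeps put-back data in a local buffer instead of
// pushing it back into the underlying stream's queue.
class PutBackWrapper {
  constructor(stream) {
    this._stream = stream;
    this._buffer = []; // chunks (or partial chunks) we put back
  }

  read() {
    // Consult the local buffer first on each pass through the algorithm.
    if (this._buffer.length > 0) {
      return this._buffer.shift();
    }
    return this._stream.read();
  }

  putBack(chunk) {
    // Unconsumed (possibly partial) data goes to the front of the buffer.
    this._buffer.unshift(chunk);
  }
}
```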
Or, for the chunk-oriented ReadableStream, offer "peek", and the MSE consumer then keeps state about "skip this many bytes in the next chunk" to simulate the put-back.
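A sketch of that peek-plus-skip-state variant for byte chunks (`peek()` is hypothetical; `appendToSourceBuffer` stands in for the MSE-style consumer):

```js
// Hypothetical peek()-based simulation of put-back for byte chunks: the
// consumer remembers how many bytes of the front chunk it already used.
let skipBytes = 0;

function consumeSome(rs, appendToSourceBuffer) {
  const chunk = rs.peek();                       // hypothetical: look, don't dequeue
  const view = new Uint8Array(chunk, skipBytes); // skip already-consumed bytes
  const used = appendToSourceBuffer(view);       // returns how many bytes it accepted

  if (skipBytes + used === chunk.byteLength) {
    rs.read();         // the whole chunk is consumed; actually dequeue it
    skipBytes = 0;
  } else {
    skipBytes += used; // leave it queued; skip this many bytes next time
  }
}
```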
For RBSs that are not ArrayBuffer-queue-backed, we could add an attribute, say `readableAmount`.

@wanderview If the ReadableByteStream is queue-backed (and ArrayBuffers are stored in the queue) like ReadableStream, then we want to look at the queued ArrayBuffers as-is rather than reading into an ArrayBuffer we brought with us. Maybe this is what motivated you to add … The benefits of … (1) can be realized even by … For non-ArrayBuffer-queue-backed RBSs, …

In my old W3C Streams API spec, I allowed users to control precisely how many bytes to pull, rather than replenishing the quota for all the bytes in the output of … I guess the addition of …

One more point to consider about …
I'm not insisting on peek(). I just think it might be cleaner than trying to implement an `unshift()`. I think I agree with your statement that a way to read a precise number of bytes (readInto + amountReadable) is sufficient for ReadableByteStream. Most byte-stream algorithms will work just fine with this, since it's how BSD socket streams, etc., work today. I just dislike the …
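A sketch of that read-a-precise-number-of-bytes pattern, using the `readableAmount` getter (called `amountReadable` above) and a `readInto(buffer, offset, size)`-style method; both signatures are assumptions here, not settled API:

```js
// Assumed API shape: readableAmount is how many bytes are available right
// now; readInto(buffer, offset, size) reads exactly `size` bytes into it.
function readExactly(rbs, maxSize) {
  const size = Math.min(rbs.readableAmount, maxSize);
  const buffer = new ArrayBuffer(size);
  rbs.readInto(buffer, 0, size);
  // Bytes beyond maxSize were never read in the first place, so there is
  // nothing to "put back".
  return new Uint8Array(buffer);
}
```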
So what about removing our version of `unshift()`? Of course, we could also survey npm packages to see how often their stream `unshift()` is used.
Yeah. Given that unshift() doesn't even work for the MSE case, I think it's best to drop it. I'm not even sure we need peek(). We don't really have a use case yet---everything is better handled by ReadableByteStream.
On the other hand, this is a good point. And I still haven't even checked how Node's unshift() interacts with Node streams closing. So I should do a quick survey of this kind of thing.
Created PR #279 to add a readableAmount getter.
For the record, Node gives you one turn after calling close() in which you can resurrect the stream with an unshift(). Closing, since this is probably a dead end.
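For contrast, the Node behavior being described looks roughly like this (a from-memory sketch of Node's semantics, not a tested claim):

```js
const { Readable } = require('stream');

const r = new Readable({ read() {} });
r.push(null); // signal EOF -- the stream will close...

// ...but within the same turn, per the behavior described above, unshift()
// can still resurrect it by putting data back at the front of the buffer.
r.unshift(Buffer.from('leftover'));

r.on('data', c => console.log('got', c.toString()));
```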
See #3, and especially the real-world use case in the MSE spec, step 9.
This doesn't contain any spec updates or examples yet, just a prototype in the reference implementation---including some fairly exhaustive tests. Suggestions for more tests welcome!
Thoughts:

- …`read()` instead of `dequeue()` anyway). One idea that I came up with and am starting to like is `rs.unRead(chunk)`. (Or should it be `unread`, no capital R?) I might go with that if nobody objects or comes up with something better (usage sketch after this list).
- Unlike `enqueue`, right now `putBack` does not return any backpressure-indicating value. We could easily add it, but I am not sure it should be the consumer's responsibility to worry about whether they're putting back too much data? `putBack` is supposed to be a pretty rare, out-of-band occurrence, and not part of the normal backpressure negotiation protocol.
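To make the proposed shape concrete, a usage sketch with the `unRead` naming floated above (nothing here is final; `canConsume` is an app-specific stand-in):

```js
// Sketch of the proposed API, using the unRead() naming floated above.
const chunk = rs.read();
const bytesConsumed = canConsume(chunk); // e.g. MSE's appendBuffer step

if (bytesConsumed < chunk.byteLength) {
  const leftover = chunk.slice(bytesConsumed);
  rs.unRead(leftover); // front of the queue; the next read() returns it
                       // before any normally enqueued chunks
}
```

Note that, per the second bullet above, `unRead` deliberately returns no backpressure-indicating value.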