
Implement streaming in the browser #1033

Closed

ariutta opened this issue Jul 23, 2016 · 21 comments

Comments

@ariutta

ariutta commented Jul 23, 2016

The current use of chunked transfer encoding is based on xhr.responseText. There are now more efficient methods available for Firefox (moz-chunked-text) and Chrome (ReadableByteStream). Would there be any interest in using those methods when possible?

It would be awesome if superagent supported streaming in the browser along the lines of what this library offers. I'd be willing to contribute if there's any interest.
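For context, progressive reading over xhr.responseText boils down to slicing off whatever text has arrived since the last progress event. A minimal sketch (the helper name is invented, not an existing API):

```javascript
// Sketch (assumed names): extract only the newly arrived portion of a
// progressively growing xhr.responseText on each progress event.
function makeChunkExtractor() {
  let offset = 0; // how much of responseText has already been consumed
  return function nextChunk(responseText) {
    const chunk = responseText.slice(offset);
    offset = responseText.length;
    return chunk;
  };
}

// In a browser it would be wired up roughly like this:
// const next = makeChunkExtractor();
// xhr.onprogress = () => handleChunk(next(xhr.responseText));
```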

@kornelski
Contributor

I think that would be a good addition.

We currently don't have an API to expose this functionality. The browser version doesn't support streaming at all. OTOH in Node we have .pipe() and Node's streams.

Since ReadableByteStream is the standard, I think it should be used in the browser version. Is it possible to emulate ReadableByteStream in browsers that don't support it natively yet?

I wonder how to have first-class support for Node's streams and first-class support for the Streams Standard without confusing, duplicated APIs. Do you have a proposal for the API?
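One possible emulation path: on runtimes that have the Streams Standard globals (modern browsers, Node 18+), a push-based chunk source such as XHR progress events can be wrapped in a ReadableStream. A sketch, where `source` is a hypothetical emitter standing in for an XHR wrapper:

```javascript
// Sketch: adapt a push-based chunk source (e.g. XHR progress events) into a
// WHATWG ReadableStream. `source` is a hypothetical object with
// onChunk/onEnd/onError callback slots; it is not an existing API.
function toReadableStream(source) {
  return new ReadableStream({
    start(controller) {
      source.onChunk = (chunk) => controller.enqueue(chunk);
      source.onEnd = () => controller.close();
      source.onError = (err) => controller.error(err);
    }
  });
}
```

The resulting stream is consumed with the standard reader loop (`stream.getReader()` and repeated `read()` calls), so consumers only ever see the standard interface.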

@ariutta
Author

ariutta commented Jul 25, 2016

That's a tough question. I love superagent's clean API and minimal file size. And I also support building on standards. But I'm not sure how to join the two.

Emulating/polyfilling Node streams or ReadableByteStream is definitely possible, but wouldn't this option eliminate superagent's file-size advantage?

This issue for stream-http has a comment suggesting it would be possible to create a minimal stream library to replace the Node stream dependency to reduce file size. But, yes, creating yet another stream library would pollute superagent's clean API.

@ariutta
Author

ariutta commented Jul 25, 2016

I ran some quick tests on the minified, bundled sizes of the latest versions of related modules for comparison:

module           size
axios            16K
chunked-request  7.2K
fetch            435K
fetch-stream     91K
whatwg-fetch     7.1K
grab-http        8.1K
hyperquest       134K
nets             32K
request          1.0M
requests         12K
simple-get       90K
stream-http      87K
superagent       13K
xhr              6.1K
xhr-request      9.8K

Here was the bundle command:

browserify -g uglifyify ./size.js --standalone size | uglifyjs -c -o ./size.bundle.js

@ariutta
Author

ariutta commented Jul 27, 2016

Maybe the best option would be to minimally polyfill the stream standard with a notice that the full stream standard will be used when it lands in most major browsers. The minimal polyfill would be limited to event emitters (data, error, end) and basic functions (.write, .end). Polyfilling the entire stream standard, including backpressure, etc., would drastically increase the size of the library.
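A sketch of what such a minimal shim might look like, limited to the data/error/end events and .write/.end functions described above (names are illustrative, not a proposed API):

```javascript
// Minimal sketch of the proposed shim: just data/error/end events plus
// .write/.end, with none of the standard's backpressure machinery.
class MiniStream {
  constructor() {
    this.listeners = { data: [], error: [], end: [] };
  }
  on(event, fn) {
    this.listeners[event].push(fn);
    return this;
  }
  emit(event, arg) {
    for (const fn of this.listeners[event]) fn(arg);
  }
  write(chunk) {
    this.emit('data', chunk);
  }
  end() {
    this.emit('end');
  }
}
```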

@kornelski
Contributor

Yes, I think a minimal polyfill is a good way to go. With XHR we don't even have the ability to do proper backpressure.

@ariutta
Copy link
Author

ariutta commented Jul 29, 2016

Hmm... WHATWG streams actually remove data events and also split Node's pipe method into pipeTo(writable) and pipeThrough(transform). I'm hesitant to try creating a minimal polyfill of the streams standard when it's not yet in production anywhere (I didn't see it on caniuse.com) and it's significantly different from Node streams.
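For illustration, the pipeTo/pipeThrough split looks like this on a runtime with the Streams Standard globals (modern browsers, Node 18+):

```javascript
// Sketch of the WHATWG pipe split: pipeThrough(transform) for transforms,
// pipeTo(writable) as the terminal step.
const upper = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  }
});

let result = '';
const sink = new WritableStream({
  write(chunk) { result += chunk; }
});

const source = new ReadableStream({
  start(controller) {
    controller.enqueue('hello ');
    controller.enqueue('world');
    controller.close();
  }
});

const done = source.pipeThrough(upper).pipeTo(sink);
// done is a promise that resolves when the pipe finishes;
// result is then 'HELLO WORLD'.
```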

@ariutta
Author

ariutta commented Jul 29, 2016

Also, there is the observable standard, which is probably less suited for the purpose of this library, but it does offer yet another way of working with multiple chunks over time. If for some reason the observable standard gets traction and the stream standard doesn't, then it would make sense to use the observable standard here.
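For comparison, the observable style boils down to a subscribe function taking next/error/complete callbacks. A minimal sketch, loosely following the shape of the TC39 Observable proposal (the helper name is invented):

```javascript
// Minimal sketch of observable-style chunk delivery: an "observable" here is
// just an object whose subscribe() pushes chunks to an observer with
// next/error/complete callbacks. Hypothetical shape, not a real library API.
function fromChunks(chunks) {
  return {
    subscribe(observer) {
      for (const chunk of chunks) observer.next(chunk);
      observer.complete();
    }
  };
}
```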

@ariutta
Author

ariutta commented Jul 30, 2016

But if we continue with the web streams standard, the web-streams-polyfill could serve as a start. It basically just turns the reference implementation into an npm package. I got it down to a minified size of 47K by using uglifyify and unassert. To reduce size further, we could look at dropping code from readable-stream.js.

@kornelski
Contributor

If the polyfill is heavy I see a couple of possible workarounds:

  • don't bundle the polyfill with superagent, and inform users that they should polyfill the global ReadableByteStream (and other objects) if they want backwards-compatibility. Users could choose which implementation they want.
    or
  • in superagent provide only hooks for plugins for streaming, and leave streams implementation up to plugins.
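A sketch of what the second option might look like: superagent would expose only per-chunk write/end hooks, and a plugin would bridge them to whatever stream implementation it prefers. Every name here is hypothetical, not an existing superagent API:

```javascript
// Hypothetical sketch of the "hooks only" idea: superagent itself would just
// call write/end as chunks arrive, and a plugin supplies the handlers.
// All names are invented for illustration.
function chunkHooks(handlers) {
  return {
    write(chunk) { if (handlers.onChunk) handlers.onChunk(chunk); },
    end() { if (handlers.onEnd) handlers.onEnd(); }
  };
}

// A plugin could then bridge these hooks to a WHATWG ReadableStream or a
// Node stream, without superagent bundling either implementation.
```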

@ariutta
Author

ariutta commented Aug 2, 2016

I like the second idea. Would the hooks be event emitters and .write/.end methods? If so, pull-through and pull-stream might be useful; they're 9.8K bundled together and minified.

In this issue, there's some interesting discussion on lightweight transforms for web streams. Would the hooks be easy to use with transducers?

@kornelski
Contributor

Currently superagent, in the Node version, has a rather ad-hoc mechanism for "parsers":

https://github.com/visionmedia/superagent/blob/master/lib/node/parsers/text.js

which boils down to "data" and "end" events. Is that the equivalent of the .write/.end methods you had in mind?
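Simplified from the linked text.js, such a parser boils down to roughly this shape (a sketch, not the exact source):

```javascript
// Rough sketch of a superagent-style parser: accumulate 'data' events on
// the response object and call back on 'end'. Simplified for illustration;
// the real parser's callback signature may differ.
function textParser(res, done) {
  res.text = '';
  res.on('data', (chunk) => { res.text += chunk; });
  res.on('end', (err) => done(err, res.text));
}
```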

@ariutta
Author

ariutta commented Aug 11, 2016

Yes, that sounds perfect.

@kornelski kornelski changed the title Implement efficient chunked transfer encoding Implement streaming in the browser Aug 22, 2016
@XeniaSiskaki

So, currently, if the server is sending the response in chunks, there is no way to receive it chunk by chunk in the browser?

@kornelski
Contributor

It depends on what you mean by chunks. If you mean chunked HTTP/1.1 transfer encoding, then you can, but superagent will wait until the whole response has been received.

@XeniaSiskaki

Well, on the server I subscribe to a stream which emits data as it becomes available (that's what I mean by chunks). But because sending the whole response takes a long time (it's fetching one year's worth of data), the client always throws an ERR_INCOMPLETE_CHUNKED_ENCODING error, even though the server didn't finish sending the data.

@kornelski
Contributor

Streaming in the browser will not solve your problem, as the error is due to your web server cutting the connection server-side.

@XeniaSiskaki

Is there any way to verify that? The server seems to be sending data normally, without any errors.

@kornelski
Contributor

Try using raw XMLHttpRequest or curl to verify whether the server sends the data fully.

@XeniaSiskaki

I tried using an XMLHttpRequest but still get the error. Even after the browser reports the error, the server continues to send data and at some point ends (res.end()) with no error. So I'm not sure the problem is on the server side.

@kornelski
Contributor

If you get the error with raw XMLHttpRequest, then there's nothing we can do. The browser would report the same error via a streaming API too, and stop streaming early.

It's most likely a problem outside the browser, e.g. in your CDN, your reverse proxy, your web server, or the middleware managing Node.

@kornelski
Contributor

fetch() is a thing now and it works well with streaming, so if you need streams, use fetch().
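A sketch of that approach: fetch()'s res.body is a ReadableStream, so chunks can be consumed as they arrive. For a self-contained demo, a locally constructed Response stands in for a network fetch; res.body behaves the same either way:

```javascript
// Consume a fetch-style response body chunk by chunk via its ReadableStream.
async function readAll(res) {
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each `value` is a Uint8Array chunk; decode incrementally.
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

// Stand-in for: const res = await fetch(url);
const res = new Response('streamed body');

// readAll(res) resolves to 'streamed body'.
```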
