
Request for example: streaming response bodies #407

Closed
rw opened this issue Feb 18, 2020 · 6 comments

rw commented Feb 18, 2020

When writing streaming data for an http response, I had expected to write chunks to a Writer. Instead, it seems that the http-service and http crates expect an async Stream of chunks (an inversion of control).

Could we add an example of implementing streaming response bodies to the tide examples directory? Thank you! :-]

rw commented Feb 24, 2020

Bump!

gauteh commented Mar 4, 2020

Hi, I would be really interested in this as well!

yoshuawuyts (Member) commented

Hi, thanks for opening this. You're right that we've somewhat inverted control compared to many other frameworks: each Tide endpoint receives a request and returns a response. Your question seems to be: "but if we've already returned a response, how can we continue to stream a body?"

We've made progress on that as part of #414, which switches us over to http-types. That includes a Body type that can be constructed from any AsyncBufRead. For example, to stream a file as the response body you can do:

app.at("/").get(|_| async move {
    // `res` must be mutable so the body can be set below.
    let mut res = Response::new(200);
    let body = Body::from_reader(File::open("./my-file").await?);
    res.set_body(body);
    Ok(res)
});

What's nice about this is that any error that occurs inside the request handler will influence the returned HTTP status code. Requests come in, responses go out.

However, if an error occurs while writing the body itself, the framework becomes responsible for handling it, since by that point a status code can no longer be set. That's more or less the direction we're heading.
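To make that error model concrete, here's a hypothetical, framework-agnostic sketch of folding a handler error into the status code before the response goes out. The `respond` function and its types are invented for illustration; they're not Tide's actual API.

```rust
// Hypothetical sketch: mapping a handler's Result into a status code.
// `respond` is an invented name; Tide's real machinery is more involved.
fn respond(handler_result: Result<String, std::io::Error>) -> (u16, String) {
    match handler_result {
        // The handler produced a body: send it with 200 OK.
        Ok(body) => (200, body),
        // The handler failed before returning a response, so the
        // framework can still pick an error status code.
        Err(err) => (500, err.to_string()),
    }
}
```

Once the body is streaming, that window has closed: the status line has already gone out on the wire, which is why body-write errors have to be handled by the framework instead.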

Manually writing to a stream

The example above works well when you have a well-defined stream that can be exhausted. But that's not always the case: WebSockets and SSE, for example, create a persistent stream that other endpoints may want to write to. So even after a response has been returned, it's desirable to keep writing to it.

I've written about a full channel model for tide here, though that covers some of the higher-level APIs we may want to include for Tide. At a lower level the solution is a combination of channel, AsyncRead and task::spawn.

I recently wrote an SSE implementation that works with this model (async-sse):

use async_sse::{decode, encode, Event};
use async_std::prelude::*;
use async_std::io::BufReader;
use async_std::task;

#[async_std::main]
async fn main() -> http_types::Result<()> {
    // Create an encoder + sender pair and send a message.
    let (sender, encoder) = encode();
    task::spawn(async move {
        sender.send("cat", b"chashu", None).await;
    });

    // Decode messages using a decoder.
    let mut reader = decode(BufReader::new(encoder));
    let event = reader.next().await.unwrap()?;
    // Match and handle the event

    Ok(())
}

The way this works is that the stream has two parts: a sender and a receiver. We pass the receiver (the encoder) as the http body, but move the sender into a new task and keep that running. Alternatively the sender could be stored in local state and accessed from other endpoints (which is how we plan to implement the API outlined in Tide Channels). Once the body is returned from the endpoint, it starts encoding the messages sent by the Sender.
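The sender/receiver split described above can be sketched with nothing but the standard library, with threads and mpsc channels standing in for task::spawn and async channels. `streaming_body` is an invented name for illustration:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the sender/receiver split: the receiver plays the role of
// the response body, while the sender keeps writing from a background
// task even after the "response" (the receiver) has been handed off.
fn streaming_body() -> mpsc::Receiver<Vec<u8>> {
    let (sender, receiver) = mpsc::channel();
    thread::spawn(move || {
        for chunk in [b"chunk-1".to_vec(), b"chunk-2".to_vec()] {
            sender.send(chunk).unwrap();
        }
        // Dropping the sender ends the stream.
    });
    receiver
}
```

Consuming the receiver (here via its blocking iterator) yields chunks until the sender is dropped; in the async version the same shape appears, with an async channel bridged into an AsyncRead body.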

Conclusion

Thanks again for asking! Hopefully this post sheds some light on how streaming in Tide works, how it relates to error handling, and some of the benefits that come with that. We're working on convenient APIs for well-known persistent streaming protocols, so this should improve significantly in the future. And once we merge #414, the common cases like "read a file" or "echo a request" should be straightforward. Thanks heaps!

gauteh commented Mar 8, 2020

Thanks for the extensive answer!

pickfire (Contributor) commented

Is this related to processing the data after first responding to the request?

rw commented Mar 12, 2020

Thank you so much! Closing.

rw closed this as completed Mar 12, 2020