Request for example: streaming response bodies #407
Bump!

Hi, I would be really interested in this as well!
Hi, thanks for opening this. You're indeed right that we've somewhat inverted control compared to many other frameworks: each Tide endpoint receives a request and returns a response. Your question seems to be: "but if we've returned a response, how can we continue to stream a body?"

We've made progress on that as part of #414, which switches us over to http-types. This includes a `Body` type that can be constructed from any async reader, so streaming a file looks roughly like this:

```rust
app.at("/").get(|_| async move {
    let mut res = Response::new(200);
    let body = Body::from_reader(File::open("./my-file").await?);
    res.set_body(body);
    Ok(res)
});
```

What's nice about this is that any error that occurs inside the request handler will influence the returned HTTP status code. Requests come in, responses go out. However, if an error occurs while writing the body itself, the framework becomes responsible for handling it. But that's okay, since at that point a status code can no longer be set. That's more or less the direction we're heading.

## Manually writing to a stream

The example above works well when you have a clear stream that can be exhausted. But that's not always the case: websockets and SSE, for example, create a persistent stream that other endpoints may want to write to. So even after a response has been returned, it's desirable to keep writing to it.

I've written about a full channel model for Tide here, though that covers some of the higher-level APIs we may want to include. At a lower level, the solution is a combination of a sender/receiver pair: the receiving half is handed to the framework as the HTTP body, while the sending half stays around to be written to. I recently wrote an SSE implementation that works with this model (async-sse):

```rust
use async_sse::{decode, encode, Event};
use async_std::prelude::*;
use async_std::io::BufReader;
use async_std::task;

#[async_std::main]
async fn main() -> http_types::Result<()> {
    // Create an encoder + sender pair and send a message.
    let (sender, encoder) = encode();
    task::spawn(async move {
        sender.send("cat", b"chashu", None).await;
    });

    // Decode messages using a decoder.
    let mut reader = decode(BufReader::new(encoder));
    let event = reader.next().await.unwrap()?;
    // Match and handle the event here.
    Ok(())
}
```

The way this works is that the stream has two parts: a sender and a receiver. We pass the receiver (the encoder) as the HTTP body, but we move the sender into a new task and keep that running. Alternatively, the sender could be stored in local state and accessed from other endpoints (which is how we plan to implement the API outlined in Tide Channels). The HTTP body is then returned from the endpoint, and will be ready to start encoding the messages sent by the sender.

## Conclusion

Thanks again for asking! Hopefully this post sheds some light on how streaming in Tide works, how it relates to error handling, and some of the benefits that come with that. We're working on building convenient APIs for common persistent streaming use cases, so that should improve significantly in the future. But for now, once we merge #414, the common cases of "read a file" or "echo a request" should be straightforward. Thanks heaps!
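To make the sender/receiver wiring described above concrete, here is a rough, untested sketch of how the two halves might fit into an endpoint. It assumes the post-#414 `Response`/`Body` API from the first example, that the encoder half returned by `async_sse::encode()` can be passed to `Body::from_reader`, and that the `/sse` route is just a placeholder:

```rust
app.at("/sse").get(|_| async move {
    // The encoder is the readable half; the sender is the writable half.
    let (sender, encoder) = async_sse::encode();

    // Keep the sending half alive in a background task. It could just as well
    // be stored in application state and written to from other endpoints.
    async_std::task::spawn(async move {
        sender.send("cat", b"chashu", None).await;
        // ... keep sending events for as long as the connection should stay open ...
    });

    // The readable half becomes the response body; it keeps producing bytes
    // for as long as the sender is alive. (A real SSE endpoint would also set
    // the `text/event-stream` content type.)
    let mut res = Response::new(200);
    res.set_body(Body::from_reader(encoder));
    Ok(res)
});
```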
Thanks for the extensive answer!
Is this related to processing the data after responding to the request first?
Thank you so much! Closing. |
When writing streaming data for an http response, I had expected to write chunks to a Writer. Instead, it seems that the http-service and http crates expect an async Stream of chunks (an inversion of control).
Could we add an example for implementing streaming response bodies to the tide examples directory? Thank you! :-]
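For what it's worth, here is an illustrative, framework-agnostic sketch of that inversion: instead of pushing chunks into a Writer, the producer sends chunks into a channel, and the channel's receiving end is the async Stream of chunks that a framework can use as a response body. The function name `chunked_body` and the use of `futures`/`async-std` are just for illustration; how the resulting stream is handed to a particular framework depends on the crate and version:

```rust
use futures::channel::mpsc;
use futures::stream::Stream;

fn chunked_body() -> impl Stream<Item = Vec<u8>> {
    let (tx, rx) = mpsc::unbounded::<Vec<u8>>();
    async_std::task::spawn(async move {
        // "Writer-style" producer: push chunks as they become available.
        tx.unbounded_send(b"hello ".to_vec()).ok();
        tx.unbounded_send(b"world".to_vec()).ok();
        // Dropping the sender ends the stream, and with it the body.
    });
    rx
}
```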