
client side of server streaming call does not always drain HTTP response body #427

Closed
jhump opened this issue Dec 20, 2022 · 4 comments · Fixed by #536
Labels
bug Something isn't working

Comments

@jhump
Member

jhump commented Dec 20, 2022

When invoking a server or bidi stream using the Connect or gRPC-web protocols, the response stream is only read up to the end of the "end of stream" frame. The client never verifies that the body contains nothing more. The standard gRPC protocol does not exhibit this issue, since it must fully drain the body and read trailers in order to get the RPC status.

This is an issue when trying to use HTTP middleware that wraps the body reader. The middleware will never detect that the response is finished, because the underlying reader is never fully drained (i.e. read until it returns a non-nil error, typically io.EOF). This also means the server could write additional content, sending back a corrupt/invalid response body, without the RPC client noticing. (It is unclear what action should be taken in that case -- for example, whether it should result in an RPC wire error, especially if the call otherwise succeeded.)

@jhump jhump added the bug Something isn't working label Dec 20, 2022
@jhump jhump changed the title from "server streaming call does not always drain HTTP response body" to "client side of server streaming call does not always drain HTTP response body" Dec 20, 2022
@akshayjshah
Member

Think this is closely related to #397.

@jhump
Member Author

jhump commented Dec 21, 2022

One thing I discovered was my own misuse of the APIs: if the client always calls Close on a server stream, or CloseResponse on a bidi stream, the response body is drained.

So perhaps this really is just a documentation issue: nothing in the docs (in the Go docs or on connect.build/docs) describes how to correctly use the stream types, such as the requirement that a close method be called for cleanup. (I'm too used to the gRPC interfaces, which do not have such methods and only require the client to drain the response stream [or cancel the context] for proper cleanup.)

Although I think it may still be useful to drain more eagerly to verify that there's no invalid trailing data after the "end stream" message. If there's an appetite for that sort of fix, I have a change here: bufbuild/connect-go@main...jhump:jh/exhaust-response (it also includes a test that demonstrates the issue/fails without the non-test changes).

@emcfarlane
Contributor

@jhump tried looking up your fix but can't see it. Is it meant to be on a branch here? https://github.com/jhump/connect-go/branches

@jhump
Member Author

jhump commented Jun 27, 2023

@emcfarlane, sorry about that. Try again.

emcfarlane referenced this issue Jul 11, 2023
For connect and gRPC-web, on receiving end-of-stream payloads, ensure no
extra data was written by draining the reader and erroring if more data is present.

Extension of https://github.com/bufbuild/connect-go/pull/533

Fixes https://github.com/bufbuild/connect-go/issues/427