
For HTTP server use channelReadComplete() to flush writes #177

Closed
NiteshKant opened this issue Jul 4, 2014 · 0 comments

@NiteshKant
Member

Our current recommendation for writing an HTTP request handler is:

        RxNetty.createHttpServer(9999, new RequestHandler<ByteBuf, ByteBuf>() {
            @Override
            public Observable<Void> handle(HttpServerRequest<ByteBuf> request, HttpServerResponse<ByteBuf> response) {
                response.writeString("Hello!");
                return response.close();
            }
        });

response.close() above does an explicit flush() on the response, which in turn flushes the channel.

This model works well for a single request-response scenario on a connection. However, it is sub-optimal for HTTP pipelining, where we could do a single flush for multiple responses on the same channel.

Netty provides a callback, ChannelInboundHandler.channelReadComplete(), which is invoked when the eventloop has read all the available data on the channel for a single loop iteration. "Read all the available data" here also means that the pipeline is executed for the read data, which in RxNetty means that the requests have been processed, unless the processing is done on a different thread than the eventloop.
For non-pipelined HTTP requests this read will consist of a single request, but for HTTP pipelining this data can consist of multiple HTTP requests.
Now, if we only do a flush() in ChannelInboundHandler.channelReadComplete(), then netty will do a "gathering write" for all responses written so far on the channel. This is more efficient than doing a flush() on every response.
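The effect of the two strategies can be sketched without Netty. The following is a minimal illustration (FakeChannel is a hypothetical stand-in, not a Netty class): it buffers writes until flush() and counts how many flushes happen when three pipelined requests are flushed per-response versus once on read-complete.

```java
import java.util.ArrayList;
import java.util.List;

public class FlushBatching {
    // Hypothetical channel: buffers writes, each flush() drains the buffer
    // in one "gathering write".
    static class FakeChannel {
        private final List<String> pending = new ArrayList<>();
        int flushCount = 0;

        void write(String response) { pending.add(response); }

        void flush() {
            if (!pending.isEmpty()) { // flushing an empty buffer is a no-op
                flushCount++;
                pending.clear();
            }
        }
    }

    public static void main(String[] args) {
        String[] pipelinedRequests = {"GET /a", "GET /b", "GET /c"};

        // Strategy 1: flush after every response (what close() does today).
        FakeChannel perResponse = new FakeChannel();
        for (String req : pipelinedRequests) {
            perResponse.write("response to " + req);
            perResponse.flush();
        }

        // Strategy 2: flush once after the whole batch of pipelined
        // requests is processed, simulating channelReadComplete().
        FakeChannel onReadComplete = new FakeChannel();
        for (String req : pipelinedRequests) {
            onReadComplete.write("response to " + req);
        }
        onReadComplete.flush();

        System.out.println(perResponse.flushCount);    // 3
        System.out.println(onReadComplete.flushCount); // 1
    }
}
```

Three flushes versus one for the same three responses; on a real channel each flush is (roughly) a syscall, which is where the saving comes from.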

Does this mean no one should do a flush() explicitly?

Not really. If one does not care about exactly when the data is flushed, the decision can be left to RxNetty. For a streaming response, the request handler would decide when to flush.

How will the RequestHandler code look like?

The request handlers currently piggyback on close() returning an Observable that can be returned as-is from RequestHandler.handle(). This keeps the code neat because handlers do not have to create a new Observable.
One way to handle this would be to provide a HttpServerResponse.close(boolean flush) method returning an Observable that completes on flush-complete when a flush is requested, or completes immediately when it is not.
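A minimal sketch of the proposed close(boolean flush) semantics, with the completion signal modelled as a CompletableFuture instead of an Observable for brevity (the Response, onFlushed, and flushDone names are hypothetical, not RxNetty API):

```java
import java.util.concurrent.CompletableFuture;

public class CloseSemantics {
    // Hypothetical response object demonstrating the two completion modes.
    static class Response {
        private final CompletableFuture<Void> flushDone = new CompletableFuture<>();
        boolean closed = false;

        // close(true): the returned future completes only once the data is
        // actually flushed. close(false): it completes immediately, leaving
        // the flush to the transport (e.g. on channelReadComplete()).
        CompletableFuture<Void> close(boolean flush) {
            closed = true;
            return flush ? flushDone : CompletableFuture.completedFuture(null);
        }

        // Invoked by the transport when the pending data has been flushed.
        void onFlushed() { flushDone.complete(null); }
    }

    public static void main(String[] args) {
        Response r = new Response();
        CompletableFuture<Void> done = r.close(true);
        System.out.println(done.isDone());  // false: waiting for the flush
        r.onFlushed();
        System.out.println(done.isDone());  // true

        Response r2 = new Response();
        System.out.println(r2.close(false).isDone()); // true immediately
    }
}
```

The point of the design is that RequestHandler.handle() can still return the result of close(...) directly in both modes; only the moment of completion changes.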

Does this mean some responses will never get flushed?

No, unless you are processing the request on a different thread than the eventloop. In that case, the user must flush explicitly.

Does this introduce latency for responses, compared to flushing per response, since responses are queued up on the socket?

The answer depends on which is more costly: multiple flushes, or the latency caused by buffering.
I do not have an answer now, but we will share a benchmark on this.

@NiteshKant NiteshKant added this to the 0.3.8 milestone Jul 4, 2014
@NiteshKant NiteshKant self-assigned this Jul 4, 2014
NiteshKant pushed a commit to NiteshKant/RxNetty that referenced this issue Jul 5, 2014
- Provided a ChannelWriter.close(boolean flush) method that supports closing without flushing.
- Added a flush in onChannelReadComplete() for ServerRequestResponseConverter.

With this change, if the server's RequestHandler calls response.close(false) then the flush will only be done when the channel read completes.
NiteshKant added a commit that referenced this issue Jul 5, 2014
@NiteshKant NiteshKant removed their assignment Aug 19, 2014