Calling response.close() does an explicit flush() on the response, which in turn does a flush on the channel.
This model works well for a single request-response scenario on a connection. However, it is sub-optimal for HTTP pipelining, where we could do a single flush for more than one response on the same channel.
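For context, a typical handler under this current model might look like the following minimal sketch (assuming the RxNetty 0.3.x server API: RequestHandler, HttpServerResponse.writeString(), close(); names may differ in other versions):

```java
import io.netty.buffer.ByteBuf;
import io.reactivex.netty.protocol.http.server.HttpServerRequest;
import io.reactivex.netty.protocol.http.server.HttpServerResponse;
import io.reactivex.netty.protocol.http.server.RequestHandler;
import rx.Observable;

// Current model: close() flushes, so every response triggers its own flush.
public class HelloHandler implements RequestHandler<ByteBuf, ByteBuf> {
    @Override
    public Observable<Void> handle(HttpServerRequest<ByteBuf> request,
                                   HttpServerResponse<ByteBuf> response) {
        response.writeString("Hello!");
        return response.close(); // explicit flush happens here, once per response
    }
}
```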
Netty provides a callback, ChannelInboundHandler.channelReadComplete(), which is invoked when the event loop has read all the available data on the channel for a single loop iteration. "Read all the available data" here also means that the pipeline has been executed for the read data, which in RxNetty means that the requests have been processed, unless the processing is done on a different thread than the event loop.
For non-pipelined HTTP requests this read will consist of a single request, but with HTTP pipelining it can consist of multiple HTTP requests.
Now, if we only do a flush() in ChannelInboundHandler.channelReadComplete(), then Netty will do a "gathering write" for all responses written so far on the channel. This is more efficient than a flush() on every response.
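To make the pattern concrete, here is a plain Netty 4 handler sketch (not RxNetty's internal code) that writes a response per request but flushes only once per read iteration:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Write per request in channelRead(), flush once in channelReadComplete().
// With HTTP pipelining, all responses written in the same iteration go out
// in one gathering write.
public class FlushOnReadCompleteHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        Object response = process(msg); // hypothetical per-request processing
        ctx.write(response);            // queue the response, no flush yet
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();                    // single flush for everything written in this iteration
    }

    private Object process(Object request) {
        return request;                 // placeholder: echo the request back
    }
}
```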
Does this mean no one should do a flush() explicitly?
Not really, but if one does not care about when the data is flushed, the decision can be left to RxNetty. For streaming responses, the request handler would decide when to flush.
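As a hedged sketch of that streaming case, assuming the RxNetty 0.3.x writeStringAndFlush() method (which writes and flushes in one call), only the handle() method of a RequestHandler like the one above is shown:

```java
@Override
public Observable<Void> handle(HttpServerRequest<ByteBuf> request,
                               HttpServerResponse<ByteBuf> response) {
    // Streaming: flush each chunk explicitly so the client sees data as it is
    // produced, instead of waiting for the flush on channelReadComplete().
    return Observable.range(1, 5)
            .concatMap(chunk -> response.writeStringAndFlush("chunk " + chunk + "\n"));
}
```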
What will the RequestHandler code look like?
The request handlers currently piggyback on close() returning an Observable that can be returned as-is from RequestHandler.handle(). This keeps the code neat because they do not have to create a new Observable.
One way to handle this would be to provide an HttpServerResponse.close(boolean flush) method that returns an Observable which completes when the flush completes if a flush is requested, or completes immediately if it is not.
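With that overload, the handler body would stay essentially the same and only the final close call changes; a sketch of the proposed usage (again, only the handle() method is shown):

```java
@Override
public Observable<Void> handle(HttpServerRequest<ByteBuf> request,
                               HttpServerResponse<ByteBuf> response) {
    response.writeString("Hello!");
    // close(false): no flush here; RxNetty flushes on channelReadComplete(),
    // batching pipelined responses. The returned Observable completes
    // immediately, so it can still be returned as-is from handle().
    return response.close(false);
}
```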
Does this mean some responses will never get flushed?
No, unless you are processing the request on a different thread, in which case the flush should be done explicitly by the user.
Does this introduce latency for responses, as opposed to flushing per response, since the responses are queued up before being written to the socket?
The answer depends on which is more costly: multiple flushes, or the latency caused by buffering.
I do not have an answer now, but we will share benchmarks on this.
- Provided a ChannelWriter.close(boolean flush) method that offers close-without-flush functionality.
- Added a flush in onChannelReadComplete() for ServerRequestResponseConverter.
With this change, if the server's RequestHandler calls response.close(false), then the flush will only be done when the channel read completes.
Our current recommendation for writing an HTTP request handler is:
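A minimal sketch of such a handler, assuming the RxNetty 0.3.x bootstrap API (RxNetty.createHttpServer, HttpServer.startAndWait()) together with the close(boolean) overload described above:

```java
import io.netty.buffer.ByteBuf;
import io.reactivex.netty.RxNetty;
import io.reactivex.netty.protocol.http.server.HttpServer;

public final class PipelineFriendlyServer {
    public static void main(String[] args) {
        HttpServer<ByteBuf, ByteBuf> server = RxNetty.createHttpServer(8080, (request, response) -> {
            response.writeString("Hello!");
            // Close without an explicit flush; the flush happens on channelReadComplete(),
            // so pipelined responses on the same channel share a single flush.
            return response.close(false);
        });
        server.startAndWait();
    }
}
```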