Optimize buffer release in HttpConnection #12239
To guide any optimization, I think we need a policy for when we believe it is optimal to hold onto an empty buffer and when it is not. My view is that a connection should hold an empty request buffer if there is a reasonable expectation that a read will be done in the near future. To that end, our current policy is to always hold the empty buffer during request handling, and only release it if the handling thread is exiting (async handling) or if the response is not persistent.

Given that the vast majority of requests for many applications do not carry request bodies, it may be better not to hold the buffer during handling if we know the request has no body. This would make sense because pipelining is rarely done, so any reuse of the buffer requires a round trip to the client anyway.

So currently for bodyless requests we:
This could become:
But this only works if we call fillInterest directly, without first trying to read the next request, since any attempt to read the next request requires a buffer. If the request has a body, then we should not release before handling, and we could then try reading again before registering fill interest.
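A minimal sketch of the policy described in the comment above, under stated assumptions: the class `BodylessRequestConnection`, the `hasRequestBody` flag, and the `registerFillInterest` callback are invented for illustration and are not Jetty's actual `HttpConnection` API. For a body-less request the empty buffer is released before handling, and after handling we register fill interest directly instead of attempting to read the next request, which would force re-acquiring a buffer.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration only; this is not Jetty's HttpConnection.
class BodylessRequestConnection
{
    private final Deque<ByteBuffer> pool = new ArrayDeque<>(); // stand-in for a real ByteBufferPool
    private ByteBuffer requestBuffer;

    ByteBuffer acquireRequestBuffer()
    {
        if (requestBuffer == null)
        {
            requestBuffer = pool.isEmpty() ? ByteBuffer.allocate(8192) : pool.poll();
            // Flush-mode convention: position..limit is unconsumed data, so this buffer is "empty".
            requestBuffer.clear().limit(0);
        }
        return requestBuffer;
    }

    void releaseRequestBuffer()
    {
        // Only an empty buffer may be returned to the pool.
        if (requestBuffer != null && !requestBuffer.hasRemaining())
        {
            pool.offer(requestBuffer);
            requestBuffer = null;
        }
    }

    void handleRequest(boolean hasRequestBody, Runnable onRequest, Runnable registerFillInterest)
    {
        if (!hasRequestBody)
        {
            // No body will be read during handling, and pipelining is rare, so the next
            // use of the buffer is a full round trip away: release the empty buffer now.
            releaseRequestBuffer();
            onRequest.run();
            // Go straight to fill interest; attempting to read the next request here
            // would require re-acquiring a buffer for a read that rarely succeeds.
            registerFillInterest.run();
        }
        else
        {
            // With a body, keep the buffer: application reads during handling will need it.
            onRequest.run();
            // Here we could also try reading the next request with the retained buffer
            // before registering fill interest.
            releaseRequestBuffer();
            registerFillInterest.run();
        }
    }
}
```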
I think this should be a 12.1-only thing.
Release request buffer before handling when there is no content

---------

Signed-off-by: Simone Bordet <[email protected]>
Co-authored-by: Simone Bordet <[email protected]>
Jetty version(s)
12
Description
After the work in #12237, there may be spots where we could aggressively release the buffer when we know it is empty.
For example, in `HttpConnection.onFillable()`, just before calling `onRequest.run()`. However, we need to carefully avoid races with the application: the buffer cannot be released after we call `onRequest.run()`, because that would race with application threads trying to read.

Another spot to investigate would be `HttpConnection.parseAndFillForContent()`, where, if a content chunk consumes the whole request buffer, we could technically release the request buffer, but then every `read()` would pay the cost of acquiring a buffer from the pool. Perhaps here the "normal" case is that applications would continue to read, and the buffer can be reused across reads without releasing/acquiring it from the pool for each read, therefore reducing the churn on the pool.
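A rough illustration of that trade-off, assuming hypothetical names (`ContentReadLoop`, `readChunk`, and the simple stand-in pool are not Jetty's `ByteBufferPool` or `parseAndFillForContent()` API): releasing the request buffer every time a content chunk consumes it means each subsequent read pays a pool acquire, whereas retaining it lets consecutive reads reuse the same buffer.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration only; not Jetty's parseAndFillForContent().
class ContentReadLoop
{
    private final Deque<ByteBuffer> pool = new ArrayDeque<>(); // stand-in for a real ByteBufferPool
    private ByteBuffer requestBuffer;
    private final boolean releaseWhenConsumed;

    ContentReadLoop(boolean releaseWhenConsumed)
    {
        this.releaseWhenConsumed = releaseWhenConsumed;
    }

    // One application read(): parse a content chunk out of the request buffer.
    ByteBuffer readChunk()
    {
        if (requestBuffer == null)
        {
            // With the aggressive policy, every read after a fully-consumed chunk
            // pays this acquire, churning the pool.
            requestBuffer = pool.isEmpty() ? ByteBuffer.allocate(8192) : pool.poll();
        }

        fillFromNetwork(requestBuffer);               // placeholder for the actual socket fill
        ByteBuffer chunk = requestBuffer.duplicate(); // hand the parsed content to the application
        requestBuffer.position(requestBuffer.limit()); // assume the chunk consumed the whole buffer

        if (releaseWhenConsumed && !requestBuffer.hasRemaining())
        {
            // Aggressive policy: return the now-empty buffer to the pool immediately.
            pool.offer(requestBuffer);
            requestBuffer = null;
        }
        // Otherwise: keep the empty buffer, so the next readChunk() reuses it directly,
        // which is cheaper when the application keeps reading (the common case).
        return chunk;
    }

    private void fillFromNetwork(ByteBuffer buffer)
    {
        // Stub: a real implementation would fill from the connection's EndPoint.
        buffer.clear();
    }
}
```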