HTTP routers should use bounded buffering. #287
Comments
It's not actually just as simple as switching to a channel, since there's a race between `poll_ready` and `call`. We'll need to adopt an mpsc variant that reserves a slot in the channel in a way that makes this race impossible.
I think the reason it is unbounded is that the router doesn't currently know what to do if the underlying service isn't ready. They're essentially …. So, if we put a limit on the buffer, then we need to figure out what to do when that limit is hit. Respond with a 503?
The main issue is … the main point of …. Now, services that are ….
If I understand @carllerche correctly, it seems like we don't need to use …. I'm nominating this for 0.3 since it's a quality issue that affects one of the central selling points of Conduit (reliability and resource consumption).
@briansmith I don't recall all the details of the existing stack, but I believe that some flavors of middleware require the concept of …. If all ….
@carllerche the issue is that things like `Reconnect` are not `Clone`, and it would probably be complicated to make them `Clone`.
Ah yes... making ….
So, I believe that the strategy is to limit the max number of in-flight requests. As far as I can tell, there are two options: ….
What should the strategy be?
@carllerche we should ideally be able to enforce this limit per endpoint, so that slow endpoints can't DoS the whole proxy.
@olix0r Then, the strategy would be to "load shed" (drop) requests to slow endpoints. If that is OK, then the implementation is relatively straightforward.
In progress: tower-rs/tower#49
How do we want to set the max in-flight requests per endpoint?
Probably as part of the environment-defined configuration. I would also accept a ….
What would be a good default value?
@carllerche at a guess, 10_000?
Currently, the max number of in-flight requests in the proxy is unbounded. This is due to the `Buffer` middleware being unbounded.

This is resolved by adding an instance of `InFlightLimit` around `Buffer`, capping the max number of in-flight requests for a given endpoint. Currently, the limit is hardcoded to 10,000; however, this will eventually become a configuration value.

Closes linkerd#287

Signed-off-by: Carl Lerche <[email protected]>
Currently, the `Inbound` and `Outbound` HTTP routers use unbounded buffers: each produces services wrapped in `tower_buffer::Buffer`.

At a glance, it's not immediately obvious to me why `Buffer` needs to use `mpsc::unbounded` -- if we used `mpsc::channel`, I'd think `Buffer::poll_ready` could call `mpsc::Sender::poll_ready`. Is it more complicated than that? @carllerche @seanmonstar