v2: Investigate which limits/configs are useful for intake v2 #1299
Comments
Due to a technical limitation in the Go http.Server (sketched below), @graphaelli proposed having […]. We could introduce a new setting […].
As mentioned, with v2, we will read up to […].
To summarize:

Backend: […]

RUM: […]
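For context on the Go http.Server limitation referenced above: ReadTimeout is a single absolute deadline covering the whole request (headers and body), which sits badly with v2's long-running NDJSON streams. A minimal sketch, with illustrative values rather than the server's actual defaults:

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: ":8200",
		// ReadTimeout is an absolute deadline for reading the entire
		// request (headers and body), armed when the server starts
		// reading the request. A long-lived v2 NDJSON stream is cut
		// off once it expires, even if the client is still actively
		// sending events; there is no built-in per-read idle timeout.
		ReadTimeout: 30 * time.Second,
		// WriteTimeout behaves the same way for writing the response.
		WriteTimeout: 30 * time.Second,
	}
	// Error handling omitted for brevity.
	_ = srv.ListenAndServe()
}
```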
Thanks for the great writeup @roncohen! I agree that we cannot change default values for v1. But I also think we should not change the semantics of config values between v1 and v2 (e.g. read_timeout). Thus, I like the idea of introducing new config options wherever necessary, and suggest deprecating the v1 config options in […].
+1 on […]
Since the goal of […].
I think this is OK and a natural complement to […].
Maybe I overlooked something, but if a user sets: […] it would mean the max size per event is 30 MB when data comes in through v2, while the full payload is limited to 30 MB if the data comes in through v1. If we change the default […]. +1 on calling it […].
Clarified IRL, recapping here: I intended to propose removing the max HTTP payload size for v2 (that is, the size of the entire stream of data), as v2 is most efficient when using a long-running persistent connection, and instead applying that limit per event, currently the per-NDJSON-line size. I agree it would be clearer if a new setting was introduced […].
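To make the per-event bound concrete: a minimal sketch of enforcing a size limit per NDJSON line instead of per request body. The maxEventSize parameter and readEvents helper are hypothetical names for illustration, not the server's actual reader:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// readEvents reads newline-delimited JSON from r, enforcing maxEventSize
// per line rather than a cap on the whole request body.
func readEvents(r io.Reader, maxEventSize int) error {
	scanner := bufio.NewScanner(r)
	// The buffer capacity doubles as the maximum token size, so a single
	// oversized event surfaces as bufio.ErrTooLong instead of being
	// buffered past the cap.
	scanner.Buffer(make([]byte, maxEventSize), maxEventSize)
	for scanner.Scan() {
		event := scanner.Bytes()
		fmt.Printf("event: %d bytes\n", len(event))
	}
	return scanner.Err() // bufio.ErrTooLong when an event exceeds the limit
}

func main() {
	stream := strings.NewReader(`{"metadata":{}}` + "\n" + `{"transaction":{}}` + "\n")
	if err := readEvents(stream, 1024); err != nil {
		fmt.Println("error:", err)
	}
}
```

This way a long-running connection can stream an unbounded number of events while no single event forces a large allocation.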
Summary of an offline conversation we just had: these are the types of limits we'll apply for v2:

Backend: […]

RUM: […]
I should add that the […]
After working on #1352 and having thought about this some more: one of the things we discussed above was protecting the APM Server, i.e. making it difficult to force the APM Server to run out of memory by sending specific payloads, for example very big events. At the moment, we'll read […]. We decided on limiting the size of each event plus limiting the number of concurrent requests per IP, so the memory that any IP address can cause the APM Server to allocate is bounded by […]. Coming up with a […]. If we're aiming to set a memory bound per IP, we could also consider simply doing that. If we had […]
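A sketch of the per-IP concurrency cap this bound depends on; the ipSemaphore type and maxConcurrent knob are hypothetical names, not the actual implementation. Under these assumptions, the memory any single IP can pin is roughly maxConcurrent times the per-event size limit, plus buffering overhead:

```go
package intake

import (
	"net"
	"net/http"
	"sync"
)

// ipSemaphore caps concurrent in-flight requests per client IP.
type ipSemaphore struct {
	mu            sync.Mutex
	inflight      map[string]int
	maxConcurrent int
}

func newIPSemaphore(max int) *ipSemaphore {
	return &ipSemaphore{inflight: make(map[string]int), maxConcurrent: max}
}

func (s *ipSemaphore) acquire(ip string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.inflight[ip] >= s.maxConcurrent {
		return false
	}
	s.inflight[ip]++
	return true
}

func (s *ipSemaphore) release(ip string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.inflight[ip]--
}

// limitPerIP rejects a request up front when the client already has
// maxConcurrent requests in flight.
func limitPerIP(sem *ipSemaphore, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if !sem.acquire(ip) {
			http.Error(w, "too many concurrent requests", http.StatusTooManyRequests)
			return
		}
		defer sem.release(ip)
		next.ServeHTTP(w, r)
	})
}
```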
The idea of making […]. Referring to your former comment where you summarize our offline discussions, I assume that with a […]
@simitt we discussed offline how to progress here. It seems to me that an […]
* Enabled for RUM endpoints with default value 5000.
* Deny the request if the rate limit is hit when the HTTP request is established.
* Throttle reads if the HTTP request is successfully established, but the rate limit is hit while reading the request.
* Keep rate limiting for v1 unchanged.

part of elastic#1299

* Enabled for RUM endpoints with default value 5000.
* Throttle reads if the rate limit is hit while reading the request.
* Check the rate limit in batches of 10.
* Keep rate limiting for v1 unchanged.

part of elastic#1299
Opening up this discussion again: […] I think we should be fine to start with those. We can re-open this if it turns out more is needed at a later stage. WDYT @simitt @graphaelli?
Agreed that those are sufficient to start with, and we should revisit after more testing and real-world usage.
SGTM |
* Enabled for RUM endpoints with default value 300, burst multiplier 3.
* Throttle reads if the rate limit is hit while reading the request; deny the request if the limit is already hit when the request is started.
* Check the rate limit in batches of 10.
* Use eviction handling for the LRU cache holding the rate limiters.
* Keep rate limiting for v1 unchanged.

partly implements elastic#1299
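A rough sketch of how the pieces named in this commit message could fit together, using golang.org/x/time/rate token buckets held in a hashicorp/golang-lru cache. The helper names and cache size are illustrative, and the eviction handling mentioned in the commit is elided here:

```go
package intake

import (
	"time"

	lru "github.com/hashicorp/golang-lru"
	"golang.org/x/time/rate"
)

const (
	eventLimit      = 300  // events per second per client IP (the new default)
	burstMultiplier = 3    // burst capacity = eventLimit * burstMultiplier
	batchSize       = 10   // tokens are consumed in batches of 10
	cacheSize       = 1000 // bounded number of per-IP limiters kept in memory
)

// limiters holds one token bucket per client IP; the LRU bound keeps
// memory usage stable when many distinct IPs connect.
var limiters, _ = lru.New(cacheSize)

func limiterFor(ip string) *rate.Limiter {
	if l, ok := limiters.Get(ip); ok {
		return l.(*rate.Limiter)
	}
	l := rate.NewLimiter(rate.Limit(eventLimit), eventLimit*burstMultiplier)
	limiters.Add(ip, l)
	return l
}

// allowBatch is consulted once per batchSize events while reading the
// stream; a false return means throttling the read, or denying the
// request outright if the limit is already exhausted when it starts.
func allowBatch(ip string) bool {
	return limiterFor(ip).AllowN(time.Now(), batchSize)
}
```

Checking the limit in batches keeps the hot read path from hitting the limiter on every single event, at the cost of a granularity of at most batchSize events.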
We should investigate the behavior and applicability of configuration settings like MaxUnzippedSize, Read/WriteTimeout and RateLimit for v2, and see if there are new limits that make sense to apply instead.