Add body_size_limit option to http module #836
Conversation
config/config.go (Outdated)
@@ -287,6 +288,11 @@ func (s *HTTPProbe) UnmarshalYAML(unmarshal func(interface{}) error) error {
 	if err := unmarshal((*plain)(s)); err != nil {
 		return err
 	}
+
+	if s.MaxResponseLength < 0 {
The underlying Go code treats that as valid; I think we should too.
After looking at the body_size_limit code in Prometheus, for consistency's sake I copied the behavior implemented there: anything less than 0 is reinterpreted as MaxInt64.
I'm not sure I like that, but it's consistent.
Can we match Prometheus behaviour? Prometheus looks at the uncompressed response.
# An uncompressed response body larger than this many bytes will cause the
# scrape to fail. 0 means no limit. Example: 100MB.
# This is an experimental feature, this behaviour could
# change or be removed in the future.
[ body_size_limit: <size> | default = 0 ]
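For illustration, with the Prometheus-style syntax above, a blackbox_exporter module might set the option like this (module name and size value are made up for the example):

```yaml
modules:
  http_2xx_limited:
    prober: http
    http:
      # Fail the probe if the uncompressed body exceeds this size.
      # 0 (the default) means no limit.
      body_size_limit: 1MB
```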
Force-pushed from 95cdc57 to 52427fb (Compare)
OK, I added that for consistency. I used the same name and syntax, too.
This option limits the maximum body length that will be read from the HTTP server. It's meant to prevent misconfigured servers from causing the probe to use too many resources, even if temporarily. It's not an additional check on the response; for that, use the resulting metrics (probe_http_content_length, probe_http_uncompressed_body_length, etc.).

Signed-off-by: Marcelo E. Magallon <[email protected]>
Force-pushed from 52427fb to 61c5cf6 (Compare)
lgtm 👍🏼
ps: PR title is outdated after rename to body_size_limit
Co-authored-by: Julien Pivotto <[email protected]>
Does it actually work with compressed data?
I wonder if we still store the entire body in memory when it is compressed.
Do you mean whether it limits the amount of data that is read? Yes, it does: in the stack of readers that is built (body -> decompressor -> limiter -> byte counter), each one just forwards its read calls to the next reader. The decompressor in particular decompresses on the fly, and if the limiter gets more data than it's willing to accept, it will error out, causing a chain of Close calls.
That depends on whether or not regular expressions are used. If no regular expressions are involved, it only streams (decompresses on the fly). If regular expressions are involved, that part of the code does a full read before trying to apply them: because multiple regular expressions are allowed, the body has to be buffered so that each one can be matched against the same bytes.