Kibana keeps open sockets for timed out requests #30058
Related to https://github.com/elastic/support-dev-help/issues/5765

It looks like when a request times out (based on the ES JS client timeout setting), the socket stays open for a while even though we abort the request. E.g. when running Kibana on master I see ~5 open sockets in an "idle" state, with short spikes for stats-collection or Canvas pads, but if I slow down the ES connection speed (e.g. via nginx `limit_rate` to ~10 bytes/sec), the open socket count grows steadily (50, 60, 70, 80, 90+) and stays at that level until the connection speed returns to something acceptable. It can probably go even higher with our default values for `maxSockets` (`Infinity`) and `maxFreeSockets` (256). @spalger, do you know whether this is how it's supposed to work, or whether sockets should be destroyed immediately for aborted requests?

Looks like `stats-collection` from monitoring is the one that initiates that many requests; I'm going to check whether it does everything properly.
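For context, a minimal sketch (not Kibana's actual code) of how these counts can be observed on a plain Node.js keep-alive agent; the host, port, timeout, and polling interval are illustrative assumptions:

```js
// Sketch: count active vs. pooled ("free") sockets on a keep-alive agent
// while requests are aborted on timeout, as described above.
const http = require('http');

const agent = new http.Agent({
  keepAlive: true,
  maxSockets: Infinity, // default mentioned in this issue
  maxFreeSockets: 256,  // default mentioned in this issue
});

// `agent.sockets` / `agent.freeSockets` map "host:port" to arrays of sockets.
const count = (pools) =>
  Object.values(pools).reduce((sum, list) => sum + list.length, 0);

// Illustrative request against a local ES; aborted once the socket times out,
// mirroring what the ES JS client does when its request timeout fires.
const req = http.get({ host: 'localhost', port: 9200, agent, timeout: 30000 });
req.on('timeout', () => req.abort());
req.on('error', () => {}); // the abort surfaces as a socket error; ignore it

setInterval(() => {
  console.log('active:', count(agent.sockets), 'free:', count(agent.freeSockets));
}, 5000).unref();
```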
Comments

Pinging @elastic/kibana-platform
> So it may happen that in …

Hey @elastic/stack-monitoring and @tsullivan, …
I expect an aborted request to release the socket, but we're using keep-alive sockets, so rather than closing the socket it is probably just keeping it in the free socket pool. The default …
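For illustration, a hedged sketch of the standard Node.js `http.Agent` knobs involved here (these are Node option names, not Kibana settings):

```js
const http = require('http');

// Keep-alive pooling as described above: a socket that is done with its request
// can be parked in the free pool (up to maxFreeSockets) instead of being closed.
const agent = new http.Agent({
  keepAlive: true,
  maxSockets: Infinity,
  maxFreeSockets: 256,
});

// If pooling is unwanted, disable keep-alive so sockets close when requests end...
const nonPooling = new http.Agent({ keepAlive: false });

// ...or explicitly drop every socket the agent currently holds (in use and free):
agent.destroy();
```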
Thanks @spalger. Yeah, I believe your theory is correct; I was just not sure whether it's something we want to keep non-configurable. I think the problem is that the request timeout in Cloud is 5 minutes (per @alexbrasetvik) and the default stats collection interval is 10s, so over 5 minutes one idling Kibana instance (just monitoring) can send 5m / 10s * 13 requests = 30 rounds * 13 requests = ~390 requests. These sockets aren't subject to the 1m idle timeout since they are busy-waiting during these 5 minutes. After 6 minutes the sockets will be disposed (I guess 390 - 256 = 134 sockets will be disposed as soon as the requests are aborted, and the rest after 1 minute), but Kibana will keep bombarding ES in the meantime. In any case, it feels like we should fix stats-collection and any other code we may have that doesn't adapt to ES availability.
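Spelling out the arithmetic above as a quick sanity check:

```js
const requestTimeoutSec = 5 * 60; // Cloud request timeout: 5 minutes
const collectionIntervalSec = 10; // default stats collection interval
const requestsPerRound = 13;      // requests sent per collection round

const rounds = requestTimeoutSec / collectionIntervalSec; // 30
const inFlight = rounds * requestsPerRound;               // 390 busy-waiting sockets
const overMaxFree = inFlight - 256;                       // 134 over maxFreeSockets
console.log({ rounds, inFlight, overMaxFree });           // { rounds: 30, inFlight: 390, overMaxFree: 134 }
```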
Hi, my takes on the questions brought up:

- Sounds like a perfectly fine suggestion to me! I don't recall any specific reason why we went with `setInterval`; a sketch of the alternative is below.
- Looks like that is by mistake.
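A sketch of that suggestion as I read it; `collectStats` is a hypothetical stand-in for one async round of stats requests to ES:

```js
// With setInterval, a new round starts every 10s even if the previous one is
// still stuck waiting on a slow ES, so timed-out requests pile up:
setInterval(() => collectStats(), 10000);

// With a self-rescheduling setTimeout, the next round is queued only after the
// previous one settles, so collection naturally backs off when ES is slow:
async function collectLoop() {
  try {
    await collectStats();
  } catch (err) {
    // ES unavailable or the request timed out; we still wait before retrying
  }
  setTimeout(collectLoop, 10000);
}
collectLoop();
```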