fix: Limit HAProxy maximum concurrent connections #3115
Conversation
/test pull-kind-build
/lgtm /assign @BenTheElder
@aojea Thanks for reviewing!
(I looked at the CI failures. The failures appear unrelated to the change. They also appear for other PRs, so I think they may be flakes. Unfortunately, I don't have enough context to dive deeper.)
CI doesn't exercise haproxy /retest |
If the limit is not configured, HAProxy derives it from the file descriptor limit. The higher the limit, the more memory HAProxy allocates. That limit can be so high on modern Linux distros that HAProxy allocates all available memory.
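For context, HAProxy caps concurrent connections with the `maxconn` directive in the `global` section of its configuration. A minimal sketch of the kind of setting this PR introduces; the surrounding directives are illustrative, not kind's actual load balancer config:

```
global
  # Cap concurrent connections explicitly so memory use is bounded.
  # Without this, HAProxy derives maxconn from the process's file
  # descriptor limit, which can be enormous on modern distros.
  maxconn 100000

defaults
  mode tcp
  timeout connect 5s
  timeout client 50s
  timeout server 50s
```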
Force-pushed from bc78a97 to 47dfc9e
I fixed a typo in the commit subject and force-pushed.
/lgtm this is in the same ballpark as the example in the official docs https://www.haproxy.com/blog/protect-servers-with-haproxy-connection-limits-and-queues/
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aojea, dlipovetsky. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I think this change is preferable to #3028. Please see my explanation in #2954 (comment). 🙏
Why 100000 (10^5) connections? I assume that most kind clusters do not see 100000 concurrent connections. But I must admit that I have no evidence.
Increasing the limit to 1000000 (10^6) causes memory usage* to go from 18MB to 128MB (for a reproducible demonstration, see https://gist.github.com/dlipovetsky/23443bef17371a56acd8cf0579e3f6b4, which I've also linked in my issue comment). If the extra connections go unused, I think 110MB is too much additional overhead.
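As a quick way to see what HAProxy would start from when `maxconn` is not configured, you can inspect the shell's file descriptor limit; this is a rough illustration, not the measurement procedure from the gist above:

```shell
# When no maxconn is configured, HAProxy derives its default from the
# file descriptor limit of its process. On modern distros this limit
# (and hence the derived maxconn, and the memory allocated for it)
# can be very large.
ulimit -n
```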
Fixes: #2954