num of worker_processes set to max num of cores of cluster node with cgroups-v2 #11518
Comments
Duplicate of #9665
/retitle num of worker_processes set to max num of cores of cluster node with cgroups-v2
We need to update our support for cgroups v2. To my knowledge this is the package that figures out the CPU count, and it hasn't been updated in 6 years.
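For reference, a minimal sketch of why detection breaks under cgroups v2 (the paths and arithmetic are illustrative, not the controller's actual code): cgroups v1 exposes the CPU quota in cpu.cfs_quota_us/cpu.cfs_period_us, while cgroups v2 folds both into a single cpu.max file, so a v1-only reader sees no limit and falls back to every core on the node.

```sh
# Illustrative only: derive an effective CPU count inside a cgroups-v2 container.
# /sys/fs/cgroup/cpu.max contains "<quota> <period>" or "max <period>".
read quota period < /sys/fs/cgroup/cpu.max
if [ "$quota" = "max" ]; then
  nproc                                        # no limit -> all node cores (128 here)
else
  echo $(( (quota + period - 1) / period ))    # ceil(quota / period)
fi
```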
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
Hi @strongjz, is there any plan to fix this bug for cgroup v2?
Also ran into this issue. I saw a related PR (#11778) that seems to attempt to resolve the issue here, and it says the fix was included in a later release. After upgrading to that version and adding a CPU limit to the pods, I believe this issue is resolved.
Same here, I ran into the same issue. Version 1.11.3 sets the number of workers to match the CPU limit of the pod, which fixed the issue for me.
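For anyone else landing here, a hedged example of that workaround (release name, namespace, and resource values are assumptions, not from this thread): upgrade to a chart that ships controller v1.11.3 or later and give the controller pod a CPU limit, so the worker count follows the limit rather than the node's core count.

```sh
# Assumed release name and namespace; adjust to your install.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.resources.limits.cpu=4 \
  --set controller.resources.limits.memory=2Gi
```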
Am I holding it wrong?
I'm reading a comment here which says to set worker_processes to no more than 24; mine is automatically set to 128, which causes weird things to happen.
#3574 (comment)
The problem goes away when I set worker_processes in the Helm chart. Where is this documented? I've tried to search around for comments on ulimits and ingress-nginx, but I'm not finding a lot.
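For illustration, one way to pin it via the chart (release name and namespace are assumptions): the worker-processes ConfigMap option, exposed through controller.config in the Helm values, overrides the default "auto" and renders as worker_processes 24; in nginx.conf.

```sh
# Assumed release name/namespace; --set-string keeps the value a string in the ConfigMap.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.worker-processes=24
```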
What happened:
From the logs of the ingress-nginx-controller I'm reading... This all went away when I configured worker_processes 24 in the Helm chart. Maybe this is related to #7107?
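A quick way to confirm what the controller actually rendered (deployment name and namespace are assumptions):

```sh
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  grep worker_processes /etc/nginx/nginx.conf
```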
What you expected to happen:
NGINX automagically configures a proper number of worker processes.
I expect this has something to do with the 128 cores.
When I'm running ulimit inside the container, I'm getting quite low values, despite having configured the host and also having configured containerd. I also tried using an initContainer with the Helm chart, to no avail.
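To see the limits the container itself observes, rather than what the host or containerd is configured with, something like this can help (deployment name and namespace are assumptions):

```sh
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  sh -c 'ulimit -n; nproc; cat /sys/fs/cgroup/cpu.max 2>/dev/null || echo "no cgroup v2 cpu.max"'
```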
I'm "pretty sure" all of the machines in our cluster will have at least 24 cores, so this is "probably" not a problem to configure statically.
NGINX Ingress controller version (exec ...):
NGINX Ingress controller
Release: v1.10.1
Build: 4fb5aac
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.3
Kubernetes version (use kubectl version):
Client Version: v1.26.0
Kustomize Version: v4.5.7
Server Version: v1.29.0
Environment:
Bare metal, Supermicro, AMD EPYC 7763 64-Core Processor, 256 GB RAM
Kernel (e.g. uname -a):
Linux b-w-3 5.15.0-113-generic #123-Ubuntu SMP Mon Jun 10 08:16:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
helm ls -A | grep -i ingress