fix(nginx): use 1 worker process #10901
Conversation
The value 'auto' will create a worker process for each CPU thread, which can be problematic in containerised environments with CPU limits. A single NGINX worker process should be more than sufficient. Fixes: blakeblackshear#10898
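For context, a minimal sketch of the configuration being debated, assuming a stock nginx layout (the values mirror the discussion; Frigate's actual bundled config may differ):

```nginx
# worker_processes auto; spawns one worker per CPU thread the kernel
# reports, so a 128-thread host yields 128 workers even when the
# container is limited to a single core.
worker_processes 1;

events {
    # Each worker can hold up to this many simultaneous connections,
    # so a single worker still serves up to 1024 clients.
    worker_connections 1024;
}
```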
This is not correct. Nginx recommends this config, and we specifically added it several versions ago to improve HLS and other issues we were seeing: #4688
What is not correct about it? Could you please refer me to the issues users were seeing with a single worker process? NGINX is very well optimised and unlikely to ever be the bottleneck for an application like this. I hope you can agree that 128 NGINX processes for a container with a single CPU core are undesirable? What would you propose instead?
What is not correct is your statement that one should be more than enough. It isn't, especially when using HLS. A lot of time was spent improving and optimizing nginx performance, like I said.
You should just bind mount a custom nginx config that works for your specific use case, as this does not match the use case of the majority of Frigate users.
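For illustration, that bind-mount workaround might look like the compose snippet below; the in-container path is an assumption, so verify where the Frigate image you run actually keeps its nginx config:

```yaml
# Hypothetical docker-compose override; /usr/local/nginx/conf/nginx.conf
# is an assumed path -- check the Frigate image for the real location.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - ./custom-nginx.conf:/usr/local/nginx/conf/nginx.conf:ro
```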
Could you please refer me to issues or benchmarks which show that a single worker process with 1024 connections is not enough? Many other edits were made in #4688 which will have helped, and I am not sure changing the number of worker processes played a large role in that. As mentioned in the associated issue, a large number of worker processes will actually be detrimental to performance because of CPU throttling and resource contention. To reiterate, this is not an uncommon deployment.
I spent multiple days testing and benchmarking each of the changes, and some even ended up being reverted because they did not help. It seems like you have decided that any prior knowledge and experience of the Frigate maintainers is irrelevant and the change should just be made because the current default doesn't make sense in your particular use case.
In terms of Frigate users it is uncommon; the vast majority of Frigate users run on thin Intel clients / NUCs, not enterprise CPUs. For now I would suggest making the change locally. In the future this may be made more easily configurable, along with other things in nginx, and perhaps the default value can change based on signals that Frigate detects during startup (like total hardware thread count).
@uhthomas, for your specific case, you may consider deploying your docker containers with lxcfs (which emulates /proc inside the container to match the cgroups limits): https://github.com/lxc/lxcfs?tab=readme-ov-file#using-with-docker
If nginx is really attempting to spawn 128 worker processes in a CPU-limited container, a better fix would be to automatically infer the number of CPUs (based on cgroups, not /proc) during initialization and inject this into the nginx config (or pass it in as a flag when starting the nginx process).
@felipecrs Thanks for your insight. I agree, nginx should probably be cgroup aware. That said, I don't see an open issue for it and am unaware of how long it would take to trickle down to this project. I am deploying in Kubernetes, so I don't think lxcfs can help here. Even a smaller number of default workers like 2 or 4 would be a huge improvement IMO. I don't see a way to set the number of worker processes for the nginx process with the Frigate container at all; guidance would be appreciated.
Like Felipe described, a good fit would be for the S6 process to read the number of available CPUs from cgroups before starting nginx and update the nginx configuration accordingly. Created #10969 to capture this idea.
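A rough sketch of what such a startup step could look like, covering both cgroup v1 and v2; the paths and the final `sed` target are illustrative assumptions, not Frigate's actual init code:

```sh
#!/bin/sh
# Hypothetical pre-start hook: derive an nginx worker count from the
# container's cgroup CPU quota instead of the host's thread count.
workers=$(nproc)  # fallback: CPU threads visible to the container

if [ -f /sys/fs/cgroup/cpu.max ]; then
    # cgroup v2: "max 100000" means unlimited; "200000 100000" means 2 CPUs
    read -r quota period < /sys/fs/cgroup/cpu.max
    if [ "$quota" != "max" ]; then
        workers=$(( (quota + period - 1) / period ))  # round up
    fi
elif [ -f /sys/fs/cgroup/cpu/cpu.cfs_quota_us ]; then
    # cgroup v1: a quota of -1 means unlimited
    quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
    period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
    if [ "$quota" -gt 0 ]; then
        workers=$(( (quota + period - 1) / period ))  # round up
    fi
fi

[ "$workers" -lt 1 ] && workers=1

# Inject the computed value into the nginx config (path is illustrative).
sed -i "s/^worker_processes .*/worker_processes ${workers};/" \
    /usr/local/nginx/conf/nginx.conf
```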
It's actually even easier to use lxcfs in k8s than in docker. I use it every day at work: https://artifacthub.io/packages/helm/lxcfs-on-kubernetes/lxcfs-on-kubernetes
After installing the chart, all it takes is labeling the namespace. After that, all pods deployed in the namespace will benefit from /proc emulation.
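For reference, enabling that is roughly a one-liner once the chart is installed; the label key below is a placeholder, as the actual key/value is whatever the chart's admission webhook watches:

```sh
# Placeholder label key -- consult the lxcfs-on-kubernetes chart docs
# for the exact key/value the webhook matches on.
kubectl label namespace frigate lxcfs-on-kubernetes=enabled
```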