Ingress nginx OOM #4703
Comments
I checked the profiler and found that metrics collection is a possible source of the problem. Regards,
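(For anyone who wants to repeat that check: the controller exposes Go pprof endpoints when started with `--profiling`, which is enabled by default. The sketch below assumes the default status port 10254; the exact port and path may differ between versions.)

```sh
# Forward the controller pod's status port locally (10254 is an assumption;
# adjust to whatever --healthz-port your deployment uses)
kubectl -n ingress-nginx port-forward <controller-pod> 10254:10254

# Inspect the heap of the Go controller process (not the nginx workers)
go tool pprof -top http://127.0.0.1:10254/debug/pprof/heap
```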
There was indeed an increase in memory usage after we upgraded to 0.26.1. Nginx pods are consuming 700-800 Mi on average with 0 qps.
I'm getting sudden timeouts when nginx-ingress has been running for a few days (6-7), with no apparent error in the logs, as if the requests were not being processed at all. This behaviour started to show up after upgrading to 0.26.1. I rolled back to version 0.24.1 and everything works smoothly. Not sure how I can provide data/information that would allow you to debug this.
Having the same big issue right now in PROD.
Having the same issue with 0.24.1 :( It doesn't always happen: memory randomly climbs by several GB within ~30 minutes, then the server collapses and stabilizes again.
Please test
Hello, Regards,
@aledbf In my case I'm still having the same issue :( As soon as I telnet to a specific port, nginx suddenly starts looping, saying the port is not reachable, and goes OOM after a few minutes. And yes, even if I close the connection it keeps logging that the port is not reachable, and I have to kill the pod manually.
Hello, confirmed: no memory problems. Regards,
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.): already asked in slack channel - no answer
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.): memory, OOM, nginx, nginx-ingress
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
NGINX Ingress controller version: 0.26.1
Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kernel (uname -a): 4.15.0-65-generic
What happened: memory starts leaking and after a few hours the container is killed by the OOM killer
What you expected to happen: no memory leaks
How to reproduce it (as minimally and precisely as possible): ~10-15k RPS
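A rough way to generate comparable load against the ingress (a sketch only: the hostname, thread and connection counts are placeholders, and it assumes the wrk benchmarking tool is available):

```sh
# Sustained load test; tune -t/-c until the reported throughput is in the 10-15k RPS range
wrk -t8 -c400 -d30m https://<ingress-hostname>/
```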
Anything else we need to know:
The main process begins to use more and more memory until it is killed by the OOM killer. I added a location to check Lua garbage collection (#3314 (comment)); it shows 1-5 MB. No errors or warnings were observed in the nginx log.
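For context, the GC check referenced above is roughly a location like the one below, added through a server-snippet or http-snippet (a minimal sketch of the approach from #3314; the location path is arbitrary):

```nginx
# Hypothetical debug location; collectgarbage("count") reports the memory
# currently held by the Lua VM, in kilobytes.
location /lua-gc {
    content_by_lua_block {
        ngx.say(string.format("Lua GC: %.1f KB", collectgarbage("count")))
    }
}
```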