Fails to add status to ingress after running "for some time" #3180
@cjohansen if you see this behavior again, please update the issue running:
and post the output of:
This can help us narrow down where this issue is being triggered.
Thanks! I haven't seen this again, but I still have one of the old (presumed "bad") pods running. Would it be any use to get the output of this now?
Yes please :)
http://localhost:10254/debug/pprof/
http://localhost:10254/debug/pprof/goroutine?debug=1
http://localhost:10254/debug/pprof/block?debug=1
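A minimal sketch (not from the thread) of how those dumps could be collected. It only prints the curl commands it would run; reaching port 10254 on localhost is an assumption (e.g. via kubectl port-forward into the controller pod).

```shell
# Hypothetical helper: print the curl commands that would save each pprof
# dump to a file. Port 10254 being reachable locally is an assumption
# (e.g. kubectl port-forward into the nginx-ingress-controller pod).
base="http://localhost:10254/debug/pprof"
for path in "goroutine?debug=1" "block?debug=1"; do
  # strip the query string to build the output file name
  echo "curl -sS '$base/$path' -o '${path%%\?*}.txt'"
done
```

Dropping the echo (or piping the output to sh) would actually fetch the dumps once the port is forwarded.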
I also had to restart the nginx-ingress-controller. Previously it was cycling through reloads and not finding any active Endpoints, as described above. This was on a fresh install on Docker for Mac, following the installation steps on your Deploy page (mandatory and cloud-provider YAMLs). Unfortunately I restarted it before reading your request for debug information.
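As general context (not a claim about this bug, which a restart fixed), one common reason a controller reports no active Endpoints is a Service selector that doesn't match the pod labels. A minimal sketch of the matching rule, with made-up labels:

```shell
# Hypothetical illustration: a Service's Endpoints stay empty unless every
# key=value pair in its selector is present in the pod's labels.
selector="app=web,tier=frontend"   # made-up Service selector
pod_labels="app=web"               # pod is missing the "tier" label
match=true
old_ifs=$IFS
IFS=','
for kv in $selector; do
  case ",$pod_labels," in
    *",$kv,"*) ;;            # this selector entry is satisfied
    *) match=false ;;        # any missing entry excludes the pod
  esac
done
IFS=$old_ifs
echo "$match"                # false: this pod would not back the Service
```

In practice `kubectl describe svc <name>` and `kubectl get pods --show-labels` surface the same comparison.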
NGINX Ingress controller version: 0.18.0
Kubernetes version (use kubectl version): 1.10.3
Environment:
Kernel (e.g. uname -a): 4.4.121-k8s
What happened:
It's about a month since I set up Kubernetes with the ingress controller. At the time I created a few services, and everything worked as expected. I sat down to deploy a new service today and noticed that no load balancer was connected to the ingress. I checked the ingress controller's logs and found:
Checked kubectl get ep and saw:

Double- and triple-checked my template, deleted all the resources, and retried. Unfortunately I am not entirely sure whether I made any changes that matter at this point. Still no luck. The log now reads:
I eventually decided that this simply was not correct, so I decided to try to kill one of two ingress pods. Shortly after the new one booted up, the ingress was properly configured. The new pod's log now reads:
What you expected to happen:
Not needing to reboot the nginx ingress controller in order for my ingresses to be correctly configured.
How to reproduce it (as minimally and precisely as possible):
Unfortunately I do not know. Possibly: leave the ingress controller running for 20+ days without any new events to respond to, then try to create a new service+ingress.
Anything else we need to know:
I'm sorry for reporting something that is probably very hard to follow up on. I'm not 100% sure this wasn't some sort of mistake on my part, but I'm reporting it anyway, since rebooting one of the ingress-nginx pods fixed the problem, which was somewhat surprising. I'm hoping someone who knows the internals better might have a eureka moment from this report...
I still have one of the old pods running, and would happily provide more logs etc if needed.