Readiness and Liveness probe failed: HTTP probe failed with statuscode: 500 #2171
Comments
Seeing the same problem as above. However, I also see this message in the log: … Tested with 0.10.2 and 0.11.0.
I'm seeing the same issue; here are the logs with --v=10:
(notice it was stuck here for 5s, which is the livenessProbe.timeoutSeconds I configured)
Release: 0.10.2
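If the controller genuinely needs longer than the probe timeout to answer /healthz, one stopgap (not an official fix) is to loosen the probe timings on the controller container. A minimal sketch, assuming the stock /healthz probe on port 10254; the numbers are illustrative, not recommendations:

```yaml
# Excerpt of the controller container spec; only the probe timings change.
livenessProbe:
  httpGet:
    path: /healthz        # health endpoint exposed by the controller
    port: 10254
    scheme: HTTP
  initialDelaySeconds: 30 # give the controller longer to sync before the first check
  timeoutSeconds: 10      # raised from the 5s mentioned above
  periodSeconds: 10
  failureThreshold: 5
```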
I am seeing the same issue with 0.14.0 as well.
Having the same issue with 0.15.0.
Same issue with 0.14.0 and 0.15.0, but not 0.9.0.
Having the same issue with 0.9.0, 0.10.0, and 0.15.0, using Kubernetes version 1.8.11.
Having the same issue with 0.14.0, Kubernetes version 1.8.4.
Same issue with 0.15.0.
@keslerm can you update your image to current master?
@aledbf I built the image from master and that did the trick; it looks good now. Anything I can provide that might help?
Closing. Please update to 0.16.0.
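For anyone who installed from a plain manifest rather than Helm, bumping the image tag in place is one way to pick up the newer release; a sketch, where the namespace and deployment name are assumptions you should verify with kubectl get deploy --all-namespaces:

```sh
# Point the controller container at the 0.16.0 image (names are placeholders).
kubectl -n ingress-nginx set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.0
# Watch the rollout and the new pod's probes.
kubectl -n ingress-nginx rollout status deployment/nginx-ingress-controller
```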
Hi! I am having the same issues with 0.24.0.
@michaelkunzmann-sap if the log ends there it means the pod cannot reach the apiserver.
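One way to test that hypothesis is to check whether the pod can reach the in-cluster API endpoint at all; a rough sketch, with the pod name as a placeholder (if curl is not available in the controller image, run the same check from a throwaway debug pod on the same node):

```sh
# Hit the API server's health endpoint from inside the controller pod.
kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxx -- \
  curl -k -sS https://kubernetes.default.svc/healthz
```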
I have the same problem and just solved it. In my case, I deleted the Ingress resources that reference nginx ingress, then deleted the nginx-ingress-controller and reinstalled it. That finally succeeded, and it no longer reports unhealthy.
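For reference, the delete-and-reinstall workaround described above looks roughly like this with the stable Helm chart of that era (Helm 2 syntax; release and Ingress names are placeholders):

```sh
# Remove the Ingress objects that reference the controller (repeat per Ingress).
kubectl delete ingress my-app-ingress
# Remove the controller release entirely, then install it again.
helm delete --purge nginx-ingress            # on Helm 3: helm uninstall nginx-ingress
helm install stable/nginx-ingress --name nginx-ingress --namespace ingress-nginx
```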
I'm having the same issue with version 0.25.
I have a similar issue with ingress-nginx. Do you mind sharing your configuration that is working?
I'm having the same issues with my minikube, with nginx-ingress-controller 0.25; as the subject states, it's a 500 status code. From the "describe pod" command:

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  17m                   default-scheduler  Successfully assigned ingress-nginx/nginx-ingress-controller-79f6884cf6-qj65t to minikube
  Normal   Started    17m (x2 over 17m)     kubelet, minikube  Started container nginx-ingress-controller
  Warning  Unhealthy  16m (x6 over 17m)     kubelet, minikube  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    16m (x2 over 17m)     kubelet, minikube  Container nginx-ingress-controller failed liveness probe, will be restarted
  Normal   Pulled     16m (x3 over 17m)     kubelet, minikube  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1" already present on machine
  Normal   Created    16m (x3 over 17m)     kubelet, minikube  Created container nginx-ingress-controller
  Warning  Unhealthy  7m40s (x35 over 17m)  kubelet, minikube  Readiness probe failed: HTTP probe failed with statuscode: 500
  Warning  BackOff    2m43s (x44 over 12m)  kubelet, minikube  Back-off restarting failed container

The nginx-ingress-controller pod also went into status CrashLoopBackOff (I guess from too many failures):

NAME                                       READY  STATUS            RESTARTS  AGE
nginx-ingress-controller-79f6884cf6-qj65t  0/1    CrashLoopBackOff  11        28m
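When the pod ends up in CrashLoopBackOff like this, the logs of the previous container instance usually show why /healthz was returning 500; a sketch using the pod name from the events above:

```sh
# Logs from the container that the kubelet killed after the failed liveness probe.
kubectl -n ingress-nginx logs nginx-ingress-controller-79f6884cf6-qj65t --previous
# Recent events and restart count for the same pod.
kubectl -n ingress-nginx describe pod nginx-ingress-controller-79f6884cf6-qj65t
```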
Any progress here? We have the same problem with 0.26.1. The nginx config looks good.
Possibly related to #3993. Eventually we fixed this by upgrading the nodes to 1.14.7-gke.10. After that the …
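For anyone on GKE wanting to try the same node upgrade, it is done per node pool; a rough sketch with cluster, pool, and zone names as placeholders:

```sh
# Upgrade the nodes in one pool to the version mentioned above (all names are placeholders).
gcloud container clusters upgrade my-cluster \
  --node-pool=default-pool \
  --cluster-version=1.14.7-gke.10 \
  --zone=us-central1-a
```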
I am also getting this issue:

Events:
  Normal  Scheduled  13m  default-scheduler  Successfully assigned jenkins/nginx-ingress-controller-6d9c6d875b-8h98z to ip-192-168-150-176.ec2.internal

I am using the quay.io/kubernetes-ingress-controller/nginx-ingress-controller image.
Delete whatever references the Ingress, then delete the pod and reinstall it; that resolves it.
NGINX Ingress controller version: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0, installed with Helm using the stable chart.
Kubernetes version (use kubectl version): 1.8.4
Environment:
What happened: The nginx-ingress-controller pod's Readiness and Liveness probes fail: "HTTP probe failed with statuscode: 500". The pod is terminated and restarted. This happens 2-5 times until it starts successfully.
What you expected to happen: The pod to start successfully without failing the Readiness and Liveness probes.
How to reproduce it (as minimally and precisely as possible): We are running the nginx-ingress-controller as a DaemonSet, so whenever a new node is created we see this problem.
Anything else we need to know: This issue has been opened before:
Here are the events from the nginx-ingress-controller pod:
Here is the default probe config:
Here are the Helm chart values we use: https://gist.github.com/max-rocket-internet/ba6b368502f58bc7061d3062939b5dca
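If the probes only fail while the controller is still warming up, the stable chart can also relax the probe timings through values rather than editing the DaemonSet by hand. A hedged sketch; the key names are assumed from the stable/nginx-ingress chart and the numbers are illustrative, so check the chart's values.yaml for your version before using this:

```yaml
# values.yaml excerpt (keys assumed; verify against your chart version)
controller:
  livenessProbe:
    initialDelaySeconds: 30
    timeoutSeconds: 10
    failureThreshold: 5
  readinessProbe:
    initialDelaySeconds: 30
    timeoutSeconds: 10
    failureThreshold: 5
```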
I have logs from the pod with the --v=10 argument set, but there is a lot of output and some of it is sensitive. Here is an excerpt, but let me know if you need more:
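For completeness, the --v=10 verbosity mentioned above is just an extra argument on the controller container; a sketch of the relevant excerpt, where the other arguments are assumptions that should match your existing manifest:

```yaml
# Container args excerpt, not a complete manifest.
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.11.0
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-backend  # assumption: match your install
      - --configmap=$(POD_NAMESPACE)/nginx-configuration            # assumption: match your install
      - --v=10   # very verbose; revert after collecting logs
```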