Version 0.9.0 works fine in my 1.8.6-gke.0 Kubernetes Engine clusters. However, when upgrading to 0.10.0, 0.10.1 or 0.10.2 the liveness and readiness probes fail. Curling the healthz endpoint returns the following error:
$ kubectl exec nginx-ingress-controller-7985b8c588-7755s -n ingress-nginx -- curl -v http://localhost:10254/healthz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 10254 (#0)
> GET /healthz HTTP/1.1
> Host: localhost:10254
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Fri, 26 Jan 2018 13:18:07 GMT
< Content-Length: 84
<
{ [84 bytes data]
* Curl_http_done: called premature == 0
100 84 100 84 0 0 15029 0 --:--:-- --:--:-- --:--:-- 16800
* Connection #0 to host localhost left intact
[+]ping ok
[-]nginx-ingress-controller failed: reason withheld
healthz check failed
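The probes themselves are unchanged from the stock deploy manifests and point at that same endpoint. Roughly, on the nginx-ingress-controller container (timing values here are illustrative, not copied from my cluster):

# probes on the nginx-ingress-controller container in the Deployment
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254
    scheme: HTTP
  initialDelaySeconds: 10
  timeoutSeconds: 1
readinessProbe:
  httpGet:
    path: /healthz
    port: 10254
    scheme: HTTP
  timeoutSeconds: 1

So as long as /healthz keeps returning 500, the pod never becomes ready and eventually gets restarted by the liveness probe.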
The logs from the nginx ingress controller don't show anything out of the ordinary: just some regular lines and then, after the liveness probe fails, some errors during shutdown.
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:26:25.357693 7 backend_ssl.go:68] adding secret ***/*** to the local store
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:26:25.358440 7 backend_ssl.go:68] adding secret ***/*** to the local store
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:26:25.359204 7 backend_ssl.go:68] adding secret ***/*** to the local store
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:42.815170 7 main.go:150] Received SIGTERM, shutting down
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:42.815382 7 nginx.go:321] shutting down controller queues
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:42.815421 7 nginx.go:329] stopping NGINX process...
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller 2018/01/26 13:28:42 [notice] 34#34: signal process started
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller 2018/01/26 13:28:42 [error] 34#34: open() "/run/nginx.pid" failed (2: No such file or directory)
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller nginx: [error] open() "/run/nginx.pid" failed (2: No such file or directory)
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:42.823090 7 main.go:154] Error during shutdown exit status 1
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:42.823122 7 main.go:158] Handled quit, awaiting pod deletion
nginx-ingress-controller-7985b8c588-7755s nginx-ingress-controller I0126 13:28:52.823284 7 main.go:161] Exiting with 1
The only changes versus the deployment scripts are that I have the ingress service set to externalTrafficPolicy: Cluster, added Cloudflare's source IPs as loadBalancerSourceRanges, and have the settings below in the nginx ingress configmap:
whitelist-source-range: "****"
forwarded-for-header: "X-Forwarded-For"
# trust internal ranges and cloudflare to provide client ip (https://www.cloudflare.com/ips-v4)
proxy-real-ip-cidr: "10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,${CLOUDFLARE_IP_RANGES}"
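For completeness, those keys live in the controller ConfigMap; wrapped in a full manifest it looks roughly like this (name and namespace assumed to match the standard deploy manifests, Cloudflare ranges left elided):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # name/namespace as in the standard deploy manifests
  namespace: ingress-nginx
data:
  whitelist-source-range: "****"
  forwarded-for-header: "X-Forwarded-For"
  # trust internal ranges and cloudflare to provide client ip (https://www.cloudflare.com/ips-v4)
  proxy-real-ip-cidr: "10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,${CLOUDFLARE_IP_RANGES}"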
After upgrading to version 0.11.0 it's no longer an issue, either because an underlying issue has since been fixed or because I increased resources in the meantime.
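"Increased resources" here means bumping the requests/limits on the controller container; the values below are only illustrative, not the exact ones used:

# on the nginx-ingress-controller container in the Deployment
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi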
BUG REPORT

NGINX Ingress controller version: 0.10.2 (also reproduced with 0.10.0 and 0.10.1)

Kubernetes version (use kubectl version): 1.8.6-gke.0

Environment: Google Kubernetes Engine

What happened:
Deploying version 0.10.0 fails to get the pods into a ready state. They get restarted whenever the liveness probe duration has passed. The curl output and controller logs are shown above.

What you expected to happen:
Nginx ingress to pass the liveness and readiness checks.

How to reproduce it (as minimally and precisely as possible):
No idea; this is a vanilla deployment following the steps at https://github.com/kubernetes/ingress-nginx/tree/nginx-0.10.2/deploy

Anything else we need to know:
The only changes versus the deployment scripts are the externalTrafficPolicy: Cluster setting and Cloudflare's source IPs added as loadBalancerSourceRanges on the ingress service (sketched below), plus the configmap settings shown above.
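A sketch of that service, showing only the relevant fields, with the Cloudflare ranges elided as in the configmap (name, namespace, ports and selector assumed to match the standard deploy manifests):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  # cloudflare ranges elided, see https://www.cloudflare.com/ips-v4
  loadBalancerSourceRanges:
    - ${CLOUDFLARE_IP_RANGES}
  # ports and selector unchanged from the standard service manifest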