HTTPS via ELB termination broken in 0.22 #3690
Comments
@jaredstehler without digging deeper I'd first consider this breaking change introduced in
The comment is not complete as I suggest you try
in the configmap. NOTE that there was a reason why we disable trusting forwarded headers by default. Trusting all the headers imposes potential security issues. I suggest you review at least what you set to |
I can confirm that if you're upgrading to
I missed this detail and I'd like to suggest that we update the changelog to warn users that have their ingress behind an ELB or L7 load balancer. The current documentation on
|
BTW, that change was actually introduced in 0.19.0 via #2616. It just didn't get partially documented until 0.22.0. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/remove-lifecycle rotten |
it's clear from the documentation how we should solve cases of L7 ELB |
Passing of the x-forwarded-proto=https from ELB to applications appears to still be broken. I downgraded from 0.25 to 0.21 to get this working. Config files:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  name: modest-garfish-nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
```

```yaml
apiVersion: v1
data:
  proxy-real-ip-cidr: 10.0.0.0/16
  use-forwarded-headers: "true"
  use-proxy-protocol: "false"
kind: ConfigMap
metadata:
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
```

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: app1
  namespace: default
spec:
  rules:
  - host: xxxxxxxxxxx.com
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
```
|
It is working for me with

```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: kube-ingress
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    # Enable Cross Zone
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:xxxxxxxxxxx:certificate/yyyyyyyyyyyyyy"
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
    # Avoid 400 The plain HTTP request was sent to HTTPS port
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
```

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-dev
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "256k"
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
spec:
  rules:
  - host: zzzzzz.ttttttttttt.com
    http:
      # [...]
```
|
adding
|
@jaredstehler: It's working, man, thanks for uploading the configmap. |
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
NGINX Ingress controller version:
0.22
Kubernetes version (use `kubectl version`): v1.10.12
Environment (`uname -a`): 4.4.148-k8s
What happened:
After upgrading from ingress controller 0.21 to 0.22, https recognition was broken on my ingresses.
Http to https redirect was still happening correctly, but when hitting the https endpoint, it was also just redirecting to the same url. To me this seems like nginx is improperly detecting https inbound as http.
I am configuring it with SSL termination at AWS ELB:
and ingress:
What you expected to happen:
request to https work correctly, and not simply loop to same url.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
downgrading to 0.21 resolved this issue for me.
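The redirect loop reported above follows directly from how the scheme is attributed when TLS is terminated at the ELB: the hop into nginx is plain HTTP, so with `use-forwarded-headers` off the `force-ssl-redirect` annotation sends the browser back to the same https URL forever, while with it on the `X-Forwarded-Proto: https` set by the ELB is believed. The script below is an added example written for this thread, not code from the issue or from the ingress-nginx project. It is a simplified model of that decision, not the controller's nginx template; the CIDR gate on forwarded headers is an assumption of the model that mirrors the intent behind `proxy-real-ip-cidr` (only the load balancer should be able to assert forwarded values), and the IPs and hostname are constructed.

```python
import ipaddress
from dataclasses import dataclass, field


@dataclass
class Request:
    """One HTTP hop as seen by the ingress controller (constructed sample data)."""
    peer_ip: str                       # TCP peer that connected to nginx
    conn_scheme: str                   # scheme of that hop: "http" when the ELB terminated TLS
    host: str
    path: str
    headers: dict = field(default_factory=dict)


def effective_scheme(req, use_forwarded_headers, trusted_cidrs=()):
    """Scheme the controller attributes to the original client.

    use_forwarded_headers=False: X-Forwarded-Proto is ignored and the scheme of
    the hop into nginx is used (plain http behind a TLS-terminating ELB).
    use_forwarded_headers=True: X-Forwarded-Proto is honoured. The CIDR gate is
    an assumption of this model (hypothetical hardening in the spirit of
    proxy-real-ip-cidr), not a guarantee about the real nginx template.
    """
    if not use_forwarded_headers:
        return req.conn_scheme
    if trusted_cidrs:
        peer = ipaddress.ip_address(req.peer_ip)
        if not any(peer in ipaddress.ip_network(c) for c in trusted_cidrs):
            return req.conn_scheme  # untrusted peer: its forwarded headers are not believed
    xfp = req.headers.get("x-forwarded-proto", "").lower()
    return xfp if xfp in ("http", "https") else req.conn_scheme


def handle(req, use_forwarded_headers, trusted_cidrs=(), force_ssl_redirect=True):
    """Return (status, location): a 308 to the https URL when force-ssl-redirect
    applies and the attributed scheme is not https, otherwise 200."""
    scheme = effective_scheme(req, use_forwarded_headers, trusted_cidrs)
    if force_ssl_redirect and scheme != "https":
        return 308, f"https://{req.host}{req.path}"
    return 200, None


ELB_IP = "10.0.1.5"            # constructed: an ELB node inside proxy-real-ip-cidr 10.0.0.0/16
TRUSTED = ("10.0.0.0/16",)     # matches the proxy-real-ip-cidr value posted in this thread


def via_elb(url_scheme, host, path):
    """Model the ELB hop: TLS terminated, backend protocol http, and
    X-Forwarded-Proto carrying the scheme the client actually used."""
    return Request(ELB_IP, "http", host, path, {"x-forwarded-proto": url_scheme})


def client_session(use_forwarded_headers, trusted_cidrs=(), max_hops=5):
    """Follow redirects as a browser would, starting at https://host/.
    Returns "ok" on a 200, or "redirect-loop" if the same URL comes back."""
    url = ("https", "app1.example.test", "/")   # constructed hostname
    seen = set()
    for _ in range(max_hops):
        if url in seen:
            return "redirect-loop"
        seen.add(url)
        status, location = handle(via_elb(*url), use_forwarded_headers, trusted_cidrs)
        if status == 200:
            return "ok"
        # turn "https://host/path" back into (scheme, host, path) for the next hop
        scheme, rest = location.split("://", 1)
        host, _, path = rest.partition("/")
        url = (scheme, host, "/" + path)
    return "redirect-loop"


if __name__ == "__main__":
    print("use-forwarded-headers false:", client_session(False))           # redirect-loop
    print("use-forwarded-headers true :", client_session(True, TRUSTED))   # ok
```

The first run reproduces the 0.22 symptom (https requests bouncing back to the same URL); the second shows the fix from the configmap above. The model also makes the warning earlier in the thread concrete: a request that reaches nginx from outside the trusted range carrying its own `X-Forwarded-Proto` keeps the hop's real scheme, which is why the listener should only be reachable through the load balancer when forwarded headers are trusted.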