
HTTPS via ELB termination broken in 0.22 #3690

Closed
jaredstehler opened this issue Jan 22, 2019 · 12 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@jaredstehler

NGINX Ingress controller version:
0.22

Kubernetes version (use kubectl version):
v1.10.12

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): 4.4.148-k8s
  • Install tools:
  • Others:

What happened:
After upgrading the ingress controller from 0.21 to 0.22, HTTPS detection broke on my ingresses.

The HTTP-to-HTTPS redirect still happened correctly, but hitting the HTTPS endpoint also just redirected to the same URL, producing a redirect loop. This looks like nginx incorrectly detecting inbound HTTPS traffic as HTTP.

I am configuring it with SSL termination at AWS ELB:

    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "${CLUSTER_SSL_CERT_ARN}"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Map port 443
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
    # NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
    # increased to '3600' to avoid any potential issues.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"

    # make aws elb internal
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http

and ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  labels:
    app: spin
  name: spin-deck
  namespace: spinnaker
spec:
  rules:
  - host: spinnaker.${NAME}
    http:
      paths:
      - backend:
          serviceName: spin-deck
          servicePort: 9000
        path: /

What you expected to happen:
Requests to the HTTPS endpoint should work correctly, not simply loop back to the same URL.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
Downgrading to 0.21 resolved this issue for me.

@ElvinEfendi
Member

ElvinEfendi commented Jan 22, 2019

@jaredstehler without digging deeper I'd first consider this breaking change introduced in 0.22.0:

By default do not trust any client to extract true client IP address from X-Forwarded-For header using realip module (use-forwarded-headers: "false")

The changelog entry is incomplete: use-forwarded-headers is not only about the X-Forwarded-For header but also about X-Forwarded-Scheme. That means ingress-nginx will now ignore those headers by default.

I suggest you try

use-forwarded-headers: "true"

in the configmap.

NOTE that there was a reason we disabled trusting forwarded headers by default: trusting all of them poses potential security issues. Before enabling it again, I suggest you at least review what you set in proxy-real-ip-cidr (this will at least let you avoid client IP spoofing).
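Putting both suggestions together, a minimal ConfigMap sketch might look like the following. The name, namespace, and CIDR here are assumptions: the ConfigMap must match whatever your controller is started with via its --configmap flag, and the CIDR should be the address range your ELB sends traffic from.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # assumed name/namespace; must match the controller's --configmap flag
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # trust the X-Forwarded-* headers set by the ELB
  use-forwarded-headers: "true"
  # assumed VPC CIDR; restricts which sources' forwarded headers are trusted
  proxy-real-ip-cidr: "10.0.0.0/16"
```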

@zacblazic

I can confirm that if you're upgrading to 0.22.0 and are using an ELB with SSL termination, without setting use-forwarded-headers: "true", ingress will be broken due to the lack of the X-Forwarded-Proto header.

By default do not trust any client to extract true client IP address from X-Forwarded-For header using realip module (use-forwarded-headers: "false")

The changelog entry is incomplete: use-forwarded-headers is not only about the X-Forwarded-For header but also about X-Forwarded-Scheme. That means ingress-nginx will now ignore those headers by default.

I missed this detail and I'd like to suggest that we update the changelog to warn users that have their ingress behind an ELB or L7 load balancer.

The current documentation on use-forwarded-headers is pretty clear:

If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.

@timdorr

timdorr commented Feb 8, 2019

BTW, that change was actually introduced in 0.19.0 via #2616. It just wasn't documented, and then only partially, until 0.22.0.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 8, 2019
@timdorr

timdorr commented Jun 8, 2019

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 8, 2019
@assafShapira

It's clear from the documentation how to solve the L7 ELB case, but what about an L4 ELB?
Is setting use-forwarded-headers to "true" relevant in that case as well?

@gsf
Contributor

gsf commented Aug 12, 2019

Passing x-forwarded-proto=https from the ELB to applications still appears to be broken. I downgraded from 0.25 to 0.21 to get this working. Config files:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxxxxx:certificate/xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  name: modest-garfish-nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
---
apiVersion: v1
data:
  proxy-real-ip-cidr: 10.0.0.0/16
  use-forwarded-headers: "true"
  use-proxy-protocol: "false"
kind: ConfigMap
metadata:
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: app1
  namespace: default
spec:
  rules:
  - host: xxxxxxxxxxx.com
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80

@danigosa

It is working for me with 0.25.1 with:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx  
  namespace: kube-ingress
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  annotations:
    # Enable Cross Zone
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:xxxxxxxxxxx:certificate/yyyyyyyyyyyyyy"
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
    # Avoid 400 The plain HTTP request was sent to HTTPS port
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-dev
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "256k"
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
spec:
  rules:
  - host: zzzzzz.ttttttttttt.com
    http:
# [...]


@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2019
@jaredstehler
Author

Adding use-forwarded-headers: "true" to the ConfigMap fixed the issue for me.

nginx-patch-configmap-l7.yaml:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
---

@techs2resolve

@jaredstehler: It's working, thanks for posting the ConfigMap.
