
Path redirection appends port after the hostname #5222

Closed
kponichtera opened this issue Mar 6, 2020 · 8 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@kponichtera

NGINX Ingress controller version: 0.30.0

Kubernetes version (use kubectl version): 1.17.0

Environment:

  • Cloud provider or hardware configuration: Bare metal, amd64
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-1062.4.1.el7.x86_64
  • Install tools: Rancher Kubernetes Engine (RKE)
  • Others:

What happened:

I've updated the nginx Ingress controller Helm Chart from version 1.25.0 (controller version 0.26.1) to version 1.33.4 (controller version 0.30.0). My ingress controller is deployed as a DaemonSet with host ports 30080 and 30443.

With my Ingress exposed on the path /app and the host myapp.com, all requests to https://myapp.com/app result in a 302 response redirecting to https://myapp.com/app/. Note the trailing slash, and that there was no redirection from HTTP to HTTPS, because the latter was already used to make the request. Starting with the new version, the 302 response tries to redirect me to https://myapp.com:30443/app/. This is the problem, because the website isn't actually exposed on that port to the internet - 30443 is the internal port we use between the controller instances and the L4 load balancer (diagram and description in the section below). Accessing the Ingress-exposed pages without triggering the redirect (e.g. by opening a link that already has the trailing slash) works without any problem.

What you expected to happen:

Not having the port appended to the Location header of my redirect response, especially since #3787 went live back in 0.23.0 and I didn't explicitly configure that behavior. To be fair, I can't even find this option anywhere now - it no longer seems to be a command-line argument of the Ingress controller, unless I looked it up wrong.

I've checked the changelogs between 0.26.1 and 0.30.0, as well as the Helm Chart's change history, and couldn't find any change that would explain this behavior, hence the bug report instead of a question.

How to reproduce it:

  1. Deploy the Ingress Controller Helm Chart version 1.33.4 on minikube with the same values as in the section below (using host ports 30080 and 30443 for HTTP and HTTPS respectively).
  2. Deploy a single-instance nginx locally (e.g. on Docker) to act as an L4 load balancer for minikube, with a configuration similar to this one:
stream {
  upstream ingress_http {
    least_conn;
    server <MINIKUBE_IP>:30080 max_fails=2 fail_timeout=3s;
  }
  upstream ingress_https {
    least_conn;
    server <MINIKUBE_IP>:30443 max_fails=2 fail_timeout=3s;
  }
  server {
    listen         80;
    proxy_pass     ingress_http;
    proxy_protocol on;
  }
  server {
    listen         443;
    proxy_pass     ingress_https;
    proxy_protocol on;
  }
}
  3. Deploy a simple application serving a static webpage, as well as an Ingress with the /app path.
  4. Try to open the page https://localhost/app

Anything else we need to know:

My Helm Chart configuration is as follows:

controller:
  ingressClass: "nginx-public"
  kind: DaemonSet
  service:
    enabled: false
  daemonset:
    useHostPort: true
    hostPorts:
      http: 30080
      https: 30443
  config:
    use-proxy-protocol: "true"
  extraArgs:
    disable-catch-all: "true"
    default-ssl-certificate: "ingress-nginx/ingress-default-cert"
  podLabels:
    ingress: "public"
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true

My Ingress resource looks as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  labels:
    app: myapp
    heritage: Helm
    release: myapp
    chart: myapp-1.0.0
  annotations:
    kubernetes.io/ingress.class: "nginx-public"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  rules:
    - host: "myapp.com"
      http:
        paths:
          - path: /app
            backend:
              serviceName: myapp
              servicePort: 8000

Here is a brief overview of how the network is laid out. Users make requests to the firewall, which forwards ports 80 and 443 to ports 30080 and 30443 of the internal load balancer respectively. The load balancer works on network layer 4 (TCP) with the PROXY protocol in order to preserve the users' IP addresses, and proxies the requests to the Kubernetes workers, which run the ingress controller as a DaemonSet with ports 30080 and 30443 exposed as host ports.

[Network diagram: user → firewall (80/443 forwarded to 30080/30443) → L4 load balancer (PROXY protocol) → ingress controller DaemonSet on host ports 30080/30443]

/kind bug

@kponichtera kponichtera added the kind/bug Categorizes issue or PR as related to a bug. label Mar 6, 2020
@aledbf
Member

aledbf commented Mar 7, 2020

Not having the port appended to the Location header of my redirect response, especially since #3787 went live back in 0.23.0 and I didn't explicitly configure that behavior

This is disabled by default. The annotation to enable this is use-port-in-redirects: "true".
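
On an Ingress resource that would look roughly like this (an untested sketch, reusing the Ingress from this issue; only the annotation line is new):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx-public"
    # opt-in: include the port in redirects generated by nginx (off by default)
    nginx.ingress.kubernetes.io/use-port-in-redirects: "true"
spec:
  rules:
    - host: "myapp.com"
      http:
        paths:
          - path: /app
            backend:
              serviceName: myapp
              servicePort: 8000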

@aledbf
Member

aledbf commented Mar 7, 2020

Keep in mind that the annotation is related to redirects created by nginx, not your application.

@kponichtera
Author

Thank you for the information about the annotation; I will apply it to my Ingresses once I get back to my PC and check it with the latest controller version.

The redirect is not done by my application - it exposes port 8000 and has no idea about the presence of 30080 and 30443 on the ingress controller. In fact, on the old controller version, after reaching /app/ it does its own redirect to /app/login and doesn't append the port in the process. It's the redirect from /app to /app/ which does so.

What puzzles me is the port being appended after the hostname even though I didn't configure the use-port-in-redirects option anywhere, as you can see in the posted configuration snippets. Indeed, according to the controller's source code, the option should be false by default:

// Enables or disables the specification of port in redirects
// Default: false
UsePortInRedirects bool `json:"use-port-in-redirects"`
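
For completeness, the equivalent controller-wide setting would presumably go through the Helm chart's controller.config block (the same block that already carries use-proxy-protocol in my values above), assuming the chart forwards these keys into the controller ConfigMap - a sketch:

controller:
  config:
    use-proxy-protocol: "true"
    # explicit default: don't include the port in nginx-generated redirects
    use-port-in-redirects: "false"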

@kponichtera
Author

Not having the port appended to the Location header of my redirect response, especially since #3787 went live back in 0.23.0 and I didn't explicitly configure that behavior

This is disabled by default. The annotation to enable this is use-port-in-redirects: "true".

I've added the nginx.ingress.kubernetes.io/use-port-in-redirects annotation, set it to false explicitly on the Ingress resource, and upgraded the Ingress Controller to 0.30.0 with the latest Helm Chart version 1.33.5, and it didn't help - the port is still appended on the redirection from /app to /app/.

@kponichtera
Author

I've managed to track down and solve the problem. I didn't mention that our load balancer is nginx in L4 mode (stream {} block), deployed with Docker Swarm with the container instances in host network mode so as not to lose the end user's IP address.

The relevant part of the nginx.conf file looked as follows:

stream {

  # upstream configurations

  server {
    listen         30080;
    proxy_pass     public_ingress_http;
    proxy_protocol on;
  }

  server {
    listen         30443;
    proxy_pass     public_ingress_https;
    proxy_protocol on;
  }

}

The swarm deployment YAML file was:

version: '3.7'

services:
  public:
    image: nginx:1.17.9-alpine
    deploy:
      replicas: 2
    ports:
      - published: 30080
        target: 30080
        protocol: tcp
        mode: host
      - published: 30443
        target: 30443
        protocol: tcp
        mode: host
    configs:
      - source: public-config
        target: /etc/nginx/nginx.conf

Notice how the nginx instances listen on ports 30080 and 30443, which are then published on the same ports on the load balancer machines. I fixed the redirect problem by making the nginx instances listen on the traditional ports 80 and 443 while Swarm takes care of publishing them as 30080 and 30443 (we couldn't publish 80 and 443 because they're used by another load balancer deployment running on the same Swarm):

server {
  listen         80;
  proxy_pass     public_ingress_http;
  proxy_protocol on;
}

server {
  listen         443;
  proxy_pass     public_ingress_https;
  proxy_protocol on;
}
ports:
  - published: 30080
    target: 80
    protocol: tcp
    mode: host
  - published: 30443
    target: 443
    protocol: tcp
    mode: host

I'm sorry for the confusion; I didn't expect this part of the network stack to cause a problem. I hope my finding will help somebody who stumbles into a similar issue due to juggling ports around the environment like I did. :)

However, that finding doesn't explain why those wild redirects started to appear after upgrading the Ingress controller to 0.30.0 - neither the version nor the configuration of the L4 load balancer changed at that point, and all LB instances kept the old settings, exposing ports 30080/30443, yet on controller version 0.26.1 the redirect didn't occur. Adding that piece to the puzzle, any idea what the reason could be?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 29, 2020
@aledbf
Member

aledbf commented Jul 29, 2020

Closing. You need to disable port_in_redirect in your nginx; it's enabled by default:
http://nginx.org/en/docs/http/ngx_http_core_module.html#port_in_redirect
(this is what use-port-in-redirects does in ingress-nginx)
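
In plain nginx terms, turning the directive off would look roughly like this (a sketch; the directive belongs to ngx_http_core_module, so it applies in http/server/location contexts rather than the stream {} blocks shown earlier, and the upstream name here is just a placeholder):

http {
  # hypothetical backend; name and address are placeholders
  upstream myapp_backend {
    server 127.0.0.1:8000;
  }

  server {
    listen 30080 proxy_protocol;
    server_name myapp.com;

    # don't add the listening port to redirects that nginx itself generates
    port_in_redirect off;

    location /app/ {
      proxy_pass http://myapp_backend;
    }
  }
}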

@aledbf aledbf closed this as completed Jul 29, 2020
@aledbf aledbf added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jul 29, 2020