
Session affinity using cookie does not work when "path" is not set in the Ingress - Wrong generated Nginx configuration #1980

Closed
sylmarch opened this issue Jan 25, 2018 · 13 comments · Fixed by #2244
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@sylmarch

NGINX Ingress controller version: 0.10.0

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Bare metal, Kubernetes was installed using Rancher v1.6
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:
Session affinity using a cookie does not work, even when these 3 annotations are set on the Ingress:

  • nginx.ingress.kubernetes.io/affinity: "cookie"
  • nginx.ingress.kubernetes.io/session-cookie-name: "route"
  • nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"

I think the Nginx configuration, generated from the Ingress resource, is incorrect (see below).

What you expected to happen:
A cookie "route" should be set in the response to the 1st request (which isn't the case).
Then, all subsequent requests should send this cookie.
The IngressController should then forward all these requests to the same backend pod.

How to reproduce it (as minimally and precisely as possible):

Here is a complete configuration.

  1. Create an echo service:
#######################################################################################################################
# Deployment with at least 2 pods used as backend servers.
#######################################################################################################################
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: echo-server
spec:
  # at least 2 backends to test sticky session
  replicas: 2
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
      - name: echo-server
        image: gcr.io/google_containers/echoserver:1.8
        ports:
          - containerPort: 8080

#######################################################################################################################
# Service to access backend pods
#######################################################################################################################

---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  # service only expose internally. Using an Ingress to access it.
  type: ClusterIP
  ports:
    - name: http
      port: 8080

  selector:
    app: echo-server
  2. Create the IngressController and all other mandatory resources:
# Official IngressController based on Nginx
# https://github.com/kubernetes/ingress-nginx/blob/0.10.0/deploy/README.md

#######################################################################################################################
# Create namespace
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml
#######################################################################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx


#######################################################################################################################
# Create default backend deployment and service
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml
#######################################################################################################################
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend


#######################################################################################################################
# Create ConfigMap with Nginx configuration.
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

#######################################################################################################################
# Create ConfigMap with Nginx configuration for TCP services.
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx


#######################################################################################################################
# Create ConfigMap with Nginx configuration for UDP services.
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml
#######################################################################################################################
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx


#######################################################################################################################
# Create IngressController without RBAC (= Role Based Access Control).
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml
#######################################################################################################################
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      initContainers:
      - command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: sysctl
        securityContext:
          privileged: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            # only process Ingress annotated with this class
            - --ingress-class=nginx
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1


#######################################################################################################################
# Expose the IngressController as a "NodePort" service.
# Statically set the published ports (nodePort attributes): 30080 for HTTP / 30443 for HTTPS.
# Source : https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
#######################################################################################################################
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    # publish HTTP on port 30080
    nodePort: 30080
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    # publish HTTPS on port 30443
    nodePort: 30443
    protocol: TCP
  selector:
    app: ingress-nginx
  3. Create the Ingress to access the echo service through this IngressController:
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"

spec:
  rules:
    - host: echo-server
      http:
        paths:
        - backend:
            serviceName: echo-server
            servicePort: 8080
  4. Declare "echo-server" in your /etc/hosts file. The IP address should be one of your nodes:
11.22.33.44  echo-server
  5. In your browser, open a web developer console to see request and response headers.

  6. Access the URL http://echo-server:30080 and check the headers:

  • There is no "Set-Cookie" header in the response
  • There is no "Cookie" header in the request
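The same check can be scripted instead of using the browser console; a minimal sketch, where a captured response stands in for a live `curl -sI http://echo-server:30080`:

```shell
# Captured headers standing in for: curl -sI http://echo-server:30080
headers='HTTP/1.1 200 OK
Server: nginx/1.13.8
Content-Type: text/plain'

# Session affinity is active only if a "route" cookie comes back.
if ! printf '%s\n' "$headers" | grep -qi '^set-cookie: route='; then
  echo "no route cookie: session affinity is not active"
fi
```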

Anything else we need to know:
You can retrieve the generated Nginx configuration like this:

  1. Find your IngressController pod:
$ kubectl get po -n ingress-nginx
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-66b447d9cf-tswgb      1/1       Running   0          30m
nginx-ingress-controller-8fcd569fc-r5sk4   1/1       Running   0          30m
  2. Dump the nginx configuration on your computer:
$ kubectl exec nginx-ingress-controller-8fcd569fc-r5sk4 -n ingress-nginx -- cat /etc/nginx/nginx.conf > /tmp/nginx.conf
  3. In /tmp/nginx.conf you can see the following upstream servers:
    upstream sticky-default-echo-server-8080 {
        sticky hash=sha1 name=route  httponly;

        keepalive 32;

        server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
        server 10.42.31.243:8080 max_fails=0 fail_timeout=0;

    }

    upstream default-echo-server-8080 {

        # Load balance algorithm; empty for round robin, which is the default
        least_conn;

        keepalive 32;

        server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
        server 10.42.31.243:8080 max_fails=0 fail_timeout=0;

    }

But no configuration uses the "sticky-default-echo-server-8080" in the server section:

    ## start server echo-server
    server {
        server_name echo-server ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {
            port_in_redirect off;

            set $proxy_upstream_name "default-echo-server-8080";

            set $namespace      "default";
            set $ingress_name   "echo-server";
            set $service_name   "";

            client_max_body_size                    "1m";

            proxy_set_header Host                   $host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-echo-server-8080;

            proxy_redirect                          off;

        }

    }
    ## end server echo-server

I have never used Nginx, but I think the issue is here.
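One way to confirm the mismatch mechanically is to list upstream blocks that no proxy_pass directive ever references; a minimal sketch, using a trimmed sample in place of the full dumped /etc/nginx/nginx.conf:

```shell
# Trimmed sample standing in for the dumped /etc/nginx/nginx.conf.
cat > /tmp/nginx-sample.conf <<'EOF'
upstream sticky-default-echo-server-8080 {
    sticky hash=sha1 name=route  httponly;
    server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
}
upstream default-echo-server-8080 {
    least_conn;
    server 10.42.210.114:8080 max_fails=0 fail_timeout=0;
}
server {
    location / {
        proxy_pass http://default-echo-server-8080;
    }
}
EOF

# List upstream blocks that no proxy_pass directive ever references.
for up in $(grep -oE 'upstream [^ ]+' /tmp/nginx-sample.conf | awk '{print $2}'); do
  grep -q "proxy_pass http://$up;" /tmp/nginx-sample.conf || echo "unused upstream: $up"
done
```

On the broken configuration above, this flags `sticky-default-echo-server-8080` as defined but never used.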

@lorenz

lorenz commented Jan 29, 2018

Hitting the same issue; it seems to be related to TLS. My two TLS ingresses work with sticky sessions, but a new one without TLS doesn't.

@lorenz

lorenz commented Jan 29, 2018

Never mind, it works on 0.10.2; just be sure to actually have more than one endpoint for testing (it doesn't send out cookies if you don't).

@sylmarch
Author

It does not work with 0.10.2 either.
Note that my example does not use TLS.
Besides, 2 endpoints are set (see "replicas: 2" in the Deployment named "echo-server").

Verification: the IngressController has been updated to v0.10.2:

# IngressController is updated to v0.10.2:
$ kubectl describe po/nginx-ingress-controller-58b498d76c-zxzfd -n ingress-nginx
...
Containers:
  nginx-ingress-controller:
    Container ID:  docker://bb661a20bb275c1649953d135aa0ffe9c0b5c1846039a5f0bc28dc0b8a865633
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
...

Verification: 2 endpoints are set for the service:

$ kubectl describe svc/echo-server | grep Endpoints
Endpoints:         10.42.114.158:8080,10.42.37.143:8080

Verification: the Nginx configuration is still wrong. The upstream configuration "sticky-default-echo-server-8080" is never referenced in the server "echo-server" block.

$ kubectl exec nginx-ingress-controller-58b498d76c-zxzfd -n ingress-nginx -- cat /etc/nginx/nginx.conf 
    ...
    
    upstream sticky-default-echo-server-8080 {
        sticky hash=sha1 name=route  httponly;

        keepalive 32;

        server 10.42.37.143:8080 max_fails=0 fail_timeout=0;
        server 10.42.114.158:8080 max_fails=0 fail_timeout=0;

    }

    upstream default-echo-server-8080 {

        # Load balance algorithm; empty for round robin, which is the default
        least_conn;

        keepalive 32;

        server 10.42.37.143:8080 max_fails=0 fail_timeout=0;
        server 10.42.114.158:8080 max_fails=0 fail_timeout=0;

    }
    
    ...
    
    
    ## start server echo-server
    server {
        server_name echo-server ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {
            port_in_redirect off;

            set $proxy_upstream_name "default-echo-server-8080";

            set $namespace      "default";
            set $ingress_name   "echo-server";
            set $service_name   "";

            client_max_body_size                    "1m";

            proxy_set_header Host                   $host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-echo-server-8080;

            proxy_redirect                          off;

        }

    }
    ## end server echo-server

@jfpucheu
Copy link

Hello,

It works for me as of 0.10.2.

Jeff

@sylmarch
Author

@jfpucheu: Jeff, does it work when you apply my sample configuration (just changing image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.0 to image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2), or did you change something?

If it works, what is your environment (cloud provider vs. hardware configuration? OS? install tools?) and Kubernetes version?

@icereval

I too am consistently running into this issue. In my scenario it appears the nginx.conf file is built properly w/ the sticky-... upstream, and the pod's nginx console log outputs the $proxy_upstream_name correctly (e.g. sticky-...). (details below)

Though all configurations seem correct, a cookie is no longer returned. To ensure the cookie was not simply being dropped by an intermediary, I've evaluated a curl response from the pod itself, and similarly no cookie is returned.

At this point I'm running out of ideas: maybe a load-order problem, or even a compilation/build-specific issue with the plugin? Thoughts?

nginx configuration

http {
    ...

    upstream sticky-stage-1-s1cas-cas-8080 {
        sticky hash=md5 name=INGRESSCOOKIE  httponly;

        keepalive 32;

        server 10.44.69.2:80 max_fails=0 fail_timeout=0;

    }

    ...

    server {
        ...

        set $proxy_upstream_name "-";

        ...

        location / {
            port_in_redirect off;

            set $proxy_upstream_name "sticky-stage-1-s1cas-cas-8080";

            set $namespace      "stage-1";
            set $ingress_name   "s1cas-cas";
            set $service_name   "s1cas-cas";

            ...

console logs

$ kubectl --namespace stage-1 logs -f --tail 100 s1cashttps-nginx-ingress-controller-759b6c89cc-glcjm

73.xxx.xxx.xxx - [73.xxx.xxx.xxx] - - [31/Jan/2018:23:11:59 +0000] "GET /login HTTP/2.0" 200 3424 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36" 281 0.008 [sticky-stage-1-s1cas-cas-8080] 10.44.69.2:80 3424 0.008 200
73.xxx.xxx.xxx - [73.xxx.xxx.xxx] - - [31/Jan/2018:23:12:01 +0000] "GET /login HTTP/2.0" 200 3410 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36" 22 0.007 [sticky-stage-1-s1cas-cas-8080] 10.44.69.2:80 3410 0.007 200

curl from within the pod

$ kubectl --namespace stage-1 exec -it s1cashttps-nginx-ingress-controller-759b6c89cc-glcjm bash
$ curl -o /dev/null --http1.1 --resolve accounts.domain.tld:443:127.0.0.1:443 https://accounts.domain.tld/login -v

* Server certificate:
...
*  issuer: C=GB; ST=Greater Manchester; L=Salford; O=COMODO CA Limited; CN=COMODO RSA Domain Validation Secure Server CA
*  SSL certificate verify ok.
} [5 bytes data]
> GET /login HTTP/1.1
> Host: accounts.domain.tld
> User-Agent: curl/7.52.1
> Accept: */*
>
{ [5 bytes data]
< HTTP/1.1 200 OK
< Date: Wed, 31 Jan 2018 23:40:07 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 8023
< Connection: keep-alive
< Vary: Accept-Encoding
< Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Cache-Control: no-cache
< Cache-Control: no-store
< Set-Cookie: JSESSIONID=mpw7r01ef0y4i03jzon87mt;Path=/;Secure;HttpOnly
< Vary: Accept-Encoding
< Strict-Transport-Security: max-age=15724800;
<
{ [3684 bytes data]
* Curl_http_done: called premature == 0
100  8023  100  8023    0     0   144k      0 --:--:-- --:--:-- --:--:--  145k
* Connection #0 to host accounts.domain.tld left intact

@lorenz

lorenz commented Feb 1, 2018

@icereval The cookie is only being sent if you have more than 1 endpoint. You only have one.
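This condition can be checked from the `kubectl describe svc` output shown earlier in the thread; a sketch, with a sample Endpoints line standing in for the live command:

```shell
# Sample line standing in for: kubectl describe svc/<name> | grep Endpoints
line='Endpoints:         10.44.69.2:80'

# Count comma-separated endpoint addresses.
count=$(echo "$line" | awk '{print $2}' | tr ',' '\n' | wc -l | tr -d ' ')
if [ "$count" -lt 2 ]; then
  echo "only $count endpoint: no sticky cookie will be emitted"
fi
```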

@icereval

icereval commented Feb 1, 2018

@lorenz, that makes sense! I've retested w/ the replicas back at their normal size, and it's working as expected now.

I had initially turned the replicas down to simplify troubleshooting; apparently too far, oops...

Thanks for the quick response, and I can confirm 0.10.2 is working w/ my simple test setup.

$ curl -o /dev/null -s --location -D - https://account.domain.tls/login

HTTP/2 200
server: nginx
date: Thu, 01 Feb 2018 02:32:55 GMT
content-type: text/html;charset=utf-8
content-length: 8043
vary: Accept-Encoding
set-cookie: INGRESSCOOKIE=26b7af4429c0e7f7b19058dfb72886d0; Path=/; HttpOnly
pragma: no-cache
expires: Thu, 01 Jan 1970 00:00:00 GMT
cache-control: no-cache
cache-control: no-store
set-cookie: JSESSIONID=1584kdt4d4867sea3d6d7uhnl;Path=/;Secure;HttpOnly
vary: Accept-Encoding
strict-transport-security: max-age=15724800;

@sylmarch
Author

sylmarch commented Feb 1, 2018

Finally, I found out why my sample configuration does not work!

I confirm there is a bug in the Nginx IngressController.

The issue is related to the definition of the Ingress and the absence of the "path:" directive.

Here are 2 Ingresses that illustrate the different behaviors:

ingress-without-path.yml (sticky session does not work):

$ cat ingress-without-path.yml 
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"

spec:
  rules:
    - host: echo-server
      http:
        paths:
        - backend:
            serviceName: echo-server
            servicePort: 8080

ingress-with-path.yml (sticky session works):

$ cat ingress-with-path.yml
#######################################################################################################################
# Ingress to access echo service.
#######################################################################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-server
  annotations:
    # define the class so that this Ingress is only processed by the IngressController named "nginx-ingress-controller".
    kubernetes.io/ingress.class: "nginx"
    # define sticky session annotations as described here:
    # https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/affinity/cookie
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"

spec:
  rules:
    - host: echo-server
      http:
        paths:
        - path: / 
          backend:
            serviceName: echo-server
            servicePort: 8080

As you can see, the only difference is the presence of the directive 'path: /' in the second one:

$ diff ingress-without-path.yml ingress-with-path.yml 
22c22,23
<         - backend:
---
>         - path: / 
>           backend:

First, let's use the Ingress without the "path" directive:

$ kubectl apply -f ingress-without-path.yml 
ingress "echo-server" created

As we can see, there is no "Set-Cookie" header in the response:

$ curl -I -XGET http://echo-server:30080
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Thu, 01 Feb 2018 15:10:38 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

Now, let's use the Ingress with the "path" directive:

$ kubectl delete ing echo-server
ingress "echo-server" deleted
$ kubectl apply -f ingress-with-path.yml
ingress "echo-server" created

The "Set-Cookie" header is correctly set!

$ curl -I -XGET http://echo-server:30080
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Thu, 01 Feb 2018 15:10:53 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: route=7df3b7d7fb18d7cb908aad7837dbbfcb600cb7d7; Path=/; HttpOnly

Referring to the official Kubernetes documentation (https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting), it should be possible to define an Ingress without the "path" directive:

The following Ingress tells the backing loadbalancer to route requests based on the Host header.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80

That's why my Ingress ingress-without-path.yml is valid.

And so, the bug is located in the "ingress-nginx" project.
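Until a fix lands, explicitly adding `path: /` is the workaround. A crude pre-apply lint can catch the omission; a minimal sketch (the heuristic: when `path:` is present the backend key sits under `- path:`, so a literal `- backend:` list item only appears when `path` is absent):

```shell
# Sample Ingress spec standing in for the real manifest under review.
cat > /tmp/ingress-sample.yml <<'EOF'
spec:
  rules:
    - host: echo-server
      http:
        paths:
        - backend:
            serviceName: echo-server
            servicePort: 8080
EOF

# Warn on any paths entry that lacks an explicit path.
if grep -qE '^[[:space:]]*- backend:' /tmp/ingress-sample.yml; then
  echo "warning: paths entry without an explicit path; add 'path: /' as a workaround"
fi
```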

@sylmarch sylmarch changed the title Session affinity using cookie does not work - Wrong generated Nginx configuration Session affinity using cookie does not work when "path" is not set in the Ingress - Wrong generated Nginx configuration Feb 1, 2018
@aledbf aledbf added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Feb 1, 2018
oilbeater added a commit to oilbeater/ingress-nginx that referenced this issue Mar 23, 2018
If the original ingress rule has no `path` field, the default value will be an empty string, which causes issues when rendering the template, as other places use `/` as the default value.
Set the default value of path to `/` when retrieving ingress rules from the api-server. This will fix kubernetes#1980
aledbf pushed a commit that referenced this issue Mar 23, 2018
If the original ingress rule has no `path` field, the default value will be an empty string, which causes issues when rendering the template, as other places use `/` as the default value.
Set the default value of path to `/` when retrieving ingress rules from the api-server. This will fix #1980
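The fix referenced above (#2244) normalizes an empty path when rules are read from the api-server. The same defaulting rule, sketched in shell for illustration (the function name is mine, not from the codebase):

```shell
# Default an empty ingress path to "/" (mirrors the behavior of the fix).
normalize_path() {
  printf '%s\n' "${1:-/}"
}

normalize_path ""        # -> /
normalize_path "/api"    # -> /api
```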
@kwuite

kwuite commented May 7, 2018

Thank you for finding this bug! I can finally run a multi-pod cluster with the Meteor Node framework without having problems with image uploads. Images were being uploaded across all nodes, but now with session affinity I have no problems.

christopherriley pushed a commit to sagansystems/ingress-nginx that referenced this issue Jun 25, 2018
* Correct typo (kubernetes#2238)

* correct spelling

* correct typo

* fix-link (kubernetes#2239)

* Add missing configuration in kubernetes#2235 (kubernetes#2236)

* to kubernetes (kubernetes#2240)

to kubernetes

* fix: cannot set $service_name if use rewrite (kubernetes#2220)

$path here is the regular-expression-formatted nginx location, not the original path in the ingress rules. Fix kubernetes#2131

* Revert "Get file max from fs/file-max. (kubernetes#2050)" (kubernetes#2241)

This reverts commit d8efd39.

* add http/2

* fix: empty ingress path (kubernetes#2244)

If the original ingress rule has no `path` field, the default value will be an empty string, which causes issues when rendering the template, as other places use `/` as the default value.
Set the default value of the path to `/` when retrieving ingress rules from the api-server. This will fix kubernetes#1980

* Fix grpc json tag name (kubernetes#2246)

* Add EWMA as configurable load balancing algorithm (kubernetes#2229)

* Update go dependencies (kubernetes#2234)

* Add deployment docs for AWS NLB (kubernetes#1785)

* Update annotations.md (kubernetes#2255)

a typo fix

* Update README.md (kubernetes#2267)

It should be "your Ingress targets" in line 7.

* Managing a whitelist for _/nginx_status (kubernetes#2187)

Signed-off-by: Sylvain Rabot <[email protected]>

* Revert deleted assignment in kubernetes#2146 (kubernetes#2270)

* Use SharedIndexInformers in place of Informers (kubernetes#2271)

* clean up tmpl (kubernetes#2263)

The generated nginx.conf is too messy; remove some sections that are only useful when dynamic configuration is enabled, and headers that are only useful for HTTPS.

* Disable opentracing for nginx internal urls (kubernetes#2272)

* Typo fixes in modsecurity.md (kubernetes#2274)

* Update modsecurity.md

Some typo fixes

* Update modsecurity.md

* Update go to 1.10.1 (kubernetes#2273)

* Update README.md (kubernetes#2276)

Small typo fix .

* Fix bug when auth req is enabled(external authentication) (kubernetes#2280)

* set proxy_upstream_name correctly when auth_req module is used

* log a more meaningful message when backend is not found

* Fix nlb instructions (kubernetes#2282)

* e2e tests for dynamic configuration and Lua features and a bug fix (kubernetes#2254)

* e2e tests for dynamic configuration and Lua features

* do not rely on force reload to dynamically configure when reload is needed

* fix misspelling

* skip dynamic configuration in the first template rendering

* dont error on first sync

* Fix flaky e2e tests by always waiting after redeploying the ingress controller (kubernetes#2283)

* Add NoAuthLocations and default it to "/.well-known/acme-challenge" (kubernetes#2243)

* Add NoAuthLocations and default it to "/.well-known/acme-challenge"

* Add e2e tests for no-auth-location

* Improve wording of no-auth-location tests

* Update controller.go (kubernetes#2285)

* Fix custom-error-pages image publication script (kubernetes#2289)

* Update nginx to 1.13.11 (kubernetes#2290)

* Fix HSTS without preload (kubernetes#2294)

* Disable dynamic configuration in s390x and ppc64le (kubernetes#2298)

* Improve indentation of generated nginx.conf (kubernetes#2296)

* Escape variables in add-base-url annotation

* Fix race condition when Ingress does not contains a secret (kubernetes#2300)

* include lua-resty-waf and its dependencies in the base Nginx image (kubernetes#2301)

* install lua-resty-waf

* bump version

* include Kubernetes header

* include the rest of lua-resty-waf dependencies (kubernetes#2303)

* Fix issues building nginx image in different platforms (kubernetes#2305)

* Disable lua waf where luajit is not available (kubernetes#2306)

* Add verification of lua load balancer to health check (kubernetes#2308)

* Configure upload limits for setup of lua load balancer (kubernetes#2309)

* lua-resty-waf controller (kubernetes#2304)

* annotation to ignore given list of WAF rulesets (kubernetes#2314)

* extra waf rules per ingress (kubernetes#2315)

* extra waf rules per ingress

* document annotation nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules

* regenerate internal/file/bindata.go

* run lua-resty-waf in different modes (kubernetes#2317)

* run lua-resty-waf in different modes

* update docs

* Add ingress-nginx survey (kubernetes#2319)

* Fix survey link (kubernetes#2321)

* Update nginx to 1.13.12 (kubernetes#2327)

* Update nginx image (kubernetes#2328)

* Update nginx image

* Update minikube start script

* fix nil pointer when ssl with ca.crt (kubernetes#2331)

* disable lua for arch s390x and ppc64le

LuaJIT is not available for s390x and ppc64le; disable the Lua part in nginx.tmpl on these platforms.

* Fix buildupstream name to work with dynamic session affinity

* fix make verify-all failures

* Add session affinity to custom load balancing

* Fix nginx template

* Fixed tests

* Sync secrets (SSL certificates) on events

Remove scheduled check for missing secrets.

* Include missing secrets in secretIngressMap

Update secretIngressMap independently from stored annotations, which may
miss some secret references.

* Add test for channel events with referenced secret

* Release nginx ingress controller 0.13.0

* Update owners

* Use same convention, curl + kubectl for GKE

* Correct some returned messages in server_tokens.go

should not exists->should not exist
should exists->should exist

* Typo fix in cli-arguments.md

it's endpoints->its endpoints

* Correct some info in flags.go

Correct some info in flags.go

* Add proxy-add-original-uri-header config flag

This makes it configurable whether a location adds an X-Original-Uri header to the backend request. The default is "true", the current behaviour.

* Check ingress rule contains HTTP paths

* Detect if header injected request_id before creating one

* fix: fill missing patch yaml config.

The patch-service yaml is missing the livenessProbe, readinessProbe and prometheus annotation parts.

* Add vts-sum-key config flag

* Introduce ConfigMap updating helpers into e2e/framework and retain default nginx-configuration state between tests

Group sublogic

* Update nginx image to fix modsecurity crs issues

* Move the resetting logic into framework

Stylistic fixes based on feedback

* Fix leaky test

* fix the default cookie name in doc

* DOCS: Add clarification regarding ssl passthrough

* Remove most of the time.Sleep from the e2e tests

* Accept ns/name Secret reference in annotations

* Document changes to annotations with Secret reference

* Improve speed of e2e tests

* include lua-resty-balancer in nginx image

* Silence unnecessary MissingAnnotations errors

* Ensure dep fix fsnotify

* Update nginx image

* fix flaky dynamic configuration test

* shave off some more seconds

* cleanup redundant code

* Update go dependencies

* Allow tls section without hosts in Ingress rule

* Add test for store helper ListIngresses

* Add tests for controller getEndpoints

* Add busted unit testing framework for lua code

* Add deployment instructions for Docker for Mac (Edge)

* Update nginx-opentracing to 0.3.0

This version includes a new `http.host` header to make searching by
vhost in Zipkin or Jaeger easier.

* Fix golint installation

* add balancer unit tests

* Endpoint Awareness: Read backends data from tmp file as well

Actually read from the file

Logs probably shouldn't assume knowledge of implementation detail

Typos

Added integration test, and dynamic update config refactor

Don't force the 8k default

Minimal test case to make the configuration/backends request body write to temp file

Leverage new safe config updating methods, and use 2 replicas instead of 4

Small refactor

Better integration test, addresses other feedback

Update bindata

* Update nginx image

* automate dev environment build

* Remove unnecessary externalTrafficPolicy on Docker for Mac service

* Apply gometalinter suggestions

* Move all documentation under docs/

* Move miscellaneous tidbits from README to miscellaneous.md and other files

* Fix some document titles

* Move deployment documentation under docs/deploy/

* Remove empty ingress-annotations document; fix up annotations.md's layout slightly

* Configure mkdocs with mkdocs-material and friends

* Move "Customizing NGINX" documentation under "NGINX Configuration"

* Regenerate cli-arguments.md from the actual usage of 0.13

* Remove default-ssl-certificate.md (the content is already in tls.md)

* Move documents related to third-party extensions under third-party-addons

* Add buffer configuration to external auth location config

* make code-generator

* Clean JSON before post request to update configuration

* Add scripts and tasks to publish docs to github pages

* Improve readme file

* Fix broken links in the docs

* Remove data races from tests

* Check ginkgo is installed before running e2e tests

* Update exposing-tcp-udp-services.md

A backtick is missing for syntax highlighting, which makes it look ugly on https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

* Update custom-errors.md

Fix grammatical errors

* Update README.md

Fix broken link to `CONTRIBUTING.md`. 

Also update other links to `CONTRIBUTING.md` for consistency.

* Add annotation to enable rewrite logs in a location

* upstream-hash-by annotation support for dynamic configuraton mode

* luacheck ignore subfolders too

* Release nginx ingress controller 0.14.0

* Use local image name for e2e tests

* Bump echoserver version used in e2e test (1.10)

* Refactor e2e framework for TLS tests

* Add tests for global TLS settings

* improve build-dev-env.sh script

* always use x-request-id

* Add basic security context to deployment YAMLs

* Update GitHub pull request template

* Improve documentation format

* Add google analytics [ci skip]

* Add gRPC annotation doc

* Adjust size of tables and only adjust the first column on mobile

* Assert or install go-bindata before incanting

* Add Getting the Code section to Quick Start

* TLS.md: Move the TLS secret misc bit to the TLS document

* TLS.md: Clarify how to set --default-ssl-certificate

* TLS.md: Remove the frankly useless curl output in the default certificate section

* TLS.md: Reformat and grammar check

* TLS.md: Remove useless manual TOC

* multiple-ingress.md: rework page for clarity and less repetition

* Add upgrade documentation

Closes kubernetes#2458

* Reformat log-format.md

* Add note about changing annotation prefixes

* Clean up annotations.md; extract default backend from miscellaneous

* Index all examples and fix their titles

* Example of using nginx-ingress with gRPC

* Exclude grpc-fortune-teller from go list

Deps are managed by bazel, so these will fail to
show up in the vendor tree, triggering a false-positive build failure.

* Fixed broken link in deploy README

* Change TrimLeft for TrimPrefix on the from-to-www redirect

* use roundrobin from lua-resty-balancer library and refactor balancer.lua

* upstream-hash-by should override load-balance annotation

* add resty cookie

* [ci skip] bump nginx baseimage version

* Add some clarification around multiple ingress controller behavior

* Update go version in fortune teller image

* Refactor update of status removing initial check for loadbalancer

* Add KubeCon Europe 2018 Video to documentation

Adds the "Make Ingress-Nginx Work for you, and the Community" video to the
documentation.

* force backend sync when worker starts

* Remove warning when secret is used only for authentication

* Fix and simplify local dev workflow and execution of e2e tests

* Release nginx ingress controller 0.15.0
@mtricolici

mtricolici commented Nov 1, 2018

I can reproduce this bug in quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
Also, if I define 'path', session affinity works fine :(
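Until the fix is in the release you are running, explicitly setting the path works around the bug. A minimal rule sketch (host and service names are hypothetical):

```yaml
rules:
- host: app.example.com
  http:
    paths:
    - path: /                 # set explicitly to avoid the empty-path bug
      backend:
        serviceName: webapp-svc
        servicePort: 80
```

This is equivalent to the path-less rule, since `/` matches every request for the host, but it produces the nginx configuration the affinity cookie logic expects.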

@gWOLF3

gWOLF3 commented Mar 7, 2019

@sylmarch why do you need to have the default backend deployments and service? It seems like a waste of resources.

@sylmarch
Author

sylmarch commented Mar 8, 2019

@gWOLF3 it might be useful if your Ingress handles multiple hosts and all requests for one specific host are routed to only one webapp. In that case the `path: /` instruction should be implicit.
