
Kubernetes Readiness probe fails if proxy is set #625

Closed
TomLarrow opened this issue Jun 19, 2018 · 4 comments · Fixed by #646

Comments

@TomLarrow
Contributor

TomLarrow commented Jun 19, 2018

Because of issue #438 we have to modify the elgalu/selenium image with this short Dockerfile:

FROM elgalu/selenium
ENV http_proxy=http://our.proxy.server.information https_proxy=http://our.proxy.server.information
ENV no_proxy="zalenium hub route, other internal addresses"

While this has worked for quite some time, it now seems to be causing the Kubernetes readiness probe to fail.

Here is the readiness probe that is being generated:

  readinessProbe:
    exec:
      command:
        - /bin/sh
        - '-c'
        - >-
          curl -s http://`getent hosts ${HOSTNAME} | awk '{ print $1
          }'`:40000/wd/hub/status | jq .value.ready | grep true
    failureThreshold: 60
    initialDelaySeconds: 5
    periodSeconds: 1
    successThreshold: 1
    timeoutSeconds: 1

This curl statement looks up the actual IP address of the container, so it ends up generating a request such as
http://10.1.15.254:40000/wd/hub/status

10.1.15.254 is an internal Kubernetes address, but because of the proxy settings we had to add to the image, the request is sent to the proxy server as if it were external traffic headed outside our corporate firewall. This causes every readiness probe to fail, so every node connects to the Zalenium hub and is then killed 10 minutes later when Kubernetes cannot successfully complete the readiness check.

Since each pod gets a different IP address each time it is run, there is no way to add every possible address to the no_proxy string, and no_proxy does not seem to accept wildcards for IP ranges.

If the readiness check used http://localhost:40000/wd/hub/status it would work, as localhost is in the no_proxy settings. It looks as if this could easily be changed by editing line 562 of /src/main/java/de/zalando/ep/zalenium/container/kubernetes/KubernetesContainerClient.java, but that line appears to have been changed to resolve issue #584, and I don't want to re-break that issue.
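
For illustration, the probe I am suggesting (localhost instead of the resolved pod IP, everything else unchanged) would look roughly like this:

  readinessProbe:
    exec:
      command:
        - /bin/sh
        - '-c'
        - >-
          curl -s http://localhost:40000/wd/hub/status | jq .value.ready |
          grep true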

Zalenium Image Version(s): 3.12.0d
Docker Version: 1.12.6
OS: RHEL 7
Docker Command to start Zalenium (Zalenium.yml):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  creationTimestamp: '2017-08-18T13:45:20Z'
  generation: 702
  labels:
    app: zalenium
    role: hub
  name: zalenium
  namespace: zalenium
  resourceVersion: '145397929'
  selfLink: /apis/apps.openshift.io/v1/namespaces/zalenium/deploymentconfigs/zalenium
  uid: 7a0602b6-841b-11e7-aa18-005056951563
spec:
  replicas: 1
  selector:
    app: zalenium
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: zalenium
        role: hub
    spec:
      containers:
        - args:
            - start
            - '--desiredContainers'
            - '2'
            - '--maxTestSessions'
            - '2'
            - '--maxDockerSeleniumContainers'
            - '46'
            - '--seleniumImageName'
            - '172.30.252.77:5000/openshift/zalenium-selenium:latest'
            - '--timeZone'
            - America/New_York
            - '--screenWidth'
            - '1200'
            - '--screenHeight'
            - '930'
            - '--videoRecordingEnabled'
            - 'false'
          env:
            - name: SEND_ANONYMOUS_USAGE_INFO
              value: 'false'
            - name: LOG_LEVEL
              value: DEBUG
            - name: https_proxy
              value: 'http://proxy_server_info_here:80'
            - name: http_proxy
              value: 'http://proxy_server_info_here:80'
            - name: no_proxy
              value: >-
                localhost,127.0.0.1,zalenium.zalenium.svc,.cluster.local
            - name: JAVA_OPTS
              value: '-Xms2048m -Xmx2048m'
          image: dosel/zalenium
          imagePullPolicy: Always
          name: zalenium
          ports:
            - containerPort: 4444
              protocol: TCP
          resources:
            requests:
              cpu: 300m
              memory: 2Gi
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /home/seluser/videos
              name: zalenium-videos
            - mountPath: /tmp/mounted
              name: zalenium-shared
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: zalenium
      serviceAccountName: zalenium
      terminationGracePeriodSeconds: 30
      volumes:
        - name: zalenium-shared
          nfs:
            path: /sdc_dkr_sharedspace_t_n/zalenium-shared
            server: nas_server
        - name: zalenium-videos
          nfs:
            path: /sdc_dkr_sharedspace_t_n/zalenium-videos
            server: nas_server
  test: false
  triggers:
    - type: ConfigChange

@mickfeech

mickfeech commented Jun 19, 2018

It would seem to me that the dynamic nodes that spin up should inherit the proxy settings from the hub's deployment, similar to what I said was needed in #438 and how https://github.com/zalando/zalenium/blob/6942f80e986e504b2cf97f7f86600d28f4aca3b4/docs/_posts/2000-01-05-docker.md reads. I'm not sure why the documentation says it can be done when it cannot.
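
For example, if the hub's deployment sets the proxy variables shown above, I would expect the node pods that Zalenium spins up to carry the same environment, roughly like this (a sketch of the expected behaviour, not what currently happens):

      env:
        - name: http_proxy
          value: 'http://proxy_server_info_here:80'
        - name: https_proxy
          value: 'http://proxy_server_info_here:80'
        - name: no_proxy
          value: 'localhost,127.0.0.1,zalenium.zalenium.svc,.cluster.local'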

@pearj
Collaborator

pearj commented Jun 22, 2018

It's an easy fix. Just need to put http_proxy="" in front of the curl command.
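
Something along these lines for the generated probe command (a sketch only; the actual change may differ):

  readinessProbe:
    exec:
      command:
        - /bin/sh
        - '-c'
        - >-
          http_proxy="" curl -s http://`getent hosts ${HOSTNAME} | awk '{ print $1
          }'`:40000/wd/hub/status | jq .value.ready | grep true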

@pearj
Collaborator

pearj commented Jun 22, 2018

@mickfeech it was only implemented for Docker mode and not Kubernetes, which is why it doesn't work. See 76c66dd
I'm still of the opinion that the correct (and more powerful) way to set a proxy is via the Selenium API, but I can see why people want this, as it makes it transparent to the person running the tests.
We should probably generalise this proxy support so that it works for both Docker and Kubernetes.

@pearj
Collaborator

pearj commented Jul 7, 2018

I had a spare 30 minutes, so I have implemented this fix.
