slave not connecting to master and no error reported #294

Closed
accessdev opened this issue Jun 17, 2015 · 17 comments

@accessdev

I'm trying to run a distributed load test.

On the master host:

locust --master

[2015-06-17 08:22:11,703] locust-master.ubisoft.org/INFO/locust.main: Starting web monitor at *:8089
[2015-06-17 08:22:11,819] locust-master.ubisoft.org/INFO/locust.main: Starting Locust 0.7.2

On the slave host:

locust --slave --master-host=10.30.96.32

[2015-06-17 08:21:09,829] locust-slave-1/INFO/locust.main: Starting Locust 0.7.2

I can reach http://10.30.96.32:8089 in my browser, but when I try to launch a test, I can't.

The following message is displayed on the master host:

[2015-06-17 08:24:57,868] locust-master.ubisoft.org/WARNING/locust.runners: You are running in distributed mode but have no slave servers connected. Please connect slaves prior to swarming.

I resolved the issue: it was a firewall problem. However, it would be nice to get a message on the slave host saying whether the connection to the master failed or succeeded.

@suikoy

suikoy commented Jan 9, 2018

Hi,
as of today I'm getting the same error from my Kubernetes cluster deployed on Google Cloud.

[2018-01-09 14:17:26,379] locust-master-deployment-262643481-9kfw8/INFO/locust.main: Starting web monitor at *:8089
[2018-01-09 14:17:26,381] locust-master-deployment-262643481-9kfw8/INFO/locust.main: Starting Locust 0.8.1
[2018-01-09 14:18:23,948] locust-master-deployment-262643481-9kfw8/WARNING/locust.runners: You are running in distributed mode but have no slave servers connected. Please connect slaves prior to swarming.

For two weeks I had a working Locust cluster using Kubernetes 1.8.1 without any problem.
Yesterday I deleted my Locust cluster and today I created a new one with Kubernetes 1.8.4 (1.8.1 is no longer available),
and now I always get this error.

My deployment is very simple:

  1. A config file for the Locust tasks
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-new-configmap 
data:
  basic.py: |
  
    from locust import HttpLocust, TaskSet, task

    def index(l):
        l.client.get("/")
        
    def contatti(l):
        l.client.get("/contatti")         
        
    def privacy(l):
        l.client.get("/privacy")  
                  
    class UserTasks(TaskSet):
        tasks = [index,contatti,privacy]
        
        
    class WebsiteUser(HttpLocust):
        """
        Locust user class that does requests to the locust web server running on localhost
        """
        host = "http://127.0.0.1:8089"
        min_wait = 2000
        max_wait = 5000
        task_set = UserTasks
  2. A deployment for the Locust master
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: locust-master-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: locust
        role: master
    spec:
    
      hostAliases:
      - ip: "123.123.123.123"
        hostnames:
        - "www.mysite.it" 
        
      volumes:
      - name: locust-volume
        configMap:
          name: locust-new-configmap
          
      containers:
        - name: locust
          image: mylocust-image/locust:v0.8.1
          env:
            - name: LOCUST_MODE
              value: master
            - name: LOCUST_LOCUSTFILE_PATH
              value: "/locust-tasks/basic.py"
            - name: LOCUST_TARGET_HOST
              value: "https://www.mysite.it"
          
          volumeMounts:
          - name: locust-volume
            mountPath: /locust-tasks
          
          ports:
            - containerPort: 8089
            - containerPort: 5557
            - containerPort: 5558
  3. A deployment for the Locust slaves
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: locust-slave-deployment
spec:
  replicas: 10
  template:
    metadata:
      labels:
        name: locust
        role: slave
    spec:
    
      hostAliases:
      - ip: "123.123.123.123"
        hostnames:
        - "www.mysite.it" 
         
      volumes:
      - name: locust-volume
        configMap:
          name: locust-new-configmap    
      
      containers:
        - name: locust
          image: mylocust-image/locust:v0.8.1
          env:
            - name: LOCUST_MODE
              value: slave
            - name: LOCUST_MASTER
              value: locust-master
            - name: LOCUST_LOCUSTFILE_PATH
              value: "/locust-tasks/basic.py"
            - name: LOCUST_TARGET_HOST
              value: "https://www.mysite.it"           
              
          volumeMounts:
          - name: locust-volume
            mountPath: /locust-tasks
  4. A load balancer
apiVersion: v1
kind: Service
metadata:
  name: locust-loadbalancer
  labels:
    name: locust
spec:
  type: LoadBalancer
  selector:
    name: locust
    role: master  
  ports:
    - port: 8089
      protocol: TCP
      name: master-web
    - port: 5557
      protocol: TCP
      name: master-port1
    - port: 5558
      protocol: TCP
      name: master-port2

I also tried with Kubernetes 1.7.11 but no success :-(
Any advice?
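
For readers hitting the same symptom: the slaves resolve the LOCUST_MASTER value through Kubernetes DNS, so it must match the name of a Service that exposes the master's slave-communication ports (5557 and 5558 in Locust 0.8.x). Below is a minimal sketch of such a Service, reusing the name: locust / role: master labels from the master deployment above and named after the LOCUST_MASTER value set in the slave deployment (the fix adopted later in this thread instead renames the Service to match the master deployment's name):

apiVersion: v1
kind: Service
metadata:
  name: locust-master          # must equal the slaves' LOCUST_MASTER value
  labels:
    name: locust
spec:
  selector:
    name: locust
    role: master               # same labels as the master deployment's pod template
  ports:
    - port: 5557
      protocol: TCP
      name: master-port1
    - port: 5558
      protocol: TCP
      name: master-port2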

@heyman
Member

heyman commented Jan 9, 2018

@suikoy This definitely sounds like an issue with the Kubernetes setup, and not with Locust.

How do you refer to the hostname/IP of the master node? Is it through the LOCUST_MASTER env variable set to locust-master? If so, I don't see any reference to locust-master in "2. A deployment for the Locust master". Perhaps it should match the metadata.name set to locust-master-deployment? (I don't have much experience with Kubernetes.)

@suikoy

suikoy commented Jan 9, 2018

Yes, I know it's a problem with Kubernetes, but I thought I'd ask here too in case there are useful suggestions. Anyway, the LOCUST_MASTER variable is set with this configuration:

       ....
       containers:
        - name: locust
          image: mylocust-image/locust:v0.8.1
          env:
            - name: LOCUST_MODE
              value: slave
            - name: LOCUST_MASTER
              value: locust-master

(from slave deployment)

I took the code from this project: https://github.com/peter-evans/locust-docker/blob/master/kubernetes/locust-slave.yaml and, as already mentioned, this way of passing variables to Locust was working before.

In your opinion, is the key-value pair

            - name: LOCUST_MASTER
              value: locust-master

not correct?

@heyman
Member

heyman commented Jan 9, 2018

As I said, I have limited experience with Kubernetes, but looking at the repo you linked (https://github.com/peter-evans/locust-docker/), the environment variable LOCUST_MASTER in the slave config matches the value of metadata.name in the master config. In your pasted configs they don't match (have you renamed metadata.name to locust-master-deployment?). Perhaps that is the problem?

@suikoy

suikoy commented Jan 10, 2018

Hi,
many thanks, your suggestion was correct. I'm answering here in case this discussion is useful to others.
To make it work with the newer version of Kubernetes, the metadata.name of the master deployment must also be used as the LOCUST_MASTER value in the slave deployment and as the name of the load balancer service.
So the code becomes, for example:

[master deployment]

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: locust-master-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: locust
        role: master
        ...

[slave deployment]

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: locust-slave-deployment
spec:
  replicas: 10
  template:
    metadata:
      labels:
        name: locust
        role: slave
    spec:         
      ....
      containers:
        - name: locust
          image: mylocust-image/locust:v0.8.1
          env:
            - name: LOCUST_MODE
              value: slave
            - name: LOCUST_MASTER
              value: locust-master-deployment               
            - name: LOCUST_LOCUSTFILE_PATH
              value: "/locust-tasks/basic.py"
            - name: LOCUST_TARGET_HOST
              value: "https://www.mysite.it"
            ...

[load balancer service]

apiVersion: v1
kind: Service
metadata:
  name: locust-master-deployment        
  labels:
    name: locust
  ...

@mmarquezv

I've set LOCUST_MASTER but it complains with the following:
locust: error: Unexpected value for LOCUST_MASTER: 'my-locust-master'. Expecting 'true', 'false', 'yes', 'no', '1' or '0'
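
For context: in newer Locust versions the LOCUST_MASTER environment variable maps to the boolean --master flag, which is exactly why a hostname value like 'my-locust-master' gets rejected. Below is a minimal sketch of a worker deployment excerpt that passes the master address as explicit CLI arguments instead of relying on wrapper env vars (the image tag, the Service name my-locust-master and the locustfile path are placeholders; flag names follow Locust 1.x, where --slave was renamed to --worker, so check locust --help for the image you actually run):

      containers:
        - name: locust-worker
          image: locustio/locust:1.0.1          # assumption: any 1.x tag
          command: ["locust"]
          args:
            - "--worker"                         # "--slave" on pre-1.0 images
            - "--master-host=my-locust-master"   # placeholder: the master Service name
            - "-f"
            - "/locust/locustfile.py"            # placeholder locustfile path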

@heyman
Member

heyman commented Apr 20, 2020

@mmarquezv Did you mean to set LOCUST_MASTER_HOST?

@mmarquezv

mmarquezv commented Apr 21, 2020

No, I've set LOCUST_MASTER. When I set LOCUST_MASTER_HOST, the slaves can't see the master host, so I thought the right variable was LOCUST_MASTER. Pretty confusing, even though the metadata.name of the master deployment is the same in the slave deployment and the load balancer service.

@kiranbhadale

@mmarquezv Did you get a solution to the issue below?

I've set LOCUST_MASTER but it complains with the following:
locust: error: Unexpected value for LOCUST_MASTER: 'my-locust-master'. Expecting 'true', 'false', 'yes', 'no', '1' or '0'

@mmarquezv

Still no luck configuring Locust in distributed mode. I've tried a lot of things but nothing seems to work.
I'm sharing my configurations with the community in the hope that someone has figured out how to configure Locust on a Kubernetes cluster.

locust-ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: services-ingress
  namespace: locust-load-tests
  annotations:
    kubernetes.io/ingress.class: "nginx"
    
spec:  
  rules:
    - host: <<<myowndomain.site.com>>>
      http:
        paths:
          - path: /
            backend:
              serviceName: master-bploadtest-service
              servicePort: 8089

locust-master-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: master-bploadtest-service
  namespace: locust-load-tests
  labels:
    app: master-bploadtest
spec:
  ports:
    - port: 8089
      targetPort: 8089
      protocol: TCP
      name: loc-master-ms
    - port: 5557
      targetPort: 5557
      protocol: TCP
      name: loc-master-p1
    - port: 5558
      targetPort: 5558
      protocol: TCP
      name: loc-master-p2
  selector:
    app: master-bploadtest

locust-master-controller.yaml

apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: master-bploadtest
  namespace: locust-load-tests
  labels:
    name: master-bploadtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master-bploadtest
  template:
    metadata:
      labels:
        app: master-bploadtest
    spec:
      containers:
        - name: master-bploadtest
          image: locustio/locust:latest
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_URL
              value: <<my-api-main-url.com>>
            - name: LOCUSTFILE_PATH
              value: /locust/locustfile.py
            - name: LOCUST_MASTER_PORT
              value: "5557"
          volumeMounts:
            - mountPath: /locust
              name: locust-scripts
          ports:
            - name: loc-master-ms
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5558
              protocol: TCP
      volumes:
        - name: locust-scripts
          configMap:
            name: scripts-configmap

locust-worker-controller.yaml

apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: slave-bploadtest
  namespace: locust-load-tests
  labels:
    name: slave-bploadtest
spec:
  replicas: 5
  selector:
    matchLabels:
      app: slave-bploadtest
  template:
    metadata:
      labels:
        app: slave-bploadtest
    spec:
      containers:
        - name: slave-bploadtest
          image: locustio/locust:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: LOCUST_MODE
              value: worker
            - name: LOCUST_MASTER_HOST
              value: master-bploadtest-service
            - name: TARGET_URL
              value: <<my-api-main-url.com>>
            - name: LOCUSTFILE_PATH
              value: /locust/locustfile.py
            - name: LOCUST_MASTER_PORT
              value: "5557"
          volumeMounts:
            - mountPath: /locust
              name: locust-scripts
      volumes:
        - name: locust-scripts
          configMap:
            name: scripts-configmap

scripts-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: scripts-configmap
  namespace: locust-load-tests
data:
  locustfile.py: |
    import uuid

    from datetime import datetime
    from locust import HttpLocust, TaskSet, task
    import os
    import time
    import logging
    import json
    from locust import HttpLocust, TaskSet, task
    .......
    << Omitted the rest of the script for brevity >>
    .......

Prior to this configuration I had problems with the master port and worker configuration, lots of CrashLoops and so on.

Still getting:
master-bploadtest-58467c9978-746z7/WARNING/locust.runners: You are running in distributed mode but have no slave servers connected. Please connect slaves prior to swarming.

Right now I'm wondering if there's a better alternative to Locust. I'm sad because I thought this was a good tool for my load tests.

Has anyone solved the "no slave servers connected" problem?

Hope my scripts help someone else figure out how to configure Locust inside a Kubernetes cluster.

@kiranbhadale Hope this answers your question.

@kiranbhadale

kiranbhadale commented Apr 28, 2020

@mmarquezv
I tried a few permutations of the parameters and finally one worked. I kept the metadata name the same for the master and its service. Below are my configs and they work like a charm. Since I have a few customizations in my implementation, I am not using the integrated Kubernetes/Locust parameters to run the locustfile; instead, I'm using a make command to run Locust. My master and service configs are in the same YAML file, whereas the slave is in a separate one. I hope this helps. For testing purposes I executed the files below on minikube, which shouldn't be a problem to replicate in the cloud, I suppose.

Apart from this, I created a fresh docker image, deleted the performance namespace and started from scratch to avoid any conflicts from my previous builds.

Master and Service:

apiVersion: v1
kind: Namespace
metadata:
  name: locust-perf
  labels:
    name: locust-perf

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    name: lm-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lm-pod
  template:
    metadata:
      labels:
        app: lm-pod
    spec:
      containers:
        - name: lm-pod
          image: perf_locust:v0.9.7
          imagePullPolicy: Never
          stdin: true
          tty: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash","-c"]
          args: [<make command to run my locust file>]
          env:
            - name: LOCUST_MODE
              value: master
            - name: TARGET_HOST
              value: ''
          ports:
            - name: loc-master-web
              containerPort: 8089
              protocol: TCP
            - name: loc-master-p1
              containerPort: 5557
              protocol: TCP
            - name: loc-master-p2
              containerPort: 5555
              protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    app: lm-pod
spec:
  ports:
    - port: 8089
      targetPort: loc-master-web
      protocol: TCP
      name: loc-master-web
    - port: 5557
      targetPort: loc-master-p1
      protocol: TCP
      name: loc-master-p1
    - port: 5555
      targetPort: loc-master-p2
      protocol: TCP
      name: loc-master-p2
  selector:
    app: lm-pod
  type: LoadBalancer

Slave YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lw-pod
  namespace: locust-perf
  labels:
    name: lw-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lw-pod
  template:
    metadata:
      labels:
        app: lw-pod
    spec:
      containers:
        - name: lw-pod
          image: perf_locust:v0.9.7
          imagePullPolicy: Never
          tty: true
          stdin: true
          securityContext:
            runAsUser: 0
          command: ["/bin/bash","-c"]
          args: [<make command to run my locust file>]
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
          env:
            - name: LOCUST_MODE
              value: slave
            - name: LOCUST_MASTER_HOST
              value: lm-pod
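
The part doing the heavy lifting here is the name matching: the Service in front of the master is called lm-pod, and the workers point LOCUST_MASTER_HOST at exactly that name, so it resolves through Kubernetes DNS. Reduced to the essentials (excerpted from the manifests above):

# Service fronting the master
metadata:
  name: lm-pod                 # the DNS name the workers will use

# Worker environment
env:
  - name: LOCUST_MASTER_HOST
    value: lm-pod              # must equal the Service name above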

@HeyHugo
Member

HeyHugo commented Apr 28, 2020

@mmarquezv Your YAML looks good, but I saw that the Dockerfile was updated recently, so the image with the latest tag doesn't use the env vars any more. Try with locustio/locust:0.14.6.

@heyman maybe the latest tag for the docker image should use the latest git tag instead of master?
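
In deployment terms, that suggestion amounts to pinning a release tag in both controllers so the image still reads the LOCUST_MODE / LOCUST_MASTER_HOST style env vars (a sketch reusing the worker container name from the configs above):

      containers:
        - name: slave-bploadtest
          image: locustio/locust:0.14.6    # pinned release tag instead of :latest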

@mmarquezv

@HeyHugo Nice suggestion! I'm going to test it. Thanks a lot for your response!

@heyman
Member

heyman commented Apr 28, 2020

@HeyHugo:

maybe the latest tag for the docker image should use the latest git tag instead of master?

Hm, it’s supposed to point at the latest release, and according to Docker Hub the latest tag is currently built from the 0.14.6 git tag (I’m currently on mobile and I can’t verify this until tomorrow)?

@mmarquezv

mmarquezv commented Apr 29, 2020

I changed to Docker image tag 0.14.6 and named the service and master metadata the same. Result = ERROR.
Changed to Docker image tag 0.14.5 and rechecked that the service and master have the same name in metadata. Result = ERROR.

E 2020-04-29T02:59:56.501543774Z [2020-04-29 02:59:56,501] masterloadtests-5f7dddc77-2ljk9/INFO/locust.main: Starting web monitor at http://*:8089
E 2020-04-29T02:59:56.501820060Z [2020-04-29 02:59:56,501] masterloadtests-5f7dddc77-2ljk9/INFO/locust.main: Starting Locust 0.14.5
E 2020-04-29T03:01:10.034781314Z [2020-04-29 03:01:10,033] masterloadtests-5f7dddc77-2ljk9/WARNING/locust.runners: You are running in distributed mode but have no slave servers connected. Please connect slaves prior to swarming.

I also completely deleted the Kubernetes cluster, and the same error occurs.

@HeyHugo
Member

HeyHugo commented Apr 29, 2020

@mmarquezv What do the worker pod logs say?

I see you have imagePullPolicy: Never. I'm not sure whether that means it won't even pull new tags, but the k8s docs suggest this might be the case:

imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.

Edit:
Ah, I didn't look at your deployments; disregard the comment about imagePullPolicy, yours were already IfNotPresent.

@mmarquezv

(quoting @kiranbhadale's Master and Service / Slave YAML configs from the comment above)

Thanks man! Your YAMLs made me realize I was using

  - name: LOCUST_MODE
    value: worker

instead of

  - name: LOCUST_MODE
    value: slave

I was finally able to run my distributed load test.
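
For anyone landing here later, the essential worker settings that ended up working for me look like this (a sketch: the pinned 0.14.x tag follows the suggestion above, and the Service name and locustfile path are the ones from my earlier manifests):

      containers:
        - name: slave-bploadtest
          image: locustio/locust:0.14.6            # a pinned 0.14.x tag, not :latest
          env:
            - name: LOCUST_MODE
              value: slave                         # 0.14.x expects "slave", not "worker"
            - name: LOCUST_MASTER_HOST
              value: master-bploadtest-service     # the master Service's metadata.name
            - name: LOCUSTFILE_PATH
              value: /locust/locustfile.py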
