slave not connecting to master and no error reported #294
Hi,
Until two weeks ago I was able to get a Locust cluster working without problems using Kubernetes 1.8.1. My deployment is very simple: …
I also tried with Kubernetes 1.7.11, but with no success :-(
@suikoy This definitely sounds like an issue with the Kubernetes setup, and not with Locust. How do you refer to the hostname/IP of the master node? Is it through the …
Yes, I know it's a problem with Kubernetes, but I thought I'd ask here too in case there are useful suggestions. Anyway, the variable …
(from the slave deployment). I took the code from this project: https://github.com/peter-evans/locust-docker/blob/master/kubernetes/locust-slave.yaml and, as already mentioned, this way of passing variables to the Locust code used to work. In your opinion, is the key-value …
not correct?
As I said, I have limited experience with Kubernetes, but looking at the repo you linked (https://github.com/peter-evans/locust-docker/), the environment variable …
Hi, [master deployment]
[slave deployment]
[load balancer service]
I've set LOCUS_MASTER, but it complains with the following: …
@mmarquezv Did you mean to set …
No, I've set LOCUST_MASTER. When I set LOCUST_MASTER_HOST, the slaves can't see the master host, so I thought it was LOCUS_MASTER. Pretty confusing, even though the metadata.name of the master deployment is the same in the slave deployment and the load balancer service.
@mmarquezv Did you get a solution to the issue below?
Still no luck configuring Locust in distributed mode. I've tried a lot of things, but nothing seems to work.

locust-ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: services-ingress
  namespace: locust-load-tests
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: <<<myowndomain.site.com>>>
    http:
      paths:
      - path: /
        backend:
          serviceName: master-bploadtest-service
          servicePort: 8089
```

locust-master-service.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: master-bploadtest-service
  namespace: locust-load-tests
  labels:
    app: master-bploadtest
spec:
  ports:
  - port: 8089
    targetPort: 8089
    protocol: TCP
    name: loc-master-ms
  - port: 5557
    targetPort: 5557
    protocol: TCP
    name: loc-master-p1
  - port: 5558
    targetPort: 5558
    protocol: TCP
    name: loc-master-p2
  selector:
    app: master-bploadtest
```

locust-master-controller.yaml:

```yaml
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: master-bploadtest
  namespace: locust-load-tests
  labels:
    name: master-bploadtest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master-bploadtest
  template:
    metadata:
      labels:
        app: master-bploadtest
    spec:
      containers:
      - name: master-bploadtest
        image: locustio/locust:latest
        env:
        - name: LOCUST_MODE
          value: master
        - name: TARGET_URL
          value: <<my-api-main-url.com>>
        - name: LOCUSTFILE_PATH
          value: /locust/locustfile.py
        - name: LOCUST_MASTER_PORT
          value: "5557"
        volumeMounts:
        - mountPath: /locust
          name: locust-scripts
        ports:
        - name: loc-master-ms
          containerPort: 8089
          protocol: TCP
        - name: loc-master-p1
          containerPort: 5557
          protocol: TCP
        - name: loc-master-p2
          containerPort: 5558
          protocol: TCP
      volumes:
      - name: locust-scripts
        configMap:
          name: scripts-configmap
```

locust-worker-controller.yaml:

```yaml
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: slave-bploadtest
  namespace: locust-load-tests
  labels:
    name: slave-bploadtest
spec:
  replicas: 5
  selector:
    matchLabels:
      app: slave-bploadtest
  template:
    metadata:
      labels:
        app: slave-bploadtest
    spec:
      containers:
      - name: slave-bploadtest
        image: locustio/locust:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: LOCUST_MODE
          value: worker
        - name: LOCUST_MASTER_HOST
          value: master-bploadtest-service
        - name: TARGET_URL
          value: <<my-api-main-url.com>>
        - name: LOCUSTFILE_PATH
          value: /locust/locustfile.py
        - name: LOCUST_MASTER_PORT
          value: "5557"
        volumeMounts:
        - mountPath: /locust
          name: locust-scripts
      volumes:
      - name: locust-scripts
        configMap:
          name: scripts-configmap
```

scripts-configmap.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scripts-configmap
  namespace: locust-load-tests
data:
  locustfile.py: |
    import uuid
    import os
    import time
    import logging
    import json
    from datetime import datetime
    from locust import HttpLocust, TaskSet, task
    .......
    << Omitted the rest of the script for brevity >>
    .......
```

Prior to this configuration I had problems with the master port and the worker configuration, lots of CrashLoops, and so on. Still getting: …

Right now I'm wondering whether there's a better alternative to Locust. I'm sad, because I thought this was a good tool for my load tests. Has anyone solved the "no slave servers connected" problem? I hope my scripts help someone else figure out how to configure Locust inside a Kubernetes cluster. @kiranbhadale Hope this solves your question.
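Since setups like the ones in this thread hinge on the env block being correct, a quick sanity check run inside a pod (e.g. via `kubectl exec`) can save a redeploy cycle. This is only an illustrative sketch: the variable names below are the ones from the manifests above, but which names the container entrypoint actually consumes depends on the image version, so treat the list as an assumption to adjust.

```python
import os

# Variable names taken from the worker manifest above; adjust for your image,
# since which names the entrypoint reads varies between Locust releases.
REQUIRED_WORKER_VARS = [
    "LOCUST_MODE",
    "LOCUST_MASTER_HOST",
    "LOCUST_MASTER_PORT",
    "LOCUSTFILE_PATH",
]

def missing_vars(env=None, required=REQUIRED_WORKER_VARS):
    """Return the required variable names that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("missing env vars:", ", ".join(missing))
    else:
        print("all expected env vars are set")
```

Running this in both the master and worker pods makes it easy to spot a misspelled variable name (LOCUS_MASTER vs. LOCUST_MASTER_HOST) before chasing network problems.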
@mmarquezv Apart from this, I created a fresh Docker image, deleted the performance namespace, and started from scratch to avoid any conflicts from my previous builds.

Master and service:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: locust-perf
  labels:
    name: locust-perf
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    name: lm-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lm-pod
  template:
    metadata:
      labels:
        app: lm-pod
    spec:
      containers:
      - name: lm-pod
        image: perf_locust:v0.9.7
        imagePullPolicy: Never
        stdin: true
        tty: true
        securityContext:
          runAsUser: 0
        command: ["/bin/bash", "-c"]
        args: [<make command to run my locust file>]
        env:
        - name: LOCUST_MODE
          value: master
        - name: TARGET_HOST
          value: ''
        ports:
        - name: loc-master-web
          containerPort: 8089
          protocol: TCP
        - name: loc-master-p1
          containerPort: 5557
          protocol: TCP
        - name: loc-master-p2
          containerPort: 5555
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: lm-pod
  namespace: locust-perf
  labels:
    app: lm-pod
spec:
  ports:
  - port: 8089
    targetPort: loc-master-web
    protocol: TCP
    name: loc-master-web
  - port: 5557
    targetPort: loc-master-p1
    protocol: TCP
    name: loc-master-p1
  - port: 5555
    targetPort: loc-master-p2
    protocol: TCP
    name: loc-master-p2
  selector:
    app: lm-pod
  type: LoadBalancer
```

Slave YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lw-pod
  namespace: locust-perf
  labels:
    name: lw-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lw-pod
  template:
    metadata:
      labels:
        app: lw-pod
    spec:
      containers:
      - name: lw-pod
        image: perf_locust:v0.9.7
        imagePullPolicy: Never
        tty: true
        stdin: true
        securityContext:
          runAsUser: 0
        command: ["/bin/bash", "-c"]
        args: [<make command to run my locust file>]
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
        env:
        - name: LOCUST_MODE
          value: slave
        - name: LOCUST_MASTER_HOST
          value: lm-pod
```
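A setup like the one above relies on the worker resolving the master's Service name (`lm-pod` here) via cluster DNS. A quick probe run inside a worker pod can rule DNS out before digging into ports. This is a generic sketch, not part of the manifests above; the service and namespace names are the ones used in this comment.

```python
import socket

def resolve_service(name):
    """Return the sorted IP addresses `name` resolves to, or [] if it doesn't resolve."""
    try:
        infos = socket.getaddrinfo(name, None)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # "lm-pod" is the Service name from the manifest above; from a different
    # namespace the fully qualified form would be needed instead.
    for name in ("lm-pod", "lm-pod.locust-perf.svc.cluster.local"):
        print(name, "->", resolve_service(name) or "does not resolve")
```

If the short name does not resolve but the fully qualified one does, the worker and master are likely running in different namespaces.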
@mmarquezv Your YAML looks good, but I saw that the Dockerfile was updated recently, so the image with … @heyman, maybe the …
@HeyHugo Nice suggestion! I'm going to test it. Thanks a lot for your response!
Hm, it's supposed to point at the latest release, and according to Docker Hub the latest tag is currently built from the 0.14.6 git tag (I'm currently on mobile and can't verify this until tomorrow)?
I've changed to Docker image tag 0.14.6 and named the service and master metadata the same. Result = ERROR:

```
E 2020-04-29T02:59:56.501543774Z [2020-04-29 02:59:56,501] masterloadtests-5f7dddc77-2ljk9/INFO/locust.main: Starting web monitor at http://*:8089
```

I also completely deleted the Kubernetes cluster, and the same error occurs.
@mmarquezv What do the worker pod logs say? I see you have …
Edit: …
Thanks, man! Your YAMLs made me realize I was using
…
instead of
…
I was finally able to run my distributed load test.
I'm trying to run a distributed load test.

On the master host:

```
locust --master
[2015-06-17 08:22:11,703] locust-master.ubisoft.org/INFO/locust.main: Starting web monitor at *:8089
[2015-06-17 08:22:11,819] locust-master.ubisoft.org/INFO/locust.main: Starting Locust 0.7.2
```

On the slave host:

```
locust --slave --master-host=10.30.96.32
[2015-06-17 08:21:09,829] locust-slave-1/INFO/locust.main: Starting Locust 0.7.2
```

I can reach http://10.30.96.32:8089 in my browser, but when I try to launch a test, I can't. The following message is displayed on the master host:

```
[2015-06-17 08:24:57,868] locust-master.ubisoft.org/WARNING/locust.runners: You are running in distributed mode but have no slave servers connected. Please connect slaves prior to swarming.
```

I resolved the issue; it was a firewall problem. However, it would be nice to get a message on the slave host saying whether the connection to the master failed or succeeded.
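Since the root cause here was a firewall silently dropping the slave-to-master connection, and the slave printed nothing either way, a small TCP probe run from the slave host can confirm reachability before starting workers. This is a generic sketch: the IP is the master address from this report, and port 5557 is Locust's default master bind port (older versions also used the next port up), so adjust both for your own setup.

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 10.30.96.32 is the master IP from this report; 5557/5558 are the
    # default master ports used by old Locust versions. Adjust as needed.
    for port in (5557, 5558):
        status = "open" if can_reach("10.30.96.32", port, timeout=1.0) else "blocked/closed"
        print(f"10.30.96.32:{port} -> {status}")
```

If the probe reports the ports as blocked while the web UI on 8089 is reachable, a firewall rule on the master ports is the likely culprit, exactly as in this report.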