Start distributed test with multiple slaves with one command. #721

Closed

debugtalk opened this issue Jan 12, 2018 · 9 comments

@debugtalk

Description of feature request

Currently, when we need to run a distributed test, we have to start the Locust master and slaves one by one. If our load test machine has 32 cores, we need to run the start command 33 times! Also, whenever we adjust our Locust scripts, we have to kill all the Locust slaves and start them again.

Since this scenario is so common, we could add a parameter (such as --cpu-cores) to simplify the job.

Expected behavior

With this argument, we can start Locust with a master and a specified number of slaves (defaulting to the number of CPU cores) in one command.

$ locust -f locustfile.py --cpu-cores 4
[2017-08-26 23:51:47,071] bogon/INFO/locust.main: Starting web monitor at *:8089
[2017-08-26 23:51:47,075] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,078] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,080] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,083] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,084] bogon/INFO/locust.runners: Client 'bogon_656e0af8e968a8533d379dd252422ad3' reported as ready. Currently 1 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_09f73850252ee4ec739ed77d3c4c6dba' reported as ready. Currently 2 clients ready to swarm.
[2017-08-26 23:51:47,084] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_869f7ed671b1a9952b56610f01e2006f' reported as ready. Currently 3 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_80a804cda36b80fac17b57fd2d5e7cdb' reported as ready. Currently 4 clients ready to swarm.

Actual behavior

To achieve the same goal today, we have to start the Locust master first.

$ locust -f locustfile.py --master

Then, in another terminal, start the Locust slaves one by one.

$ locust -f locustfile.py --slave &
$ locust -f locustfile.py --slave &
$ locust -f locustfile.py --slave &
$ locust -f locustfile.py --slave &
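
These steps can be folded into a small wrapper script run as a single command. A minimal sketch, using only the Python standard library and the existing --master/--slave flags (the script below is hypothetical, not part of Locust):

# start_locust_cluster.py -- hypothetical wrapper, not part of Locust
import multiprocessing
import subprocess
import sys

def main(locustfile="locustfile.py", num_slaves=None):
    if num_slaves is None:
        # default: one slave per CPU core
        num_slaves = multiprocessing.cpu_count()
    # start the master, then the requested number of slaves
    procs = [subprocess.Popen(["locust", "-f", locustfile, "--master"])]
    for _ in range(num_slaves):
        procs.append(subprocess.Popen(["locust", "-f", locustfile, "--slave"]))
    try:
        procs[0].wait()  # block until the master exits (e.g. on Ctrl-C)
    except KeyboardInterrupt:
        pass
    finally:
        for p in procs:
            p.terminate()  # stop master and slaves together

if __name__ == "__main__":
    main(num_slaves=int(sys.argv[1]) if len(sys.argv) > 1 else None)

Running, for example, python start_locust_cluster.py 4 would start one master and four slaves, and a single Ctrl-C would stop everything.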

Environment settings (for bug reports)

N/A

Steps to reproduce (for bug reports)

N/A

@cgoldberg
Member

I'm -1 on this

@debugtalk
Author

@cgoldberg I have implemented this feature in HttpRunner, as a Locust wrapper.

http://docs.httprunner.top/en/latest/load-test.html

I think this feature would add to the convenience of Locust, and it would not affect any existing feature. Shall I open a PR for this?

@debugtalk
Author

OK, I will keep this feature in HttpRunner.

@SpencerPinegar

@cgoldberg or @heyman - I am relatively new to Python (2 years of experience) and I was wondering why a feature like this would be too hard to maintain or implement. I am sure your decision is based on good reasoning; can you help me understand?

@cgoldberg
Member

why a feature like this would be too hard to maintain or implement

While it might be hard, my reasoning was based on the fact that many great configuration management tools already exist... you should use one to provision and execute your Locust tests if you need a complex distributed setup.

@SpencerPinegar

Yeah, it would be cool to have an example of this.
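
A minimal sketch of the approach @cgoldberg describes, using Fabric 2 as one such remote-execution tool; the host names below are placeholders, not taken from this thread, and this is not an officially documented Locust workflow:

# provision_slaves.py -- hypothetical helper using Fabric 2 (pip install fabric)
from fabric import Connection

MASTER_HOST = "master-host"             # placeholder: machine running `locust --master`
SLAVE_HOSTS = ["slave-1", "slave-2"]    # placeholders: machines reachable over SSH

for host in SLAVE_HOSTS:
    # nohup + redirection so each slave keeps running after the SSH session ends
    Connection(host).run(
        "nohup locust -f locustfile.py --slave "
        "--master-host=%s > /dev/null 2>&1 &" % MASTER_HOST,
        pty=False,
    )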

@indrgun

indrgun commented Sep 9, 2019

OK, I will keep this feature in HttpRunner.

@debugtalk
I tried running your "locusts" command, but it simply exits without any output on the stdout console.

@emilorol

emilorol commented Sep 30, 2019

I had a similar need and was able to solve it locally with Minishift. Later I took it to OpenShift to get the most out of the hardware with a minimum of commands. The only downside is that autoscaling will reset your tests.

@MarcSteven

The issue is still fuzzy, and the command does not work for me, @debugtalk
