
scale clients up and down during a run #1185

Closed
cjw296 opened this issue Dec 3, 2019 · 8 comments


cjw296 commented Dec 3, 2019

Is your feature request related to a problem?

During a run, I'd like to be able to increment or decrement the number of clients while consulting metrics from the system under test.

Describe the solution you'd like

Ideally, I'd like a box or menu item somewhere to let me set a new target number of workers and have locust scale to that.

Describe alternatives you've considered

#1001 would also be great, but I don't know what I want the steps to be before I start the master.

Currently I have to kill the master and workers and re-start them with new params whenever I want to change the number of workers.

delulu commented Dec 20, 2019

I think Locust already supports updating your test execution plan during a run, and you can do it through the web UI or a web API call like the one below:
curl -XPOST $LOCUST_URL/swarm -d "locust_count=100&hatch_rate=10"
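A minimal sketch of driving the same endpoint from Python with only the standard library. The base URL and parameter names are assumptions taken from the curl call above (pre-1.0 Locust; later releases renamed the fields to user_count/spawn_rate), and set_swarm is a hypothetical helper, not part of Locust:

```python
from urllib import parse, request

def swarm_payload(locust_count, hatch_rate):
    """Build the form body for POST /swarm (pre-1.0 field names)."""
    return parse.urlencode({
        "locust_count": locust_count,
        "hatch_rate": hatch_rate,
    }).encode()

def set_swarm(base_url, locust_count, hatch_rate):
    """Tell a running Locust master to scale to a new client count."""
    req = request.Request(
        f"{base_url}/swarm",
        data=swarm_payload(locust_count, hatch_rate),  # data= makes it a POST
    )
    with request.urlopen(req) as resp:
        return resp.read()

# Example (assumes the master's web UI is at the default port):
# set_swarm("http://localhost:8089", locust_count=100, hatch_rate=10)
```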


delulu commented Dec 20, 2019

Could you share more about your scenario for scaling down, as it's kind of symmetric with scaling up? If you only want to know how the server's performance changes under different user loads, I think scaling up should be enough.

cjw296 commented Dec 23, 2019

I must have missed that edit link, or has it been added recently?
My use case is scaling up to the point where I think I've just tipped over the limits of the server, then scaling back to just below that point. Often that turns out not to be far enough, so I scale down a bit more, and repeat until I've found the "safe high-load watermark".
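The repeat-until-found process described here is essentially a binary search over client counts. A sketch of that search, assuming overload is monotonic in the number of clients; `overloaded` is a hypothetical callback that would, in practice, set the swarm size via the web API, wait for things to settle, and check the metrics of the system under test:

```python
def find_watermark(overloaded, low, high):
    """Return the largest client count n in [low, high) for which
    overloaded(n) is False, assuming overloaded(low) is False and
    overloaded(high) is True (i.e. overload is monotonic in n)."""
    # Invariant: low is known-safe, high is known-overloaded.
    while high - low > 1:
        mid = (low + high) // 2
        if overloaded(mid):
            high = mid
        else:
            low = mid
    return low

# Example with a fake system that tips over above 730 clients:
# find_watermark(lambda n: n > 730, low=1, high=2000)  -> 730
```

Each probe is expensive (a real load step plus settling time), which is why a search like this beats restarting the master and workers for every candidate count.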

@max-rocket-internet

When #1168 is resolved you can just autoscale on k8s or some other autoscaling platform like an EC2 ASG. This solves the "tipped over the limits of the server" problem for slaves, but of course you can't autoscale the master, so the master must always have enough resources for the duration of the test.

@cjw296

cjw296 commented Jan 2, 2020

#1168 doesn't appear to address changing the number of users to simulate, still waiting to hear from @delulu when that was added... (by "the server", I'm referring to the server under test, to be clear...)

@max-rocket-internet

I'm referring to the server under test, to be clear

Oh sorry, I misunderstood.

@delulu

delulu commented Jan 3, 2020

#1168 doesn't appear to address changing the number of users to simulate, still waiting to hear from @delulu when that was added... (by "the server", I'm referring to the server under test, to be clear...)

This edit function has been there for about 9 years; here is the code change: 9c495c2.

@delulu

delulu commented Jan 3, 2020

Also, with the step load pattern enabled, you can scale up the clients in small steps, which should be enough to identify the sweet spot.
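For clarity, the step load pattern mentioned here grows the client count by a fixed increment on a fixed interval up to a cap. A toy sketch of that schedule (function and parameter names are illustrative, not Locust's API; Locust's own step-load mode is configured via its CLI/web options):

```python
def step_load(elapsed_s, step_clients, step_time_s, max_clients):
    """Target client count after elapsed_s seconds of a step-load run:
    start at step_clients and add step_clients more every step_time_s
    seconds, never exceeding max_clients."""
    steps = elapsed_s // step_time_s + 1  # step 1 starts immediately
    return min(steps * step_clients, max_clients)

# e.g. with 10 clients added every 30s, capped at 50:
# step_load(0, 10, 30, 50)   -> 10
# step_load(65, 10, 30, 50)  -> 30
# step_load(600, 10, 30, 50) -> 50
```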

I think this feature request can be closed for now.

@cjw296 cjw296 closed this as completed Jan 3, 2020