Is it possible to autoscale slaves? #1066
Please ask general questions in the Slack channel.
It's not a general question, it's a technical question: is it possible to autoscale slaves? Also, sending people to the Slack channel is really annoying. It's not searchable from GitHub, it's not indexed by search engines, it requires signing up, and all the messages and info about this open source project are then owned by a separate company.
This is a tracker for the development of Locust, not a user support forum. It exists solely to keep track of development issues that need to be fixed.
OK, makes sense I guess, I didn't realise that.
Keep in mind that the template for any issue created here contains (I removed the comment part):
"The best place for questions are either Slack or Stack Overflow. Even better is a PR to update documentation too :)"
Would you not consider using issue labels like many other repos? I understand you don't want to provide user support, and that's totally fine, but as you can see, most people view a project's GitHub issues as a go-to place for users as well as developers. With labels and filtering you can support both without too much headache 😅 Pushing people to Slack is easy, but it's so ephemeral, even for the people who do go to the effort of creating an account.
@max-rocket-internet Yes, it is definitely possible, but be aware of the non-cloud-native nature of Locust (#1136).
I've used Locust in cloud environments since 2011, and I don't think it's fair to call it "non-cloud-native" on the basis that you can't arbitrarily kill the master process. You could say it's not built for high availability, though.
EDIT: I previously said that Locust doesn't automatically re-distribute the load when new slave nodes connect, which is wrong. I completely forgot that we now do support re-distributing the load. My bad.
Awesome! How does this work exactly? What does the master do if it has 10 slaves running 10 clients and an 11th slave connects?
@max-rocket-internet It resets everything to 0 and distributes the new value to all slaves. At least it looks like this (see #1143): if you have a target of 10000 and 10 slaves, it will first run 10000/10 on every slave and then 10000/11, but some slaves go down to zero (at least they are reported as 0), which makes it kind of slow to come back to the previous value.
@max-rocket-internet The master will send out new hatch messages to the slave nodes, which will result in the existing 10 nodes each killing ~90 of their running locust users, and the new node spawning ~900 users. A bug was causing a temporary drop in the current RPS when this happened, as reported in #1143, but that has now been fixed in master.
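The rebalancing arithmetic described above can be sketched as follows. This is not Locust's actual implementation, just an illustration of how a fixed user target could be split evenly across a changing number of slaves (the `distribute` helper is hypothetical):

```python
# Hypothetical sketch: split a target user count evenly across slaves.
# When a new slave connects, the master recomputes each slave's share.
def distribute(total_users, num_slaves):
    """Give each slave an equal share; spread any remainder over the first few."""
    base, extra = divmod(total_users, num_slaves)
    return [base + (1 if i < extra else 0) for i in range(num_slaves)]

before = distribute(10000, 10)  # 10 slaves: 1000 users each
after = distribute(10000, 11)   # an 11th slave joins: ~909 users each

print(before[0])   # 1000
print(sum(after))  # 10000
```

Each existing slave drops from 1000 to ~909 users (killing ~90 each), while the new slave hatches its ~909 from scratch, matching the behavior described above.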
Amazing 🚀
For example, when running on Kubernetes this is very easy to set up by adding a HorizontalPodAutoscaler, but will Locust be OK with new slaves connecting to the master as the users are increased?
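For reference, a HorizontalPodAutoscaler like the one mentioned above might look like this. This is only a sketch: the Deployment name `locust-worker`, the replica bounds, and the CPU threshold are assumptions, not from this thread:

```yaml
# Hypothetical HPA targeting a Deployment of Locust slaves.
# All names and numbers here are illustrative assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: locust-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: locust-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

As each new worker pod starts and connects, the master re-distributes the target user count, as discussed above.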