Slave count doesn't get updated in the UI if no more slaves are alive #62
The slave count sounds like a bug and should be fixed. Thanks for reporting, and a fix would definitely be appreciated :). I guess some kind of warning in the web UI wouldn't be bad either, but please do two separate pull requests if you give them a shot.
I second the issue. Are the slaves actually there and the number is invalid, or is the slave count correct?
Ok, thanks for reporting! Hopefully I'll get time to go over some waiting pull requests and issues early next week.
It might not be entirely inaccurate. I am trying to spawn 20 slaves per
Ok, this bug did come up again, and I verified that it does report inaccurately at times. In this instance, for example, there were supposed to be 14 slaves but 12 were reported. To count the slave processes on each machine you can use the command below: `ps aux | grep py | grep -v grep | awk '{print $12}' | wc -l`
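For what it's worth, a slightly more targeted count is possible with `pgrep`; this is just a sketch and assumes the slave processes were started with the `--slave` flag on their command line:

```sh
# Count locust slave processes on this machine (matches against the full command line)
pgrep -fc 'locust.*--slave'
```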
@nem: Starting slaves works as it should for me. How exactly are you trying to spawn slaves?
@bogdangherca: BTW, that URL should be on the documentation site somewhere, or the text should be in the documentation for the latest version. I've noticed many documentation gems covering "gotchas" that live on GitHub and not on the doc site.
@nem: Indeed, starting the master last was your problem. You should start the master first in order for the slaves to connect to it. Anyway, glad it worked fine for you.
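For reference, a minimal sketch of that startup order, assuming the pre-1.0 locustio CLI (where the distributed-mode flags were `--master`, `--slave`, and `--master-host`) and a hypothetical `locustfile.py`:

```sh
# Start the master first; it serves the web UI (port 8089 by default)
locust -f locustfile.py --master &

# Then start each slave, pointing it at the master so it can register
locust -f locustfile.py --slave --master-host=127.0.0.1 &
```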
Never mind, looks like it was a Chrome/Gist problem; it worked fine in Safari. Here is the gist: https://gist.github.com/nmccready/5547455. Anyway, my issue is not being able to start a user count beyond the slave count; at least, the reported user count is never larger than the slave count. So the gist is there to determine whether something is wrong with my setup.
FYI, this has started working, i.e. the user count can now exceed the slave count.
[#62] Correctly update slave count when it drops below 1.
Hello guys, our Docker image is using the latest locustio package from pip. Thanks!
@vorozhko:
The problem is related to how the containers are stopped. For a locust slave to be properly terminated and the number correctly updated, it needs to send the `quit` message to the master before it exits.

In my case, the entrypoint for the container is a shell script which starts locust as a child process. That means the shell script is assigned PID 1 and the locust process a different PID. When the container is stopped, the termination signal goes to PID 1 (the shell script), so locust is never shut down cleanly and never sends the `quit` message.

The locust start-up I'm using is mostly inspired by https://github.com/peter-evans/locust-docker. With that setup, the easy fix was to prepend `exec` to the locust command in the entrypoint script, so that locust replaces the shell and receives the termination signal directly.
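For illustration, a minimal sketch of such an entrypoint, assuming a hypothetical `entrypoint.sh`, the pre-1.0 `--slave`/`--master-host` flags, and a made-up `LOCUST_MASTER_HOST` environment variable:

```sh
#!/bin/sh
# Hypothetical entrypoint.sh: exec replaces this shell with the locust process,
# so locust becomes PID 1, receives SIGTERM when the container is stopped,
# and gets a chance to notify the master that it is quitting.
exec locust -f /locustfile.py --slave --master-host="${LOCUST_MASTER_HOST:-master}"
```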
Another way to fix this would be to handle slave disconnection (socket closed or similar) in the locust code itself.
This is to ensure that, when scaling down a worker pod, the `quit` message is sent and the master is notified. Related: locustio/locust#62
Hi,
I was running some simple tests with locust (which is so cool btw) and I noticed that if you end up with no slaves connected, the UI does not reflect this change. The slave count in the UI sticks to 1 in this case.
Also, it would be nice to get a warning message in the web UI if you start swarming with no slaves connected. Currently, you get this warning only on the command line.
I could provide a quick fix for this if you'd like.
Thanks!