In distributed mode, not all stats are collected/displayed in the 'main' UI #217
Comments
That's not expected behaviour. Do you get any exceptions in the output of any of the slaves, or of the master? What's the CPU load on the machines?
The load is under 1% for all Locust processes, and not much more for the entire system on each node. These are clean installs with Locust only. I don't see any errors, but interestingly enough I only seem to see logs for the 'second half' of the results on all of the slaves: [2014-12-11 20:47:46,357] locust02.host.net/INFO/locust.runners: Hatching and swarming 72 clients at the rate 7.14286 clients/s... The test scenario is the same as above: 1000 users, hatch rate 100. This has been the case for every test I've done in clusters over 8, and I've not done anything different than what's suggested in the docs.
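The per-slave numbers in that log line are consistent with the total being split evenly across 14 slaves. A quick sanity check of the arithmetic (the exact split logic is an assumption here, not taken from Locust's source):

```python
# Rough arithmetic behind the slave log line quoted above.
# Assumes the master divides users and hatch rate evenly across slaves;
# this split logic is an assumption, not verified against Locust internals.
total_users = 1000
hatch_rate = 100
slaves = 14

base = total_users // slaves          # 71 users on most slaves
extra = total_users % slaves          # 6 slaves would get one extra, i.e. 72
per_slave_rate = hatch_rate / slaves  # ~7.14286 clients/s, matching the log

print(base, base + extra // extra, round(per_slave_rate, 5))
```

If each slave hatches 71 or 72 users, then one slave's stats never reaching the master would leave roughly 71 users unaccounted for, which is consistent with the 929-of-1000 figure reported in this thread.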
These are the logs from the master: https://gist.github.com/the2hill/25c7d58c281d0e8625b3
What do you mean by "only seem to see the logs for the 'second half' of the results on all of the slaves"? Did you stop all processes (slaves as well as master) before trying to run the test? You've probably tried that; I just want to make sure it isn't an issue with some stray process still running.
Yeah, these are fresh environments prior to running; I use Ansible to spin things up/down. By 'second half' I mean the stats/data that are hidden when running the 'main' swarm, i.e. the 71 out of 1000 rather than the full 1000.
In other words, just these logs, on all slaves. I should see references to the other 900+ clients, or ideally just the 1000.
I'm moving on to another tool and will not be updating this. It would be nice to see if others could replicate it, so I won't close it, as I believe it is a big issue that should be resolved.
I have 14 slaves and 1 master. (I've seen this happen with 9 slaves as well; anything less has been fine.)
When I run tests with users: 1000 and hatch rate: 100, only 929 users are shown to have hatched. When I click stop, the UI shows the other 71 users still running.
Am I missing a piece of documentation where I need to aggregate stats from different runs/endpoints after a certain cluster size?