balance/recover the load distribution when new slave joins #970
Conversation
@Jonnymcc for awareness
Can you move those to a separate PR?
Looks good, I was thinking this would be a nice improvement to have.
locust/test/test_zmqrpc.py (outdated diff)

```diff
         self.assertEqual(msg.type, 'test')
         self.assertEqual(msg.data, 'message')

     def test_client_recv(self):
-        sleep(0.01)
+        sleep(0.1)
```
Was the sleep not long enough?
No, it was not long enough on my side; besides, there is no harm in setting a longer time here.
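For context, the flakiness here comes from asserting on a message that is delivered asynchronously, so a fixed sleep is a race against the socket. Lengthening the sleep works; a common alternative (not part of this PR) is to poll until the condition holds or a deadline passes. A minimal, self-contained sketch, where the `wait_until` helper name and the timeout values are purely illustrative:

```python
import time


def wait_until(predicate, timeout=1.0, interval=0.01):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()


# Usage in a test, instead of a fixed sleep before asserting:
#   wait_until(lambda: len(received_messages) > 0)
#   self.assertEqual(received_messages[0].type, 'test')
```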
Sure, here is the separate PR: #972
@cgoldberg please help review this PR and merge it into master; let me know if you have any concerns, thanks!
LGTM.. thanks
With the Locust master and slave agents running in Kubernetes, Kubernetes guarantees the availability of the agents.
But when a slave agent crashes and restarts, it comes back with a different client id and has no idea of the user load the master previously assigned to it, so the total number of running locusts ends up lower than expected.
So it seems better to rebalance the user load whenever a new client joins, keeping the total number of running locusts at the value specified in the swarm request.
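To make the idea concrete, here is a minimal sketch of the kind of even split the master could recompute whenever a slave joins or rejoins; the function name `split_load` and its signature are illustrative and not Locust's actual API:

```python
def split_load(target_user_count, hatch_rate, slave_ids):
    """Split the swarm's total target load evenly across the given slaves.

    Returns {slave_id: (num_users, hatch_rate_share)}, spreading any remainder
    over the first few slaves so the per-slave counts add up exactly to
    target_user_count.
    """
    slave_count = len(slave_ids)
    if slave_count == 0:
        return {}
    base, remainder = divmod(target_user_count, slave_count)
    rate_share = hatch_rate / slave_count
    return {
        slave_id: (base + (1 if index < remainder else 0), rate_share)
        for index, slave_id in enumerate(slave_ids)
    }


# Example: 100 users across 3 slaves -> 34/33/33, each hatching at 10/3 users per second.
print(split_load(100, 10.0, ["slave-a", "slave-b", "slave-c"]))
```

On each join the master would recompute this split and send every connected slave its new share, so a slave that crashed and came back with a fresh client id is simply re-included in the next distribution.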
This PR also fixes an issue I noticed when running in Python 3 with web mode; it turned out to be an inconsistency between recv_from_client and send_to_client.
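As background on that kind of inconsistency: with pyzmq under Python 3, the identity frame of a ROUTER socket is bytes, while application code usually handles client ids as str, so the receive and send paths must agree on where to decode and encode. The sketch below only illustrates that pattern under those assumptions; the class shape mimics the zmqrpc-style helpers but is not the actual Locust implementation, and the payload is passed through opaquely:

```python
import zmq


class Server(object):
    """Minimal ROUTER-socket server: bytes identities on the wire, str ids in app code."""

    def __init__(self, host="*", port=5557):
        context = zmq.Context()
        self.socket = context.socket(zmq.ROUTER)
        self.socket.bind("tcp://%s:%i" % (host, port))

    def recv_from_client(self):
        # ROUTER prepends the sender's identity frame as bytes (assuming a DEALER
        # client sending a single payload frame); decode it once here so callers
        # always see a str client id.
        addr, payload = self.socket.recv_multipart()
        return addr.decode("utf-8"), payload

    def send_to_client(self, client_id, payload):
        # Encode the str id back to bytes; under Python 3, passing a str frame to
        # send_multipart raises a TypeError.
        self.socket.send_multipart([client_id.encode("utf-8"), payload])
```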
Any thoughts or comments?