Allow --no-web together with --master for automation #333
Conversation
Hi @undera Will this functionality allow us to run a master and multiple slaves using the Taurus config?

Yep. It's not 100% ready, but enough to get the idea.

I like this functionality, though maybe I'd prefer a different name for the flag. Any thoughts on this?
From my experience, some people want exact control over the number of requests made. For example, they run a benchmark of 100,000 requests and compare the runs to each other. In my tools I usually implement the logic "stop at whichever limit is reached first, the request limit or the time limit".
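That "stop at whichever limit is reached first" logic can be sketched as follows; the function and its parameters are purely illustrative, not from Locust or this PR:

```python
import time

def should_stop(started_at, requests_done, time_limit, request_limit):
    # Stop at whichever limit is reached first: the elapsed-time
    # limit or the total-request limit.
    elapsed = time.time() - started_at
    return elapsed >= time_limit or requests_done >= request_limit
```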
I'm fine with the name.
@@ -115,6 +116,15 @@ def parse_options():
         help="Port that locust master should bind to. Only used when running with --master. Defaults to 5557. Note that Locust will also use this port + 1, so by default the master node will bind to 5557 and 5558."
     )

+    parser.add_option(
+        '--expect-slaves',
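The diff is truncated at this point. For readers following along, a plausible completion of the option definition, assuming optparse (which Locust used for its CLI at the time), might look like the sketch below; the type, default, and help text are assumptions, not the PR's actual code:

```python
# Hypothetical completion of the truncated diff above; exact
# type, default, and help text are assumptions.
parser.add_option(
    '--expect-slaves',
    action='store',
    type='int',
    dest='expect_slaves',
    default=1,
    help="How many slaves the master should wait for before starting "
         "the test. Only used when running with --master and --no-web."
)
```

With such an option, an automated headless run could then be started along the lines of `locust --master --no-web --expect-slaves=3 -c 100 -r 10`, where -c and -r are Locust's usual client-count and hatch-rate options; the exact invocation is illustrative.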
Personally I prefer --min-slaves, as it better communicates what the behavior is: with this implementation you can end up with more slaves than the expected number. See https://github.com/locustio/locust/pull/372/files#diff-6f782ed63a6a642694db58c0d5cdd932R125
Wouldn't the current implementation start almost immediately once the min slave count has been reached? If so, how would I go about starting a test with more slaves than min_slaves (other than just spawning them all at the same time and hoping that they would connect before the poll check)?
Yes, it would, but I think that's the behavior I would expect. I think most people using this feature would bring up the master process specifying --min-slaves 10, then bring up 10 slave processes to connect to the master. If I wanted 11 slaves, I would simply specify --min-slaves 11. However, if I thought I wanted 10 slaves but later realized I needed 11 to reach my desired throughput, I could add one more slave after the original 10 have already started swarming.
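A minimal sketch of the behavior being discussed, assuming a hypothetical slave_count attribute on the master runner (Locust's real master runner tracks connected slaves differently): the master polls until the expected number of slaves has connected, then starts at once.

```python
import gevent

def wait_for_slaves(master_runner, expect_slaves, poll_interval=1):
    # Block until at least `expect_slaves` slave nodes have connected,
    # polling every `poll_interval` seconds. Because the master starts
    # the test as soon as the threshold is reached, slaves that
    # connect later miss the initial distribution of simulated users.
    while master_runner.slave_count < expect_slaves:
        gevent.sleep(poll_interval)
```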
In that case I think I'd still prefer --num-slaves or --expected-slave-count. Though it might be bikeshedding, and I'm fine with --min-slaves as well :).
> but later realized I needed 11 to reach my desired throughput, I could add one more slave after the original 10 have already started swarming
That would require us to have some kind of automatic re-balancing of simulated users when new slaves connect.
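A sketch of what such re-balancing could look like, purely illustrative (nothing like this exists in the PR): recompute an even split of the simulated users whenever the set of connected slaves changes.

```python
def rebalance(total_users, slave_ids):
    # Evenly redistribute the simulated users across the currently
    # connected slaves; the remainder is spread over the first slaves.
    base, extra = divmod(total_users, len(slave_ids))
    return {slave_id: base + (1 if i < extra else 0)
            for i, slave_id in enumerate(slave_ids)}

# Example: 100 users across 3 slaves -> {'a': 34, 'b': 33, 'c': 33}
print(rebalance(100, ['a', 'b', 'c']))
```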
@heyman @justiniso Are there any updates on merging in this feature?

+1 :)
Summary:
Taurus mailing list discussion for reference: https://groups.google.com/forum/#!topic/codename-taurus/Gs3VdphRSjo