Description of issue / feature request
I have a single master and two workers clustered in AWS. From the UI I can see that the master knows there are two workers, but when I click the button to start swarming, the UI never moves on to the next page.
The UI does start making requests for the error and response-status data, but these all come back empty.
I even get this log:
[2018-04-21 15:45:18,969] ip-172-31-21-173/INFO/locust.runners: Sending hatch jobs to 2 ready clients
but nothing else.
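For what it's worth, the empty responses can be reproduced outside the browser. A minimal sketch, assuming the master's web UI is on the default port 8089 and serves aggregated stats as JSON at /stats/requests (MASTER_PRIVATE_IP is a placeholder):

```python
import requests  # third-party; pip install requests

# MASTER_PRIVATE_IP is a placeholder for the master's address.
resp = requests.get("http://MASTER_PRIVATE_IP:8089/stats/requests", timeout=5)
print(resp.status_code)
print(resp.json())  # the stats list stays empty when no worker data arrives
```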
Expected behavior
I expect locusts to hatch on the worker nodes and the UI to start displaying metrics.
Environment settings (for bug reports)
Steps to reproduce (for bug reports)
locust -f locust_static_rpc.py --master
locust -f locust_static_rpc.py --slave --master-host=AWS_PRIVATE_IP
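The contents of locust_static_rpc.py aren't shown above; any minimal locustfile should reproduce the setup. A hypothetical stand-in using the Locust API of that era (HttpLocust/TaskSet), with the target host and endpoint as placeholders:

```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        # Any simple request will do; "/" is a placeholder endpoint.
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    host = "http://TARGET_HOST"  # placeholder
    min_wait = 1000  # wait 1-5 seconds between tasks
    max_wait = 5000
```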
@cgoldberg thanks for encouraging me to double check! It turns out that on AWS, VMs in the same security group do not have access to each other by default. Fortunately, you can add an inbound rule with the security group itself as the source, and the instances can then reach each other. Hopefully this will be useful to anyone who hits this in the future. Thanks!
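A sketch of that rule, assuming the Locust 0.x master ports 5557-5558 and a placeholder security-group ID (shown with boto3; the same rule can be added in the EC2 console):

```python
import boto3  # pip install boto3

ec2 = boto2 = boto3.client("ec2")

# Allow worker -> master traffic on Locust's ZeroMQ ports (5557-5558 in
# Locust 0.x) from instances in the same security group. sg-xxxxxxxx is
# a placeholder for your group's ID.
ec2.authorize_security_group_ingress(
    GroupId="sg-xxxxxxxx",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5557,
            "ToPort": 5558,
            "UserIdGroupPairs": [{"GroupId": "sg-xxxxxxxx"}],
        }
    ],
)
```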