Can't start swarming when in master/slave mode on Ubuntu 14.04 #293

Closed

hartfordfive opened this issue Jun 11, 2015 · 3 comments
@hartfordfive

I've recently worked on implementing a Locust cluster on Ubuntu 14.04, and I get the following error whenever I attempt to start a new swarm:

[2015-06-11 00:30:40,622] ip-10-1-1-78/INFO/locust.runners: Sending hatch jobs to 1 ready clients
[2015-06-11 00:30:40,622] ip-10-1-1-78/ERROR/stderr: Traceback (most recent call last):
[2015-06-11 00:30:40,622] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 508, in handle_one_response
[2015-06-11 00:30:40,622] ip-10-1-1-78/ERROR/stderr: self.run_application()
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/gevent/pywsgi.py", line 494, in run_application
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: self.result = self.application(self.environ, self.start_response)
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1836, in __call__
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: return self.wsgi_app(environ, start_response)
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1820, in wsgi_app
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: response = self.make_response(self.handle_exception(e))
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1403, in handle_exception
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: reraise(exc_type, exc_value, tb)
[2015-06-11 00:30:40,623] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1817, in wsgi_app
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: response = self.full_dispatch_request()
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1477, in full_dispatch_request
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: rv = self.handle_user_exception(e)
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1381, in handle_user_exception
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: reraise(exc_type, exc_value, tb)
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: rv = self.dispatch_request()
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: return self.view_functions[rule.endpoint](**req.view_args)
[2015-06-11 00:30:40,624] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/locust/web.py", line 51, in swarm
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: runners.locust_runner.start_hatching(locust_count, hatch_rate)
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/locust/runners.py", line 297, in start_hatching
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: self.server.send(Message("hatch", data, None))
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: File "/usr/local/lib/python2.7/dist-packages/locust/rpc/zmqrpc.py", line 7, in send
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: self.sender.send(msg.serialize())
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: File "/usr/lib/python2.7/dist-packages/zmq/green/core.py", line 215, in send
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: self.wait_write()
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: File "/usr/lib/python2.7/dist-packages/zmq/green/core.py", line 124, in wait_write
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: assert self.__writable.ready(), "Only one greenlet can be waiting on this event"
[2015-06-11 00:30:40,625] ip-10-1-1-78/ERROR/stderr: AssertionError: Only one greenlet can be waiting on this event
[2015-06-11 00:30:40,627] ip-10-1-1-78/ERROR/stderr: {'CONTENT_LENGTH': '30',
'CONTENT_TYPE': 'application/x-www-form-urlencoded',
'GATEWAY_INTERFACE': 'CGI/1.1',
'HTTP_ACCEPT': '*/*',
'HTTP_ACCEPT_ENCODING': 'gzip, deflate',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8',
'HTTP_CONNECTION': 'keep-alive',
'HTTP_HOST': '54.173.130.189:8089',
'HTTP_ORIGIN': 'http://54.173.130.189:8089',
'HTTP_REFERER': 'http://54.173.130.189:8089/',
'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36',
'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest',
'PATH_INFO': '/swarm',
'QUERY_STRING': '',
'REMOTE_ADDR': '70.80.243.33',
'REMOTE_PORT': '63638',
'REQUEST_METHOD': 'POST',
'SCRIPT_NAME': '',
'SERVER_NAME': 'ip-10-1-1-78',
'SERVER_PORT': '8089',
'SERVER_PROTOCOL': 'HTTP/1.1',
'SERVER_SOFTWARE': 'gevent/1.0 Python/2.7',
'werkzeug.request': <Request 'http://54.173.130.189:8089/swarm' [POST]>,
'wsgi.errors': <locust.log.StdErrWrapper object at 0x7fc21f99f410>,
'wsgi.input': <gevent.pywsgi.Input object at 0x7fc21da3fb50>,
'wsgi.multiprocess': False,
'wsgi.multithread': False,
'wsgi.run_once': False,
'wsgi.url_scheme': 'http',
'wsgi.version': (1, 0)} failed with AssertionError

Versions of related software running:

locustio -> 0.7.2
pyzmq -> 14.0.1
Python -> 2.7.6

Any ideas what could be causing this error?

@hartfordfive
Copy link
Author

After some closer inspection, I've noticed that the initial problem is that the POST to /swarm never seems to return a response. If I click the "Start swarming" button again, that's when the above error appears in the locust-master log file, which makes sense, as there is already a greenlet in progress waiting on that event. Unfortunately, I'm still no closer to finding out what the problem is. Any ideas @heyman ?

@mpasternacki

I just bumped into this issue (both on Ubuntu 14.04 and 12.04) on AWS. I had only port 4444 (the master control port) open in the firewall (AWS security group) between the master and the slaves. When inspecting the output of netstat -alpn, I saw that the Locust master process listens on two ports: 4444 and 4445. Once I opened port 4445 in the firewall as well, it started to work.

Locust may have been waiting on communication on port 4445: packets to ports blocked by an AWS security group are silently dropped, so the slave never received a "connection refused" response from the master and simply hung.
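The diagnosis above can be reproduced from the command line. This is a sketch only: the security group ID and CIDR are placeholders you'd substitute with your own values, it assumes the AWS CLI is installed and configured, and it uses the non-default ports 4444/4445 from this particular setup.

```shell
# Confirm which ports the Locust master process is actually listening on.
# You should see two ports: the control port and control port + 1.
netstat -alpn | grep locust

# Open both ports between master and slaves in the security group.
# sg-xxxxxxxx and 10.1.1.0/24 are placeholders for your own group/subnet.
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 4444-4445 \
    --cidr 10.1.1.0/24
```

If only the first port is open, the slave's connection attempt on the second port is dropped rather than refused, which matches the silent hang described above.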

@hartfordfive
Author

I appreciate the info @mpasternacki. That actually solved the issue for me. I'm using the standard port 5557, and I had forgotten to open port 5558, which the documentation states is also required. Thanks again.
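For reference, a minimal master/slave launch with the default ports looks roughly like this. The locustfile path and master IP are placeholders; the `--master`, `--slave`, and `--master-host` flags are as documented for Locust 0.7.x, and both 5557 and 5558 must be reachable from the slaves.

```shell
# On the master node (web UI on 8089; slaves connect on 5557 and 5558):
locust -f locustfile.py --master

# On each slave node, pointing at the master's address:
locust -f locustfile.py --slave --master-host=10.1.1.78
```

If a slave can reach only one of the two ports, it shows up as "ready" but the swarm never starts, exactly as in the original report.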
