250 users on single machine fails #296

Closed · ericandrewlewis opened this issue Jun 22, 2015 · 10 comments

@ericandrewlewis (Contributor)

I can run 100 users on a single machine fine with a basic test (hit the index of a website, with an implicit 1s wait time between tasks).

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task(1)
    def profile(self):
        # Single task: GET the site index.
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    # min_wait/max_wait are left unset, so Locust's default 1s wait
    # between tasks applies (the "implicit 1s wait time" above).

When I get to ~250 users, the test jumps to ~35% failure rate, with this error:

ConnectionError(ProtocolError('Connection aborted.', gaierror(8, 'nodename nor servname provided, or not known')),)
@jaylett commented Jun 22, 2015

I get this, but only on OS X. On Linux (running in a VM on the same OS X host) I can scale fine to at least 500.

@heyman (Member) commented Jun 22, 2015

Sounds like it could be caused by the max open files limit being reached. Could you check it with ulimit -a?
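For reference, the same check can be made from inside Python with the stdlib resource module; a minimal sketch, not something from this thread:

import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# The soft limit is what the process is actually held to; the hard
# limit is the ceiling an unprivileged process may raise it to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d, hard=%d" % (soft, hard))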

@jaylett commented Jun 22, 2015

Bingo. Sorry, I should have considered that. Can requests be configured to use cached DNS lookups? Locust shouldn't really be testing the scalability of the local resolver…

@heyman (Member) commented Jun 22, 2015

@jaylett Not as far as I know. Actually, I've seen a few weird issues with gevent and DNS (though this was before gevent 1.0), so at some point I took to using the IP address directly and manually setting the Host header (if needed).
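For reference, that workaround might look roughly like the sketch below in a locustfile; the IP address and hostname here are placeholders, not values from this thread:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task(1)
    def profile(self):
        # Request by IP so no per-connection DNS lookup happens, but send
        # the real Host header so name-based virtual hosting still works.
        self.client.get("/", headers={"Host": "example.com"})

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    host = "http://203.0.113.10"  # placeholder: the target server's IP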

@jaylett commented Jun 22, 2015

@heyman ah, that's a neat trick. However, I'm testing at 1000 users in the swarm quite happily for now :-)

@cgoldberg (Member)

@jaylett
I'm surprised your OS isn't caching DNS requests locally.

So, ulimit gates the maximum number of file descriptors a process can have open, and each HTTP connection uses a socket, which consumes a file descriptor.

FWIW, Debian/Ubuntu defaults to a ulimit of 1024.
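The soft limit can also be raised from within the process itself, up to the hard limit, using the same resource module; a sketch, with 10000 as an arbitrary example target (shell-level ulimit configuration is the more usual route):

import resource

TARGET = 10000  # arbitrary example value, not a recommendation

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Raise the soft limit toward TARGET, never past the hard limit.
new_soft = TARGET if hard == resource.RLIM_INFINITY else min(hard, TARGET)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))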

@ericandrewlewis (Contributor, Author)

> Sounds like it could be caused by the max open files limit being reached. Could you check it with ulimit -a?

This was indeed the culprit. I followed OS-specific instructions for raising the limit and can now DDoS my server properly 😄

Should we mention this in the installation/configuration instructions? I'd be glad to contribute some documentation.

@heyman (Member) commented Jun 22, 2015

@ericandrewlewis Sure! That would be nice!

@ericandrewlewis (Contributor, Author)

How's this? #298

@ericandrewlewis (Contributor, Author)

Closing as this is a documentation issue and #298 has been merged 🚀
