Locust slaves eat all available memory when working with a failing service #816
Comments
Please fill out the fields in the issue template.
I experienced the same issue. In my case it was worse, because my orchestrator (I run Locust in a cloud environment) kills instances when they hit RAM or disk limits. This bug makes it impossible to perform a long-running soak test.
Sorry, here it is:

Description of issue / feature request
LocustIO eats up all memory when it runs against a failing service.

Expected behavior
Locust should tolerate long runs against a failing service.

Actual behavior
LocustIO rapidly gobbles up all available memory when the target service returns non-200 responses.

Environment settings (for bug reports)
Steps to reproduce (for bug reports)
Alternative steps:
I'm not sure it will be helpful, but here is my locustfile.py. It won't run without my server, but I hope it helps to get the point across:
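(The locustfile itself did not survive in this thread. As a rough, hypothetical sketch of the pattern described above — each task appending the full response object to a per-instance list — something like the following would show steady memory growth; the endpoint path, wait times, and class names are assumptions, using the pre-1.0 Locust API.)

```python
from locust import HttpLocust, TaskSet, task

class PollingBehavior(TaskSet):
    def on_start(self):
        # One list per simulated user; it is never cleared, so every
        # response object (body, headers, exception info) stays referenced.
        self.results = []

    @task
    def poll_service(self):
        response = self.client.get("/status")
        self.results.append(response)

class PollingUser(HttpLocust):
    task_set = PollingBehavior
    min_wait = 500
    max_wait = 1500
```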
Forgot to mention you, Cory: @cgoldberg. Thank you in advance!
Hmm... I think the memory usage you are seeing is because you are appending the response from every request to your list. If you don't store all results in your list, does the problem go away? If it does, let's close this issue. If it doesn't, please try to reproduce this with a minimal locustfile that doesn't store responses in each instance.
@cgoldberg but before I make a request I check if the number of results is greater than 50 and, if so, then
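(The code span at the end of that comment was lost from the thread, so exactly what happens after the > 50 check is unclear. A hedged sketch of a bounded version of the pattern — trimming the list and keeping only the status code rather than the whole response object — might look like the following; the cap-and-trim behavior, endpoint, and class names are assumptions.)

```python
from locust import HttpLocust, TaskSet, task

class CappedBehavior(TaskSet):
    MAX_STORED_RESULTS = 50

    def on_start(self):
        self.results = []

    @task
    def poll_service(self):
        # Trim the list before issuing the next request so it never grows
        # past MAX_STORED_RESULTS entries.
        if len(self.results) > self.MAX_STORED_RESULTS:
            del self.results[:-self.MAX_STORED_RESULTS]
        response = self.client.get("/status")
        # Keep only the status code instead of the full response object, so
        # failed requests do not pin large response data in memory.
        self.results.append(response.status_code)

class CappedUser(HttpLocust):
    task_set = CappedBehavior
    min_wait = 500
    max_wait = 1500
```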
If you create a much simpler reproduction example, I can look some more... otherwise, I have no idea.
I'm using Locust in a master/slave scenario: one master and ten slaves running on three machines (one machine for the master and two machines with five slaves each). This setup produces a load of roughly 100 requests per second. When the tested service is turned off, the slave machines start to consume memory very rapidly. I suspect the issue is that failures are being logged into memory without ever being released.
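(For reference, assuming the pre-1.0 Locust versions current at the time of this issue, the distributed setup described above would typically be started with the --master and --slave flags; the master address below is a placeholder.)

```sh
# On the master machine:
locust -f locustfile.py --master

# On each slave machine (five processes per machine), pointing at the master:
locust -f locustfile.py --slave --master-host=<master-ip>
```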