Locust does not stop all users #1947
Comments
Your test looks a little strange. Why are you calling randrange(1000) in on_stop? Also, TaskSets that don't call self.interrupt() will never exit, so you probably shouldn't use a TaskSet at all. I think the problem is not actually that the users don't stop, but that there is an issue with environment.runner.user_count - it sometimes returns the wrong value (particularly noticeable when the ramp-up finishes). Please use a more basic User (no TaskSet) and print something in on_stop, so we can see what the problem is.
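Something as minimal as this would do (a rough sketch; the class name and log messages are just illustrative):

```python
import logging

from locust import User, constant, task


class BasicUser(User):
    wait_time = constant(1)

    def on_start(self):
        logging.info("user started")

    @task
    def noop(self):
        # no real work needed, we only care about start/stop behaviour
        pass

    def on_stop(self):
        logging.info("user stopped")
```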
So I updated the on_start and on_stop methods to log when they're called, and in this case on_stop is not called for all users; additionally, after hitting the stop button I can see an increased number of requests in the web UI. This scenario is just a simplification of the scenario we are using for load testing, without any internal code that is not relevant. I don't see how not calling self.interrupt() is related to this problem - the TaskSet will run until I stop it.
Hmm... I don't know what could be causing this. Maybe the cause is somehow in your "internal" code? Sorry, I don't have much time to spend on this right now. If you can write a failing test case I will investigate further (I know that is asking a lot, but I'm doing this in my free time :)
The scenario attached to this bug causes the problem without any internal code - I'm not sure what else I could write. With a single user class defined, everything seems to be OK. When we add a second user class, stopping does not work correctly.
Sorry, I don't have time to investigate further. If you can express your problem in a unit test I will give it another look.
Also, I did try running your locust file, adding an
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 10 days. |
This issue was closed because it has been stalled for 10 days with no activity. This does not necessarily mean that the issue is bad, but it most likely means that nobody is willing to take the time to fix it. If you have found Locust useful, then consider contributing a fix yourself! |
Hi, sorry for the late response. I finally got some time to investigate this a little bit more, so I created a unit test to cover this case:
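Roughly along these lines (a sketch of the idea rather than the exact test - TestUser1/TestUser2, the user count, and the sleep times are illustrative):

```python
import gevent
from locust import User, constant, task
from locust.env import Environment


class TestUser1(User):
    wait_time = constant(1)

    @task
    def noop(self):
        pass


class TestUser2(User):
    wait_time = constant(1)

    @task
    def noop(self):
        pass


def test_stop_users():
    env = Environment(user_classes=[TestUser1, TestUser2])
    runner = env.create_local_runner()

    runner.start(user_count=10, spawn_rate=10, wait=False)
    gevent.sleep(2)  # let the ramp-up finish

    runner.stop()
    gevent.sleep(1)  # give the runner time to stop everyone
    assert runner.user_count == 0  # fails: some users are still running
```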
Every time the test runs, there are 2 users left after stopping.
I think the problematic line is line 253 in runners.py:
if we change it to:
or:
the test starts passing.
Describe the bug
When there is more than 1 user class defined in the locust file, Locust does not stop users correctly - there are always 10%-20% of them left in the running state. I'm attaching the scenario that causes the problem - there is a background task that periodically checks the number of users and prints it to the console.
Expected behavior
All users stopped correctly
Actual behavior
Not all users stopped
Steps to reproduce
Run the attached scenario (containing 2 user classes).
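Roughly, the scenario looks like this (a sketch only - class names, host, and the reporting interval are placeholders, not the exact attachment):

```python
import gevent
from locust import HttpUser, constant, events, task


class WebsiteUser1(HttpUser):
    host = "http://localhost:8080"
    wait_time = constant(1)

    @task
    def index(self):
        self.client.get("/")


class WebsiteUser2(HttpUser):
    host = "http://localhost:8080"
    wait_time = constant(1)

    @task
    def index(self):
        self.client.get("/")


@events.init.add_listener
def on_locust_init(environment, **kwargs):
    # background task that periodically prints the current user count
    def report_user_count():
        while True:
            print(f"user count: {environment.runner.user_count}")
            gevent.sleep(5)

    gevent.spawn(report_user_count)
```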
Environment