
Add an option (--stop-timeout) to allow tasks to finish running their iteration before exiting #1099

Merged: 13 commits merged into master on Oct 26, 2019
Conversation

@cyberw (Collaborator) commented Sep 30, 2019

Solves #1049

@codecov (bot) commented Oct 2, 2019

Codecov Report

Merging #1099 into master will increase coverage by 0.58%.
The diff coverage is 45%.


@@            Coverage Diff             @@
##           master    #1099      +/-   ##
==========================================
+ Coverage   73.94%   74.52%   +0.58%     
==========================================
  Files          18       18              
  Lines        1769     1798      +29     
  Branches      262      269       +7     
==========================================
+ Hits         1308     1340      +32     
+ Misses        398      391       -7     
- Partials       63       67       +4
Impacted Files          Coverage Δ
locust/main.py          34.95% <100%> (+0.28%) ⬆️
locust/runners.py       61.68% <30.76%> (+2.18%) ⬆️
locust/core.py          86.02% <66.66%> (-0.14%) ⬇️
locust/rpc/zmqrpc.py    100% <0%> (ø) ⬆️
locust/stats.py         82.04% <0%> (+0.45%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 43e7487...6969a1b. Read the comment docs.

@heyman (Member) commented Oct 18, 2019

I think the feature makes sense. With the current implementation (if my understanding of it is correct), the runner would very often wait much longer than actually needed, since all the locust users that are currently sleeping would have to wake up, and even do an extra task execution, before raising GreenletExit.

I'm wondering if it would perhaps make sense to introduce a state property on the Locust class, which could be either running, waiting, or stopping (and could then replace exit_at_end_of_iteration)? Then the runner's stop method could immediately kill all Locusts that are in the waiting state, and then wait for the others to stop.

Also, I think the check for the stop condition should be done at the beginning of the TaskSet's main loop, because otherwise a sub-TaskSet calling interrupt() wouldn't cause the TaskSet to exit (instead, another task would be executed, since wait() isn't called by default after an interrupt).
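
To make that concrete, here is a rough standalone sketch of the idea. The state names and the Sketch* classes are illustrative assumptions, not Locust's actual internals:

    from gevent import GreenletExit

    # Proposed states (illustrative names, not existing Locust API):
    STATE_RUNNING = "running"
    STATE_WAITING = "waiting"
    STATE_STOPPING = "stopping"

    class SketchLocust(object):
        _state = STATE_RUNNING

    class SketchTaskSet(object):
        """Heavily simplified stand-in for locust.core.TaskSet."""

        def __init__(self, locust):
            self.locust = locust

        def run(self):
            while True:
                # Checking at the top of the main loop also covers a
                # sub-TaskSet calling interrupt(): control returns here
                # without ever passing through wait().
                if self.locust._state == STATE_STOPPING:
                    raise GreenletExit()
                self.execute_next_task()
                self.locust._state = STATE_WAITING
                self.wait()
                self.locust._state = STATE_RUNNING

        def execute_next_task(self):
            pass  # pick and run one task

        def wait(self):
            pass  # gevent.sleep(...) in the real TaskSet

With a state attribute like this, the runner's stop method could killone() every greenlet whose locust is in STATE_WAITING and simply flip the others to STATE_STOPPING.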

@cyberw (Collaborator, Author) commented Oct 22, 2019

@heyman Is there a way for me to get a reference to the actual locust instance (or task instance) from LocustRunner.stop()? I can get a reference to the class using self.locusts[i].args[0], but I can't check the state of the instance...

@heyman (Member) commented Oct 22, 2019

Hmm, from the look of the code:

locust/runners.py, lines 128–132 (at d6d87b4):

    def start_locust(_):
        try:
            locust().run(runner=self)
        except GreenletExit:
            pass

we don't seem to store a reference to the instance (locust()). So we would need to start tracking them. Maybe a dict that maps each locust instance to the greenlet running it (since we would need a reference to the greenlet to be able to kill specific locust instances), something like the sketch below.
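
For example (a sketch only; locust_greenlet_map is a made-up attribute name, and the fragment assumes it runs inside the runner's spawn loop):

    new_locust = locust()

    def start_locust(_):
        try:
            new_locust.run(runner=self)
        except GreenletExit:
            pass

    # Group.spawn() returns the greenlet, so we can remember which
    # greenlet runs which instance and kill specific locusts later:
    greenlet = self.locusts.spawn(start_locust)
    self.locust_greenlet_map[new_locust] = greenlet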

@cyberw (Collaborator, Author) commented Oct 23, 2019

Could we just spawn the greenlet with a Locust object instead of a locust class as argument? All the tests pass, and everything seems to work if I change those lines to:

    new_locust = locust()
    def start_locust(_):
        try:
            new_locust.run(runner=self)
        except GreenletExit:
            pass
    # Spawning with the instance (instead of the class) as the argument
    # makes it reachable later via the greenlet's args:
    self.locusts.spawn(start_locust, new_locust)

I understand that it is kind of a sneaky change, but it does seem to work, and to me it does make more sense to spawn a greenlet with a reference to an object rather than to a class anyway...

(This way I can access the locust object from LocustRunner.stop() easily.)

@heyman (Member) commented Oct 23, 2019

Could we just spawn the greenlet with a Locust object instead of a locust class as argument?

Ah, yes! That's much better. Just make sure to update the kill_locusts method accordingly :).

Commit: …e locust object from LocustRunner.stop(), I pass the locust object to spawn() instead of the locust class.
@cyberw (Collaborator, Author) commented Oct 24, 2019

@heyman Looks like it works now. I didn't see any need for modification in kill_locusts, but maybe I'm missing something?

Also I think the check for the stop condition should be done at the beginning of the TaskSet's main loop ...

I don't think that is actually necessary, now that we kill sleeping locusts. Either the locust is sleeping when stop is run, in which case it is killed immediately, or it is running, in which case we set exit_at_end_of_iteration and it is killed as soon as the iteration finishes (it will never get to the beginning of the main loop). See the sketch below.
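
Condensed, that stop flow looks roughly like this (a sketch of the approach, not the exact diff; locust_is_sleeping() is a hypothetical helper standing in for however the implementation detects a sleeping locust, and stop_timeout corresponds to the new --stop-timeout option):

    def stop(self):
        for g in self.locusts:
            locust = g.args[0]  # after the change above: an instance
            if locust_is_sleeping(locust):  # hypothetical helper
                # Sleeping locusts can be killed immediately.
                self.locusts.killone(g)
            else:
                # Running locusts get to finish their current iteration.
                locust.exit_at_end_of_iteration = True
        # Wait up to --stop-timeout seconds, then force-kill stragglers:
        self.locusts.join(timeout=self.stop_timeout)
        self.locusts.kill(block=True)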

@cyberw (Collaborator, Author) commented Oct 24, 2019

I don't understand the code coverage report. There are lots of diffs in there that shouldn't be there, and merging with master didn't make any difference...

@heyman (Member) commented Oct 24, 2019

I didnt see any need for modification in kill_locusts, but maybe I'm missing something?

It's definitely not super clear, but if I'm not mistaken a small change is needed.

Full function for reference:

locust/runners.py, lines 144–161 (at b1c347b):

    def kill_locusts(self, kill_count):
        """
        Kill a kill_count of weighted locusts from the Group() object in self.locusts
        """
        bucket = self.weight_locusts(kill_count)
        kill_count = len(bucket)
        self.num_clients -= kill_count
        logger.info("Killing %i locusts" % kill_count)
        dying = []
        for g in self.locusts:
            for l in bucket:
                if l == g.args[0]:
                    dying.append(g)
                    bucket.remove(l)
                    break
        for g in dying:
            self.locusts.killone(g)
        events.hatch_complete.fire(user_count=self.num_clients)

g.args[0] on this line:

    if l == g.args[0]:

currently refers to a Locust class, and this change:

    self.locusts.spawn(start_locust, new_locust)

means that g.args[0] will now be a Locust instance, so you'd have to change the line to:

    if l == type(g.args[0]):

@heyman (Member) commented Oct 24, 2019

I just pushed a test to master which I believe would fail for this PR. I think the fix should be the one-liner described in my previous comment, though. You should be able to rebase on/merge master and verify.

Also, I wouldn't mind renaming the variables in the spawn_locusts and kill_locusts methods: use locust_class instead of locust (when referring to a locust class), and locust instead of l (when referring to a locust instance). I do know those names don't come from this PR :).

I dont understand the code coverage report.

Me neither.

@heyman (Member) commented Oct 24, 2019

Also, this line shouldn't be needed (also not from this PR, but if you want you can remove it):

    bucket.remove(l)

@heyman (Member) commented Oct 24, 2019

Oops, my test had a Python 2.7 incompatibility 🤦‍♀. Fixed now!

@cyberw (Collaborator, Author) commented Oct 24, 2019

Also, this line shouldn't be needed (also not from this PR, but if you want you can remove it):

    bucket.remove(l)

I guess it was there to optimize the loop? But I agree it is not needed.

@heyman (Member) commented Oct 24, 2019

I guess it was there to optimize the loop? But I agree it is not needed.

Oh, yes, you're correct. I misinterpreted the code. It does make sense to keep it. (I missed that there was a nested loop. Sorry about that!)

@cyberw (Collaborator, Author) commented Oct 24, 2019

Force push ftw :)

@heyman (Member) commented Oct 24, 2019

in which case we set exit_at_end_of_iteration and it is killed as soon as the iteration finishes (it will never get to the beginning of the main loop)

Actually (unless I'm mistaken) if TaskSet.interrupt() is called with the default argument reschedule=True, the execution will end in the current TaskSet, and then a new task will be immediately executed in the parent TaskSet, and therefore the wait method will not be called.

That won't happen if the check is at the beginning of the main loop. Also I still like the idea of using a Locust._state attribute (that can be either STATE_RUNNING, STATE_WAITING or STATE_STOPPING) instead of the _waiting attribute. Then there is no need for the exit_at_end_of_iteration attribute.
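
For reference, a minimal (hypothetical) locustfile that triggers the path described above, using the 0.x-era API:

    from locust import HttpLocust, TaskSet, task

    class SubTasks(TaskSet):
        @task
        def leave(self):
            # With the default reschedule=True, the parent TaskSet runs
            # its next task immediately; wait() is never reached, so a
            # stop check tied only to the wait step would be skipped.
            self.interrupt()

    class MainTasks(TaskSet):
        tasks = [SubTasks]

        @task
        def noop(self):
            pass

    class MyLocust(HttpLocust):
        task_set = MainTasks
        min_wait = 1000
        max_wait = 2000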

Commit: …ing)

Commit: Added test case with use of TaskSet.interrupt(), made sure we actually terminate when that happens.
@cyberw (Collaborator, Author) commented Oct 25, 2019

@heyman LGTY? If you want, I can file a new PR in a new branch so that the changes get the right name (stop timeout instead of task finish wait time), and also squash the changes.

(Review comment on locust/main.py, marked resolved.)
heyman changed the title from "Add an option (--task-finish-wait-time) to allow tasks to finish running their iteration before exiting" to "Add an option (--stop-timeout) to allow tasks to finish running their iteration before exiting" on Oct 25, 2019.
@heyman (Member) commented Oct 25, 2019

It's nitpicky, but I think that if you place the

    if self.locust._state == LOCUST_STATE_STOPPING:

check immediately after this line:

    try:

you don't need to check it in two places.
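
In other words, roughly this (a paraphrase of the main loop's shape, not the exact code in the diff; RescheduleTask and RescheduleTaskImmediately are locust.exception classes):

    # Inside TaskSet.run():
    while True:
        try:
            # The single stop check, right after the try:
            if self.locust._state == LOCUST_STATE_STOPPING:
                raise GreenletExit()
            self.execute_next_task()
        except RescheduleTaskImmediately:
            pass  # loops straight back to the check above, no wait()
        except RescheduleTask:
            self.wait()
        else:
            self.wait()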

Other than that (and my suggested copy for the help text) I think it looks great! I don't mind merging this PR (I changed the title of it) :).

@cyberw (Collaborator, Author) commented Oct 26, 2019

@heyman Ready to merge, IMO :)
