Setup/teardown hooks #59
Comments
Hi! I'm really sorry that you haven't gotten a reply until now. There's currently no functionality that does exactly what you describe. When I've needed setup code for my Locust tests, I've put it at the module level of my test scripts, but that isn't run every time a test is started/stopped, and there's no teardown. I'm curious whether you have, or have had, a specific use case where this was needed (I'm sure such cases exist; I'd just like to hear about them).
In my case it's an application that needs to ingest test data (some of which are randomly selected) before running meaningful load tests, and then make sure they're gone afterwards to avoid bloating the data store. Setup can be a fairly lengthy process. Sure, you can call external scripts before and after test runs, but that's quite inconvenient, especially if the teardown phase needs to know things from the setup phase (e.g. generated IDs). And it just makes sense to have a self-contained test package instead of a bunch of different things glued together. (I'm no longer using Locust, but the lack of a good setup/teardown facility is one reason that made me switch to another solution.)
I too need this functionality - I am creating and reading files through a RESTful interface, and some clean-up between runs is needed. I figured `on_start` needed a companion, so I override `run`:

```python
from gevent import GreenletExit
from locust import TaskSet

class MyTaskSet(TaskSet):
    def run(self, *args, **kwargs):
        try:
            super(MyTaskSet, self).run(*args, **kwargs)
        except GreenletExit:
            # If the subclass defines an on_stop hook, call it before
            # letting the greenlet exit propagate.
            if hasattr(self, "on_stop"):
                self.on_stop()
            raise
```

Of course, the ideal solution would be to put my exception-handling code in the relevant place in `TaskSet.run` itself.
I'd also like this functionality -- in my case in order to create and then delete user accounts (using a single, dummy account isn't possible in my case). The user creation I'm doing in the `on_start` method.
For cleanup we've used the quitting event, which might also work for you if you don't mind quitting rather than just stopping. (Depending on how much cleanup you really need: if you just persistently track created things for the entire runtime, then cleanup on quit might be fine.) We do something like:

```python
import threading
import functools

QUIT_HANDLED = False
quit_lock = threading.Lock()

def _quit(client, delete_sessions):
    global QUIT_HANDLED
    if not QUIT_HANDLED:
        with quit_lock:
            if not QUIT_HANDLED:
                QUIT_HANDLED = True
                # ...cleanup code here

# ...actual code

class APIUser(Locust):
    task_set = APILikeTaskDistribution

    #          min  sec   ms
    min_wait =      30 * 1000
    avg_wait =  2 * 60 * 1000
    max_wait =  5 * 60 * 1000

    def __init__(self):
        super(APIUser, self).__init__()
        events.quitting += functools.partial(_quit, self.client, True)
```

But I agree, a more uniform/easy approach to teardown (that works on stop and not just quit) would be a nice feature.
Thanks -- I looked at the quitting event and will likely use it as you suggest. Nice to know that it works for someone else. That will work fine for the actual deployed version of the tests, since we'll shut down after the run. For development, having something at the test level would be more convenient (and there may be other cases where quitting won't work).
On a somewhat related note, does anyone have a technique for performing per-user (aka per-locust) work that must be done before the locust should be considered fully hatched? I tried putting this in the `on_start` method, without success.
Ideally I'd like to set the locusts off in groups of N, with an N-second pause between groups. The client count and spawn rate sound like they can do this, but they don't really. Instead each client is created and starts running (with no real difference between init work and task-running work), and there is an M-second pause between starting each client (where M is 1/spawn rate).
@sfitts: One slightly hacky solution to that would be to acquire a semaphore that you release on the locust.events.hatch_complete event, and wait for that semaphore when the locusts/tasksets start. Here's a working example:

```python
from locust import HttpLocust, TaskSet, task, events
from gevent.coros import Semaphore

# Acquired at import time; released once all locusts have hatched.
all_locusts_spawned = Semaphore()
all_locusts_spawned.acquire()

def on_hatch_complete(**kw):
    all_locusts_spawned.release()

events.hatch_complete += on_hatch_complete

class UserTasks(TaskSet):
    def on_start(self):
        # Block each locust until hatching is complete.
        all_locusts_spawned.wait()

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    host = "http://127.0.0.1:8089"
    min_wait = 2000
    max_wait = 5000
    task_set = UserTasks
```

One caveat though: if you're running Locust distributed, there's still a possibility for some requests to happen before all locusts have hatched. That's because there's no synchronisation of the hatch_complete events between the slaves, so if one machine is much slower for some reason, it might lag behind in spawning its locust instances. Also, since there is no event to listen for when the test stops, there's no easy way of re-acquiring the semaphore once the test has stopped. Since there's clearly a need for it, we should add starting and stopping events to the next release of Locust.
@heyman: Thanks for the suggestion and the time putting together the example. I'm not expecting any kind of distributed coordination, just need to throttle things on a local basis. So something along these lines should work well.
I'm also looking to support ingesting test data that can be referenced when executing a task. I have a Django app with various models/factories, and I'm planning on writing a script that will generate the models I need for the load test within the Django app. My plan is then to adjust the Locust runner to take an "initial_data" argument which can be referenced within the task. If a master was passed this information, it could also send it along to the slaves when sending the hatch event. Is there some other way I can do that currently? Does that seem like a reasonable extension to the current architecture?
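[Editor's note] The "initial_data" idea above amounts to partitioning pre-generated records among clients so no two clients operate on the same data. A neutral sketch of just the partitioning step — nothing here is Locust API, and `partition` is a hypothetical helper:

```python
def partition(initial_data, num_clients, client_index):
    """Give each client a disjoint shard of the pre-generated test data."""
    return [record for i, record in enumerate(initial_data)
            if i % num_clients == client_index]

# Example: 10 pre-generated records split among 3 clients.
data = list(range(10))
shards = [partition(data, 3, k) for k in range(3)]
```

A client index like the one this scheme needs is what the commit referenced later in this thread exposes; the master would then only need to send each slave its index and the shared data.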
This issue is pretty old and I'm looking for something along the lines of what's been discussed. Has there been any progress?

Any progress on this would be great.
The on_start() can be used as a setup, I guess? And I'm not sure whether events.quitting can be used to create a hook that acts as a teardown? It would be nice to have a dedicated teardown hook.
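[Editor's note] The `events.quitting += handler` registration used throughout this thread relies on Locust's event-hook objects. A minimal stand-in showing the mechanism — this is an illustrative sketch, not Locust's actual implementation:

```python
class EventHook(object):
    """Minimal event hook: handlers are added with `+=` and run by fire()."""
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self

    def fire(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

quitting = EventHook()
cleaned_up = []

def on_quitting(**kwargs):
    # Teardown logic goes here: delete test data, close connections, etc.
    cleaned_up.append(True)

quitting += on_quitting
# At shutdown, the framework would invoke: quitting.fire()
```

The limitation discussed above follows directly from this design: the hook only fires on quit, so a handler registered this way never runs when a test is merely stopped and restarted.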
+1 for an on_stop() feature. I have some custom websockets started on their own greenlets, and an on_stop handler would let me tear them down gracefully.

+1 for the on_stop() feature. I have some common teardown tasks to be executed.

Another +1 for on_stop! Would be immensely helpful.

Another +1 for on_stop()

+1000

+1

This was addressed in #658 and will be released in the next release of Locust!
Awesome :) I am currently testing a websocket-based title, and this will help a lot because stopping the test doesn't close the websockets.
Referenced in commit: expose client index to locust, so user can access it in the test file (locustio#59); update logger info.
It'd be extremely useful to have dedicated setup and teardown functionality in Locust (or if there is something like this already, to have it documented).
My rough idea would be:
Thoughts? (Have I missed something that already exists?)