Two test_stop events triggered when --run-time expires #1421

Closed
lolwhitaker opened this issue Jun 11, 2020 · 2 comments

lolwhitaker commented Jun 11, 2020

Describe the bug

The test_stop event triggers twice when running with the --run-time option.

Expected behavior

While stop() is invoked twice (once when the run time expires and again when the last worker quits), I would expect only one test_stop event to fire.

The current workaround is to handle the first invocation and ignore the second, but it would be helpful to have a single event that indicates test completion, so that actions like publishing test results can be performed without having to account for the event firing twice.

Actual behavior

The stop() method is invoked twice: once when the run time expires and again when the last worker quits. Both invocations trigger the test_stop listener. Since no context is passed along with these events, code must be written to ensure that any actions in the listener are only performed once.
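
Today something like a guard flag is needed, e.g. this minimal sketch (nothing is assumed about the event arguments; a module-level flag makes the listener idempotent):

    from locust import events

    _test_stop_handled = False

    @events.test_stop.add_listener
    def on_test_stop(**kwargs):
        global _test_stop_handled
        if _test_stop_handled:
            # second invocation (last worker quit) -- ignore it
            return
        _test_stop_handled = True
        # one-time actions go here, e.g. publishing test results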

Steps to reproduce

  1. Add a test_stop listener to the master script and log a message (e.g. "test is stopping") when it fires.
  2. Launch a distributed headless load test using the --run-time flag.
  3. Let the test complete cleanly and reach the run-time expiration.
  4. Review the log and notice that the "test is stopping" message is logged twice.

Running a test with --run-time 3600, here is the resulting log output:

[2020-06-10 23:15:31,170] master-i0fcb20ff0d09d6000/INFO/root: Waiting for workers to be ready, 0 of 5 connected
[2020-06-10 23:15:31,594] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202' reported as ready. Currently 1 clients ready to swarm.
[2020-06-10 23:15:31,595] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,596] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2' reported as ready. Currently 2 clients ready to swarm.
[2020-06-10 23:15:31,596] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,682] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980' reported as ready. Currently 3 clients ready to swarm.
[2020-06-10 23:15:31,684] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,687] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196' reported as ready. Currently 4 clients ready to swarm.
[2020-06-10 23:15:31,687] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,692] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6' reported as ready. Currently 5 clients ready to swarm.
[2020-06-10 23:15:31,693] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.runners: Sending hatch jobs of 10 users and 2.00 hatch rate to 5 ready clients
[2020-06-10 23:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.main: Run time limit set to 3600 seconds
[2020-06-10 23:15:32,172] master-i0fcb20ff0d09d6000/INFO/locust.main: Starting Locust 1.0.3
[2020-06-11 00:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.main: Time limit reached. Stopping Locust.
[2020-06-11 00:15:32,171] master-i0fcb20ff0d09d6000/INFO/root: test is stopping
[2020-06-11 00:15:32,174] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,177] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,178] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,183] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/locust.runners: The last worker quit, stopping test.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/root: test is stopping
[2020-06-11 00:15:32,673] master-i0fcb20ff0d09d6000/INFO/locust.main: Running teardowns...
[2020-06-11 00:15:33,174] master-i0fcb20ff0d09d6000/INFO/locust.main: Shutting down (exit code 0), bye.
[2020-06-11 00:15:33,174] master-i0fcb20ff0d09d6000/INFO/locust.main: Cleaning up runner...

Environment

  • OS: Ubuntu 16.04
  • Python version: 3.6.8
  • Locust version: 1.0.3
  • Locust command line that you ran:
python3 -m locust -f bin/locust_dummy_script.py  --headless --master --logfile locust.log -u 10 -r 2 --master-bind-host 127.0.0.1 --master-bind-port 5557 --web-host 127.0.0.1 --web-port 8089 --run-time 3600 --expect-workers 5
  • Locust file contents (anonymized if necessary):
     $ cat locust_dummy_script.py
     # coding: utf8
     from locust import User, TaskSet, task, events
     import logging

     import locust.stats
     locust.stats.CSV_STATS_INTERVAL_SEC = 30  # default is 2 seconds

     @events.test_stop.add_listener
     def on_test_stop(**kw):
         logging.info("test is stopping")

     class MyTaskSet(TaskSet):
         @task(20)
         def hello(self):
             pass

     class Dummy(User):
         tasks = [MyTaskSet]
    
lolwhitaker (Author)

My particular use case is that I am trying to publish the results (CSV files) of the test run somewhere else. I only want to archive them when/if the test job is complete. With the current test_stop behavior it is not obvious how to distinguish a stop event resulting from a failed run (workers quit, etc.) from a clean run (run time reached), or how to ensure the publishing is only done once.

There is a simple workaround: ignore any trigger of the test_stop listener after the first. Ideally, though, a little more detail would be passed into the stop event (perhaps a reason) so this could be done more simply, and only in the case of a successful run. Perhaps a new event for run-time expiration would be the simplest solution?

Let me know if I am just doing it wrong and there is a better pattern to follow to know when a distributed load test is complete and it is time to collect and publish the results.
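
For completeness, this is roughly what the workaround looks like for the CSV use case. The file name and destination below are hypothetical (they assume the run was started with --csv), and the guard flag is the same idea as in the issue description:

    import os
    import shutil
    from locust import events

    _results_published = False

    @events.test_stop.add_listener
    def publish_results(**kwargs):
        global _results_published
        if _results_published:
            return  # ignore the second test_stop trigger
        _results_published = True
        # hypothetical destination and file name, for illustration only
        os.makedirs("/tmp/results-archive", exist_ok=True)
        shutil.copy("example_stats.csv", "/tmp/results-archive/example_stats.csv")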

lolwhitaker (Author)

There is a better event available for my purposes that I apparently missed -- quitting.

https://github.com/locustio/locust/blob/master/docs/running-locust-without-web-ui.rst#controlling-the-exit-code-of-the-locust-process
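
Following the pattern in that doc, a listener along these lines fires only once, when the locust process is exiting, which makes it a reasonable place to archive the CSV result files (the 1% failure-ratio threshold is just an example):

    import logging
    from locust import events

    @events.quitting.add_listener
    def on_quitting(environment, **kwargs):
        # fires once when the process is about to exit;
        # archive/publish result files here
        if environment.stats.total.fail_ratio > 0.01:
            logging.error("Test failed due to failure ratio > 1%")
            environment.process_exit_code = 1
        else:
            environment.process_exit_code = 0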
