Describe the bug
The on_stop event triggers twice when running with the `--run-time` option.
Expected behavior
Although stop() is invoked twice (once for the run-time expiration and again when the last worker quits), I would expect only one on_stop event to trigger.
The current workaround is to handle the first invocation and ignore the second, but it would be helpful to have a single event that indicates test completion, so that actions such as publishing test results can be performed without having to account for the event firing twice.
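For reference, the guard looks something like this. This is a minimal sketch against Locust 1.x's `events.test_stop` hook; the `_already_stopped` flag is my own addition.

```python
from locust import events

_already_stopped = False  # module-level guard; our own addition

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    global _already_stopped
    if _already_stopped:
        return  # second invocation (last worker quitting); ignore it
    _already_stopped = True
    # ... perform once-only actions here, e.g. publish test results ...
```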
Actual behavior
The stop() method is invoked twice, once for the run-time expiration and again when the last worker quits, and both invocations trigger the on_stop listener. Because no context is included with these events, code must be written to ensure that any actions in the listener are performed only once.
Steps to reproduce
1. Add an on_stop listener to the master script and log a message when it fires, e.g. `test is stopping` (a minimal locustfile is sketched after this list).
2. Launch a distributed headless load test using the `--run-time` flag.
3. Let the test complete cleanly and reach the run-time expiration.
4. Review the logging and notice that the on_stop message is logged twice.
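A minimal locustfile for the reproduction might look like the following; the `MyUser` class, target host, and launch commands are illustrative placeholders, not taken from the original setup.

```python
# locustfile.py -- minimal repro sketch; user class and host are placeholders
import logging
from locust import HttpUser, task, between, events

@events.test_stop.add_listener
def on_test_stop(environment, **kwargs):
    logging.info("test is stopping")

class MyUser(HttpUser):
    wait_time = between(1, 2)
    host = "http://localhost:8080"  # placeholder target

    @task
    def index(self):
        self.client.get("/")

# Launched with something like:
#   master:  locust --master --headless -u 10 -r 2 --run-time 3600 --expect-workers 5
#   workers: locust --worker --master-host <master-address>
```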
Running a test with `--run-time 3600` produced the following log output:
```
[2020-06-10 23:15:31,170] master-i0fcb20ff0d09d6000/INFO/root: Waiting for workers to be ready, 0 of 5 connected
[2020-06-10 23:15:31,594] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202' reported as ready. Currently 1 clients ready to swarm.
[2020-06-10 23:15:31,595] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,596] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2' reported as ready. Currently 2 clients ready to swarm.
[2020-06-10 23:15:31,596] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,682] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980' reported as ready. Currently 3 clients ready to swarm.
[2020-06-10 23:15:31,684] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,687] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196' reported as ready. Currently 4 clients ready to swarm.
[2020-06-10 23:15:31,687] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:31,692] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6' reported as ready. Currently 5 clients ready to swarm.
[2020-06-10 23:15:31,693] master-i0fcb20ff0d09d6000/WARNING/locust.runners: Worker worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6 exceeded cpu threshold (will only log this once per worker)
[2020-06-10 23:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.runners: Sending hatch jobs of 10 users and 2.00 hatch rate to 5 ready clients
[2020-06-10 23:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.main: Run time limit set to 3600 seconds
[2020-06-10 23:15:32,172] master-i0fcb20ff0d09d6000/INFO/locust.main: Starting Locust 1.0.3
[2020-06-11 00:15:32,171] master-i0fcb20ff0d09d6000/INFO/locust.main: Time limit reached. Stopping Locust.
[2020-06-11 00:15:32,171] master-i0fcb20ff0d09d6000/INFO/root: test is stopping
[2020-06-11 00:15:32,174] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0b60627624f3c09a2_7c42adfd2885822af33a87ad70c6e1e2' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,177] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i067df301bf38422b1_3f58f5805cb10bbdf1287f2529576980' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,178] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i05ba8e9d6e9d3f051_22b221cf0d4a4caa191f3bc92c01eee6' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,183] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i01af57f1c1e685003_c07c6aeb08912f27c5c5740a59200196' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/locust.runners: Client 'worker-i0d546ad7c75f6145e_5cc723077cf6ee606ddd2cd2c78d3202' quit. Currently 0 clients connected.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/locust.runners: The last worker quit, stopping test.
[2020-06-11 00:15:32,194] master-i0fcb20ff0d09d6000/INFO/root: test is stopping
[2020-06-11 00:15:32,673] master-i0fcb20ff0d09d6000/INFO/locust.main: Running teardowns...
[2020-06-11 00:15:33,174] master-i0fcb20ff0d09d6000/INFO/locust.main: Shutting down (exit code 0), bye.
[2020-06-11 00:15:33,174] master-i0fcb20ff0d09d6000/INFO/locust.main: Cleaning up runner...
```
My particular use case is that I am trying to publish the results (CSV files) of the test run somewhere else. I only want to archive them when/if the test job is complete. With the current on_stop behavior it is not obvious how to distinguish a stop event resulting from a failed run (workers quitting, etc.) from a clean run (run time reached), or how to ensure the action is performed only once.
There is a simple workaround: ignore every trigger of the test_stop listener after the first. Ideally, though, a little more detail would be passed into the stop event (perhaps a reason), so that this could be done more simply and only in the case of a successful run. Perhaps a new event for run-time expiration would be the simplest solution?
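To illustrate the suggestion, here is a hypothetical signature along those lines; the `reason` keyword and its values do not exist in Locust today, this only sketches the request.

```python
from locust import events

@events.test_stop.add_listener
def on_test_stop(environment, reason=None, **kwargs):
    # "run_time_expired" is a hypothetical reason value for a clean stop
    if reason == "run_time_expired":
        publish_results()  # hypothetical helper that archives the CSV files
```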
Let me know if I am just doing it wrong and there is a better pattern to follow for knowing when a distributed load test is complete and it is time to collect and publish the results.