Depending on how others are deploying this tool from the command line it may be a non-issue, but it's causing problems for us. Returning a non-zero status (the same code used for misconfigurations) because of, say, one 500 response out of thousands is not granular enough.
By the time we exit in this case, the entire test has already completed and all requests were sent; any downstream metrics systems will have valid data. If we retry based on a non-zero exit code, we run the test twice. Users of the tool should probably be doing something more intelligent downstream to decide whether the test failed than checking for at least one error. So why exit with an error code?
Two initial ideas that could improve the behavior (either or both would work):
Use a return code that's unique to this kind of "failure, but not really", so it can be handled and ignored or logged with the appropriate severity
Add a new command line parameter that sets a failure threshold (a percentage compared against stats.fail_ratio and/or an absolute count compared against num_requests)
Is this bothering anyone else?
(ref)
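For illustration, here is a rough sketch of what the second idea could look like as a standalone decision. The --failure-threshold flag and the hard-coded request/failure counts below are hypothetical, not Locust's actual CLI or stats API:

```python
# Rough sketch of the threshold idea: derive the exit code from the observed
# failure ratio instead of from "any error at all". The --failure-threshold
# flag and the fail_ratio / num_requests numbers are illustrative only.
import argparse
import sys


def choose_exit_code(fail_ratio: float, threshold: float) -> int:
    """Return 0 if the failure ratio is within the tolerated threshold, else 1."""
    return 0 if fail_ratio <= threshold else 1


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--failure-threshold", type=float, default=0.0,
        help="maximum tolerated failure ratio, e.g. 0.05 for 5%%",
    )
    args = parser.parse_args()

    # Pretend these came from the finished test's aggregated stats.
    num_requests, num_failures = 10_000, 1
    fail_ratio = num_failures / num_requests

    # With --failure-threshold=0.05 this run exits 0; with the default of 0.0
    # it keeps today's strict behaviour.
    sys.exit(choose_exit_code(fail_ratio, args.failure_threshold))
```

A deploy script could then treat any non-zero exit as a hard failure without re-running a test that already completed.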
Hm, I agree. Maybe we could just make it so that one could set locust.runners.locust_runner.exit_code, and if it's set, it'll be returned when the program exits. If it's not set, it'll use the current behaviour?
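Roughly, the shutdown logic could then look something like the sketch below. The LocustRunner class is just a minimal stand-in for the real runner, and exit_code is the proposed attribute, which doesn't exist yet:

```python
# Sketch of the proposal: honour an explicitly set exit_code, otherwise keep
# the current "non-zero if there were any failures" behaviour.
# LocustRunner here is a stand-in, not the real locust.runners class.
import sys
from dataclasses import dataclass
from typing import Optional


@dataclass
class LocustRunner:
    errors: int = 0
    exit_code: Optional[int] = None  # proposed: settable from user code


def shutdown(runner: LocustRunner) -> None:
    if runner.exit_code is not None:
        # The user took responsibility for deciding success vs. failure.
        sys.exit(runner.exit_code)
    # Fall back to today's behaviour: any error means a non-zero exit.
    sys.exit(1 if runner.errors else 0)


if __name__ == "__main__":
    runner = LocustRunner(errors=3)
    runner.exit_code = 0  # e.g. user code decides the run still counts as a pass
    shutdown(runner)
```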
I wonder if something like --failure-threshold=0.05 as a command line arg would be a good way to handle this?
I'm not sure about a good name (naming is hard), but it seems better to add that as an opt-in feature than to change the default exit behavior.