Introduce total iterations-per-second limiting timers #1241
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1241      +/-   ##
==========================================
+ Coverage   78.49%   78.54%   +0.04%
==========================================
  Files          20       20
  Lines        2037     2037
  Branches      321      321
==========================================
+ Hits         1599     1600       +1
  Misses        361      361
+ Partials       77       76       -1
I'm currently not convinced it would be a good idea to incorporate this in Locust itself. My main objection is that it doesn't fit the mental model of Locust, where one declares the behaviour of Users and then swarms the system with X number of users. This change would make the simulated users' wait time depend on the execution time of other users. I'm also not a big fan of changes that work differently when running Locust distributed. I can see that there could be use-cases for this feature, but I'm not convinced they're common enough. And since this should function well living in a separate package, I'm currently -1 on including this feature in Locust itself. Would like to hear others' thoughts though.
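For context, the "mental model" referred to here is that a locustfile only declares what each simulated user does, while total load comes from how many users are spawned. A minimal sketch of that model (written against the post-1.0 locustfile style; the class, task, and URL are illustrative, not taken from this PR):

```python
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks, independently of
    # every other user; total throughput is an emergent property of how many
    # users are spawned, not something declared up front.
    wait_time = between(1, 5)

    @task
    def index(self):
        self.client.get("/")
```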
I kind of see what you mean, but I don't think this is a major break from that model.
I think I can fix this somehow (achieving initial IPS distribution between slaves, but not IPS checking, as that would be way harder).
I would like to mention that every major load testing tool I have ever tried implements this as a core feature (LoadRunner, NeoLoad, Artillery, K6, JMeter, Tsung, Yandex.Tank, etc.).
…rom locust-plugins)
…because that is what it really is.
The merge commit also moves MasterLocustRunner.target_user_count to be available in the base LocustRunner class.
I'm still skeptical about having this as a part of Locust itself, mainly because of the objections raised in my previous comment: it doesn't fit the mental model of Locust, it'll work differently distributed/non-distributed, and it'll introduce an "awareness"/dependency between simulated locust users.
I'd much rather have it live as a separate package, since I really can't see any drawback from that. We could point to the package from the relevant parts of the documentation.
I can move it to contrib if that makes more sense? It still relies on a few changes in core (particularly the check to see if the target IPS has not been reached), so implementing it completely separately from Locust would be hard. I can try to fix it so that distribution works predictably, but not the dependency/awareness part. Imho that is not a big issue though, and the simplicity of doing this kind of thing (communication between users/greenlets) is really one of the things that makes locust/gevent so good.
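To illustrate the kind of greenlet-level "communication between users" mentioned here, the following is a rough sketch, not the PR's implementation: TARGET_IPS, wait_for_slot, and RateLimitedUser are made-up names, and it assumes Locust's standard gevent-based execution model. Users in one worker process share a pacing schedule:

```python
import time

import gevent
from locust import HttpUser, task, constant

TARGET_IPS = 10.0                # assumed target iterations per second, per process
_next_slot = [time.monotonic()]  # shared by all user greenlets in this process


def wait_for_slot():
    # Reserve the next free execution slot and sleep until it arrives.
    # No locking is needed: gevent greenlets only switch at yield points,
    # and there is none between reading and updating _next_slot.
    slot = max(_next_slot[0], time.monotonic())
    _next_slot[0] = slot + 1.0 / TARGET_IPS
    gevent.sleep(max(0.0, slot - time.monotonic()))


class RateLimitedUser(HttpUser):
    wait_time = constant(0)  # pacing is handled by wait_for_slot() instead

    @task
    def limited_task(self):
        wait_for_slot()
        self.client.get("/")
```

Note that this only paces a single process; coordinating the rate across distributed workers is exactly the part being objected to above.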
Do you mean the code for emitting a warning message? Couldn't that be done in an event listener? I don't mind the introduction of target_user_count.
In a distributed environment, communication/awareness between simulated users isn't possible. But I do agree that the simplicity/hackability of Locust is one of the main USPs, and I think the fact that one can implement this kind of feature in just a few lines of code is a testament to that. However, I also want to keep it that way, which is why I think it's important to also say no to features a lot. Especially when the use-cases for them are rare (I don't believe I've seen any feature request for this before), they can be implemented in separate packages, and they don't align well with the mental model of simulated users. I do apologize that it makes me sound so negative, but I'm sure that Locust would have been in a much worse state today if we hadn't been pushing back on a lot of feature requests. This doesn't necessarily mean I'm right in this case though, and I'd still be interested to hear others' opinions.
I'm mainly thinking about the addition to core (the check that fails the run if the IPS target was missed).
This PR doesn't introduce target_user_count, it just moves it to the base runner class. But maybe that is what you meant?
Ok, we'll wait for someone else to weigh in then. The original request for this was #646 (although you could argue that it is partially solved by introducing per-locust pacing). I think it was the most upvoted ticket for Locust ever, and I bet some of those 33 upvotes were hoping for a global RPS rate :)
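For what it's worth, the "check in core that fails the run if the IPS target was missed" could today also be approximated from user code with an event listener. This is only a hedged sketch of that idea against the post-1.0 event API (TARGET_RPS and the listener name are made up, and total_rps measures requests rather than task iterations), not the change proposed in this PR:

```python
from locust import events

TARGET_RPS = 100  # assumed target; in this PR it would come from the ips setting


@events.quitting.add_listener
def check_throughput(environment, **kwargs):
    # Mark the run as failed (non-zero exit code) if the achieved
    # overall request rate fell short of the target.
    if environment.stats.total.total_rps < TARGET_RPS:
        environment.process_exit_code = 1
```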
Yes, IIRC the example for that request was an API client that used something like …
I do think that #646 has a very misleading title, which is why I proposed that we change it.
Well, instead of doing a global limit, you could still do a per-user limit by introducing the concept of a "next tick" per user. That would give an adaptive request rate per user, and by extension globally, but it would also eliminate the effect I have seen with constant_pacing where, if a response time is too long, the wait_time for the next request is 0 and you get a temporary spike in request rate, since a number of users may be experiencing the same issue and all send their next request immediately. Also, you wouldn't have to do global rate control, which is in general messy if you want to be able to scale freely with different slave/master approaches.
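A hedged sketch of what that "next tick" per-user pacing could look like as a custom wait_time (this is not part of Locust; next_tick_pacing and the _next_tick attribute are invented for illustration): each user keeps its own fixed tick grid, and an iteration that overruns its slot skips ahead to the next future tick instead of firing immediately, which avoids the burst described above.

```python
import time

from locust import HttpUser, task


def next_tick_pacing(interval):
    # Returns a wait_time callable that keeps each user on its own fixed
    # tick grid (t0, t0 + interval, t0 + 2*interval, ...). Overrun slots
    # are skipped rather than executed late, so a slow response never
    # collapses the wait to zero for many users at once.
    def wait_time(user):
        if not hasattr(user, "_next_tick"):
            user._next_tick = time.monotonic()
        now = time.monotonic()
        while user._next_tick <= now:
            user._next_tick += interval
        return user._next_tick - now
    return wait_time


class PacedUser(HttpUser):
    wait_time = next_tick_pacing(2.0)  # aim for one iteration every 2 s per user

    @task
    def my_task(self):
        self.client.get("/")
```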
To make Locust more powerful and easy to use, out-of-the-box functions are always good to have, even though plugins are easy to use. From that point of view, I think we could prioritize some functions and put them into the core package; limiting RPS could be one of them. We could also add paragraphs to the docs to let people know the best practices for using it.
It looks like the