Why the "RPS" generated by locust is much fewer than other performance testing tools ? #277
Comments
Hi, were you able to solve this issue? I'm facing the same problem.
Same here.
@jacexh why was this question closed?
@jonathannaguin http://docs.locust.io/en/latest/running-locust-distributed.html. A Locust instance runs on a single CPU core, so this comparison is unfair.
Ah, that makes sense! This should be part of a FAQ section :)
Run Locust distributed on several machines. |
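To make that concrete, here is a minimal sketch of a distributed run using the pre-1.0 flags that match the commands in this thread (newer releases renamed `--slave` to `--worker`); the locustfile name, class names, and master IP are placeholders, and the host/endpoint are taken from the original question:

```python
# api.py -- the same locustfile is given to every process.
#
# On the master machine:
#   locust -f api.py --master --no-web -c 80 -r 80
# On each slave process (start one per CPU core, on one or more machines):
#   locust -f api.py --slave --master-host=192.168.0.100
from locust import HttpLocust, TaskSet, task

class EchoTasks(TaskSet):
    @task
    def echo(self):
        self.client.get("/echo/hello")

class EchoUser(HttpLocust):
    host = "http://testurl:8000"  # base URL from the original question
    task_set = EchoTasks
    min_wait = 0
    max_wait = 0
```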
Why is this question closed? I don't understand. I'm facing the same issue in 2019.
Because the OP closed it.
Why do so many people say that because Locust runs in one thread it is reasonable for it to only reach about 300 RPS, and that if you want more you should run more slaves? Usually there is no CPU-consuming work in a Locust task: it just builds HTTP requests and sends them out. That is IO-bound, which is exactly what gevent/greenlet should handle very efficiently. So why just 300 RPS?! I remember we developed our own performance testing tool in Python about 10 years ago; it could reach about 2000 RPS easily. We didn't use gevent; instead, we used a reactor event loop. Anyway, I think it's a bit of a shame for Locust, as a load testing tool, to get just 300 RPS per core.
You should probably go back to using that tool if Locust does not meet your needs. You are also welcome to improve Locust and submit your changes in a PR.
Found a way to generate a fixed RPS number in Locust:
As someone pointed out in another PR, the root cause is that Locust uses Requests to send and receive HTTP, and Requests is not efficient. And I've noticed you guys are already building your own underlying HTTP handler. That's great.
This issue should be reopened. Sure, the title is a "general question", but it could say "Improve RPS to be as fast as other testing tools". You can say "too much work", but it's a valid issue. It's particularly problematic because you then go to your team and say "well, our app only does ~300 r/s, we need to spend time optimizing", even though the app can actually do 10k r/s. As I see it, and as others have pointed out, there are two ways to improve:
First of all, even a fairly low-end machine running Locust should be able to generate far more than 300 RPS. However, I'd recommend that you monitor the CPU usage of your load testing tool when you run your load tests, to make sure that it isn't your load generator that is the bottleneck. That is why we've added support for seeing slave/worker CPU usage under the Slaves tab ("Workers" in upcoming versions).
That wouldn't help because of the Python GIL. Use multiple slave/worker processes to make use of multiple cores.
Yes, that's why we've added FastHttpLocust (check out the docs for more info). It uses another HTTP client (actually extracted from nginx, and implemented in C) that is ~6x faster (IIRC).
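For later readers, a minimal sketch of what switching to the faster client looks like with the 0.x import path discussed here (later versions renamed it FastHttpUser); the class names are made up, and the host/endpoint are taken from the original question:

```python
from locust import TaskSet, task
from locust.contrib.fasthttp import FastHttpLocust

class EchoTasks(TaskSet):
    @task
    def echo(self):
        self.client.get("/echo/hello")

# Only the base class changes compared to HttpLocust; tasks keep
# using self.client as before.
class EchoUser(FastHttpLocust):
    host = "http://testurl:8000"
    task_set = EchoTasks
    min_wait = 0
    max_wait = 0
```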
After I wrote another load testing tool, ultron, I did not want to talk about this issue anymore.
Yes. The reason I mentioned it was to debunk the 300 RPS number that was mentioned by multiple people in this thread. Other than that, the benchmark is really a bit like comparing apples to oranges (though I guess it's useful if what you care about is maximizing the RPS on a single URL endpoint), since Locust is a framework for simulating user behaviour in Python code, while many of the tools compared were created just to max out RPS. Also, if the test had been done with a couple of machines, Locust would actually be ahead of most of them, since most of them can't be run distributed.
Cool, I wasn't aware of the fast HTTP client implementation in Locust when I wrote this, because I was on version 0.11.0. Will see how this plays out. BTW, I've been using the multi-machine setup for some time now, even if it's a bit cumbersome to set up.
There is an old article from k6 that showed Locust being very slow, but they had configured it with min_wait = 5000 and max_wait = 15000, I assume because this was used in an example in the Locust docs. They have an updated article now which is very comprehensive: https://k6.io/blog/comparing-best-open-source-load-testing-tools. For comparison, we have run tests at up to 30000 RPS.
Python/Locust will likely never match the raw speed of other tools like k6 or basic tools like wrk/ab, but the ease of the Python language and the MUCH easier horizontal scaling on k8s with the master/slave model more than make up for it, IMO.
Hi @heyman, while running load at 200 RPS, I am seeing drops in the RPS. Can you please let me know how we can resolve this through Locust? Please let me know how we can generate a constant RPS. Locust version: 0.13.3
Same issue
Use constant_pacing for a constant iteration time per user (https://docs.locust.io/en/stable/writing-a-locustfile.html#wait-time-attribute), or constant_total_ips from locust-plugins for a constant number of total iterations per second (experimental, https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/constant_total_ips_ex.py).
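A minimal sketch of the constant_pacing approach, using the 0.13-era API that matches the version reported above (class names and endpoint are made up). Each simulated user starts one iteration per second, so roughly 200 users yield ~200 RPS as long as responses return in under a second:

```python
from locust import HttpLocust, TaskSet, task, constant_pacing

class ApiTasks(TaskSet):
    @task
    def send(self):
        self.client.get("/echo/hello")

class ApiUser(HttpLocust):
    host = "http://testurl:8000"
    task_set = ApiTasks
    # Pace each user to one task iteration per second, independent of
    # how long the request itself takes (as long as it is under 1s).
    wait_time = constant_pacing(1)
```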
For the original question (why is my throughput so low), the best answers are here: https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps
FastHttpLocust results:

```
Name             # reqs    # fails  |  Avg  Min   Max  Median  |    req/s  failures/s
POST /api/send   262084   0(0.00%)  |   79    0  2485      84  |  3069.00        0.00
Aggregated       262084   0(0.00%)  |   79    0  2485      84  |  3069.00        0.00
```
Locust sends these headers automatically, which means compression is activated. In one of our tests this indeed meant lower RPS, but that was because we used too high a compression level, which was actually fixed by setting it to a lower level thanks to Locust. So this might be one of the reasons the RPS is different from testing with ab.
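If compression is the suspect when comparing against ab (which sends no Accept-Encoding header by default), one way to level the field is to request an uncompressed response explicitly. This is a hedged sketch, not something from the thread; "identity" is a standard Accept-Encoding value, and the host/endpoint come from the original question:

```python
from locust import HttpLocust, TaskSet, task

class PlainTasks(TaskSet):
    @task
    def echo(self):
        # Override the default Accept-Encoding so the server skips
        # compression, mimicking what ab sends.
        self.client.get("/echo/hello",
                        headers={"Accept-Encoding": "identity"})

class PlainUser(HttpLocust):
    host = "http://testurl:8000"
    task_set = PlainTasks
```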
I did load testing on an HTTP interface with several performance testing tools, and I found the "RPS" generated by Locust is much lower than the others.

ApacheBench

Command:

```
ab -n 1000 -c 80 http://testurl:8000/echo/hello
```

Benchmark:

JMeter

Set `Number of Threads` to 80 and `Loop Count` to 100, and got a `Throughput` of 270/sec.

Locust

Set `min_wait = 0` and `max_wait = 0` in the script file, and ran the locustfile with the command:

```
locust -f api.py --no-web -c 80 -r 80 -n 10000 --only-summary
```

Benchmark: