Inaccurate response time? #663
Comments
That definitely sounds strange. You should see response times very similar to ab and wrk.
Nope, gevent makes it non-blocking. Is this regardless of how many Locust users you simulate? Do you still see response times ~300 ms if you only simulate <100 users?
@heyman
Seems totally unusable.
Using more than 20 coroutines seems meaningless.
I also tested some real APIs. [Screenshots were attached for API1, API2, the settings on both the client and the target server, and a run with the --no-web option; they are not reproduced here.]
So gevent is buggy not just on Windows but on Mac and Linux now? (Maybe it's time to embrace asyncio and py35+ :P)
[Screenshots of the results were attached: locust vs. JMeter with 200 threads (avg 131 ms, 1508 rps); not reproduced here.]
There is nothing actionable in this ticket, closing.
Hello.
Are you setting min_wait and max_wait? They default to 1000, so a locust will always wait 1 second between each task if min_wait and max_wait are not declared.
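For context, with the Locust API from around the time of this thread (pre-1.0), that wait could be removed as in the following minimal sketch; the /ping endpoint is taken from the stats table further down, everything else is illustrative:

from locust import HttpLocust, TaskSet, task

class PingTasks(TaskSet):
    @task
    def ping(self):
        # timed request against the target host given on the command line
        self.client.get("/ping")

class PingUser(HttpLocust):
    task_set = PingTasks
    # the defaults are min_wait = max_wait = 1000 (milliseconds), so each
    # simulated user sleeps 1 s between tasks unless these are overridden
    min_wait = 0
    max_wait = 0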
Yes, I know; I have set them to different values, mostly between 1 and 5.

vegeta (similar results as for wrk and ab):

Bucket           #     %       Histogram
[0s, 20ms]       1091  24.24%  ##################
[20ms, 30ms]     2880  64.00%  ################################################
[30ms, 60ms]     489   10.87%  ########
[60ms, 90ms]     18    0.40%
[90ms, 120ms]    14    0.31%
[120ms, 150ms]   5     0.11%
[150ms, 180ms]   0     0.00%
[180ms, 250ms]   3     0.07%
[250ms, 300ms]   0     0.00%
[300ms, 500ms]   0     0.00%
[500ms, 600ms]   0     0.00%
[600ms, 700ms]   0     0.00%

locust, percentage of the requests completed within given times:

Name         # reqs   50%   66%   75%    80%    90%    95%    98%    99%    100%
-----------------------------------------------------------------------------------
GET /ping    7        630   880   1400   1400   1500   1500   1500   1500   1538
-----------------------------------------------------------------------------------
I tried benchmarking the Locust response times against enterprise tools. If anyone sees the same behavior, please comment, and if anyone has resolved this type of issue, let me know. Thanks in advance.
Hi @kishoregorapalli! There could be a number of reasons for this. Assuming you didn't run into issues regarding CPU usage (it shouldn't be a problem at such low load, but check your logs for CPU utilization warnings just in case), one thing that might differ between tools is how connections are handled (typically Locust users will reuse connections, and sometimes load balancers/firewalls cause issues with this). But the first thing you should do is ensure you have enough processes. The easiest thing you can do (assuming you are on a recent Locust version) is just to add more worker processes. Read more here: https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps If you still have issues, please file a new ticket with complete logs, locustfile, Locust version etc.
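One rough way to probe the connection-reuse point above (not from the thread; it assumes a current Locust version with the HttpUser API and that the target host is passed via --host) is to run the same endpoint while forcing the server to close the connection after every response, and compare the reported times against a normal run where connections are reused:

from locust import HttpUser, task, constant

class NoKeepAliveUser(HttpUser):
    # no think time, so reported response times are not inflated by waits
    wait_time = constant(0)

    @task
    def ping(self):
        # "Connection: close" asks the server to drop the connection after the
        # response, so each subsequent request has to open a new TCP connection
        # and its timing includes the handshake; compare this run against one
        # without the header, where the underlying requests session reuses
        # connections and only the request/response round trip is timed
        self.client.get("/ping", headers={"Connection": "close"},
                        name="/ping (no keep-alive)")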
I ran some benchmarks with locust, wrk and ab, and locust always reports an average response time around 300 ms, while the other two report 4 ms. Same server, same settings, same bandwidth. I assume the blocking-I/O nature of the requests lib has no effect on the response time, right?
I know locust isn't a benchmarking tool, but how can we trust its results if it can't even get (remotely) correct stats for a 100 B static page? For most APIs at most .com companies, this extra 300 ms is unacceptable.
locustfile:
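(The locustfile itself was attached in the original report but is not captured here.) As an independent sanity check of numbers like these, one could time the same endpoint directly with requests over a reused connection; a sketch, with a placeholder URL:

import time
import requests

URL = "http://localhost:8080/ping"  # placeholder; point this at the benchmarked host

with requests.Session() as session:
    session.get(URL)  # warm-up request so the connection is already established
    samples = []
    for _ in range(100):
        start = time.monotonic()
        session.get(URL)
        samples.append((time.monotonic() - start) * 1000.0)

samples.sort()
print("min %.1f ms, median %.1f ms, max %.1f ms"
      % (samples[0], samples[len(samples) // 2], samples[-1]))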