
Not able to achieve high RPS (3000 users, 20 workers, 32 vcpu-64 GB RAM) #2154

Closed
prabhatsharma opened this issue Aug 8, 2022 · 8 comments

@prabhatsharma

Describe the bug

I tried many things to get higher RPS but am unable to go any higher than 750 requests per second.

My details

  • 20 workers. Even with 10 workers, RPS is similar.
  • 3000 users. Even with 1000 users, RPS is similar.
  • Spawn rate of 20.
  • The app being tested is running on a c7g.8xlarge with 32 vCPUs and 64 GB RAM.
  • Locust runners are running on different nodes than the app server being tested.
  • I am using FastHttpUser.

Expected behavior

I should be able to achieve higher RPS with higher number of workers and users.

Actual behavior

Getting limited to ~750 RPS

Steps to reproduce

I am running the setup in a Kubernetes cluster. You can see my setup details at https://github.com/zinclabs/artist/tree/main/k8s

Environment

I have added 2 test reports as well for better understanding.

@eldaduzman

I'm not sure why you expect higher load when you add more workers but keep the same total number of users.

What pops out to me is the high response times (3.5 seconds median). If Locust waits for the response, then a virtual user won't send the next request before the current one has completed, so that might be part of the issue.

@cyberw
Collaborator

cyberw commented Aug 8, 2022

This looks like a problem with the system you are testing; see point 2 (or 1) here:

https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps

@cyberw cyberw closed this as completed Aug 8, 2022
@prabhatsharma
Author

I'm not sure why you expect higher load when you add more workers but keep the same total number of users.

Not all the tests that I have done here are with the same number of users. I have mentioned earlier that I have tried with 500, 1000, and 3000 users.

What pops out to me is the high response times (3.5 seconds median). If Locust waits for the response, then a virtual user won't send the next request before the current one has completed, so that might be part of the issue.

This sounds more plausible, as 95th-percentile response times reach 8-10 seconds. What I am not able to understand is this: if I have 3000 users, each user can send at least one request, which should get me to at least 3000 RPS. Is that understanding wrong? It obviously isn't working that way, so it must work some other way. How do we explain that?
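As a back-of-the-envelope check (hypothetical numbers: 3000 users with an assumed ~4 s average response time, roughly between the reported 3.5 s median and 8-10 s p95), a closed-loop model predicts almost exactly the observed ceiling:

```python
# In a closed loop each user waits for its response before sending the
# next request, so throughput is capped at users / avg_response_time.
# The numbers here are illustrative, taken from the thread.

users = 3000
avg_response_time_s = 4.0  # assumed average; median was 3.5 s, p95 8-10 s

max_rps = users / avg_response_time_s
print(max_rps)  # 750.0 -- matches the ~750 RPS ceiling reported above
```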

This looks like a problem with the system you are testing; see point 2 (or 1) here:

https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps

I have read this FAQ.

point 1

Increase the number of users. In order to fully utilize your target system you may need a lot of simultaneous users, especially if each request takes a long time to complete.

I have tried with 500, 1000, 3000 users. RPS does not change much.

point 2

resource utilization (e.g. CPU, memory & network. Check these metrics on the locust side as well)
configuration (e.g. max threads for your web server)
back end response times (e.g. DB)
client side DNS performance/flood protection (Locust will normally make at least one DNS Request per User)

  • Resource utilization on both the Locust and target systems is low, with more CPU and memory available.
  • The app is written in Go and uses goroutines, so server-side concurrency is not an issue.
  • Response times are up to 10 seconds. Is that a problem? I know it's far from ideal, but will that prevent Locust from firing other requests?
  • DNS - will need to check.

So the most plausible one is

  • Response times are up to 10 seconds.

Any reason this could be an issue?

@eldaduzman

Let's take a hypothetical scenario.

You have 100 users, response time is 1 second.

This means that each user sends a request every 1 second.

In total that's 100 requests per second.

Now let's say you have 200 users but response time went up to 2 seconds.

Now each user sends 0.5 requests per second, and your total RPS is still 100.

Locust, like any other tool, has no way around this.
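The two scenarios above can be sketched in a few lines (plain Python, not Locust code), assuming zero wait time between requests:

```python
# Closed-loop throughput: with no wait time, each user completes exactly
# one request per response_time, so RPS = users / response_time.

def closed_loop_rps(users: int, response_time_s: float) -> float:
    return users / response_time_s

print(closed_loop_rps(100, 1.0))  # 100.0 RPS
print(closed_loop_rps(200, 2.0))  # 100.0 RPS -- doubling users did not help
```

Adding users only raises RPS if response times stay flat; once the target system saturates and latency grows proportionally, throughput plateaus.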

@prabhatsharma
Author

Ah, thanks. I should have thought of this. That explains the low RPS.

You are good at explaining.

@lzt007

lzt007 commented Oct 25, 2023


According to this reply, does RPS in Locust mean how many requests were finished per second rather than how many requests were made per second?

@eldaduzman


I'm not sure; I think Locust counts the number of requests generated, but it shouldn't make much difference, since a Locust greenlet can only generate a new request after the previous one has completed.

@cyberw
Copy link
Collaborator

cyberw commented Oct 25, 2023

Requests are counted when they finish, but as you said, there is not much difference between the number of requests "finished" and the number "made", because a User cannot (under normal circumstances, anyway) generate a new request until the previous one has finished.
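A toy discrete-time simulation (plain Python, not Locust internals; the function name and tick model are illustrative) makes this concrete: because a user starts request N+1 only when request N finishes, the "started" and "finished" counters can never diverge by more than the number of users.

```python
# Toy closed-loop simulation: each user starts a new request only after
# its previous one finishes (one in-flight request per user at a time).

def simulate(users: int, response_time: int, ticks: int):
    started = finished = 0
    free_at = [0] * users          # tick at which each user becomes free
    for t in range(ticks):
        for u in range(users):
            if free_at[u] == t:    # previous request (if any) just finished
                if t > 0:
                    finished += 1
                started += 1       # immediately fire the next request
                free_at[u] = t + response_time
    return started, finished

s, f = simulate(users=100, response_time=2, ticks=10)
print(s, f, s - f)  # "started" exceeds "finished" by exactly the user count
```

So whichever event Locust counts, the measured rate converges to the same value over any test run much longer than a single response time.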
