[Question] How can I control the speed of sending requests? #472
Comments
The idea with Locust is that you implement user behaviour in code, and then you choose how many users you want to simulate. Therefore there isn't a way of saying "I want to start my test with X requests/second".
@heyman Does that mean every task will be executed once per user per second?
If you set both min_wait and max_wait to 1000, the wait time between the execution of two tasks will be 1 second, for each user. If you set min_wait and max_wait to 10000 and 40000, the average wait time between tasks, for each user, will be 25 seconds.
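For concreteness, here is a minimal locustfile sketch of that setup, assuming the pre-1.0 Locust API (HttpLocust, TaskSet, and millisecond min_wait/max_wait) that this thread uses; the /index endpoint is just a placeholder:

```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        # Each simulated user runs a task, then sleeps for a randomly chosen
        # wait time between min_wait and max_wait before its next task.
        self.client.get("/index")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # milliseconds
    max_wait = 1000  # min_wait == max_wait, so exactly 1 second between tasks
```

With min_wait = 10000 and max_wait = 40000 instead, each user would wait a uniformly random 10 to 40 seconds between tasks, which averages out to the 25 seconds mentioned above.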
Understood. How can I compare Locust with JMeter? May I compare the time it takes each of them to send the same number of requests?
@heyman It seems Locust uses gevent to send requests, and gevent cannot make use of multiple CPU cores (e.g. a 4-core CPU). Am I right? In my test, Locust took more time than JMeter with a large number of requests. If Locust could solve this multi-core problem, I think it would be more powerful than other load testing tools such as JMeter.
Yeah, to utilize multiple cores you should run Locust distributed (see: http://docs.locust.io/en/latest/running-locust-distributed.html).
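For reference, a distributed run with the Locust version current at the time of this thread looks roughly like the following; the locustfile name and master IP are placeholders, and in Locust 1.0+ the --slave flag was renamed --worker:

```
# On the coordinating machine (serves the web UI and aggregates stats):
locust -f locustfile.py --master

# On each additional process or machine, typically one per CPU core:
locust -f locustfile.py --slave --master-host=192.168.0.10
```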
@heyman Can I set min_wait and max_wait to 0? If so, will the time between task executions also be 0?
@yisake Yes.
Many thanks!
@heyman I've done a test with Locust and JMeter. In my test I simulated 800 users and captured the requests with Wireshark, as shown below. Locust achieved 530 requests/second, while JMeter showed 824.3 KB/sec. Judging from those numbers, JMeter is faster than Locust. Why did this happen?
It doesn't make sense to compare RPS to KB/s. However, I can see a number of reasons why you might achieve higher throughput with JMeter in your setup. JMeter uses Java threads for its "users", which lets it utilize all CPU cores (while using a lot more memory). To utilize multiple CPU cores with Locust, you must run it distributed with one master and multiple slave processes. Overall, Locust and JMeter take quite different approaches to load testing, and you should use the one that suits your needs best. If you want a framework for defining real user behaviour in code, and then simulating very large numbers of such users, I think Locust would be a good choice. If you just want to achieve a high RPS against a few URL endpoints, you're probably better off with JMeter or even Apache Bench.
package != packet != request
@cgoldberg I've filtered with http && tcp.dstport == 8000 (my server port is 8000), so the filtered result shows the rate at which the client sends requests.
@cgoldberg If the simulated user count is 800, what should the packet rate be, in your opinion?
@heyman @cgoldberg So my conclusion is that there's a limit to the load a single core can generate, and that with multiple CPU cores the packet rate for Locust will increase linearly. Right?
@yisake I really don't understand your questions or your use of English, sorry.
@cgoldberg I'm sorry, my English is not good. My question is: why is the packet rate different between 1 CPU core and 2 CPU cores (1 slave vs. 2 slaves) with the same simulated user count? The RPS with 2 slaves (1096 req/s) is higher than with 1 slave (647 req/s).
@cgoldberg How can I target different hosts (http://www.google.com/index and http://www.baidu.com/index) in one locustfile?
You can specify a full URL (with hostname) when you make requests using the Locust client. Please don't use the Locust GitHub issues unless you have a real indication that there is actually an issue with Locust. This isn't the right forum for pure support questions, especially questions that have clear answers in the documentation.
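A minimal sketch of what that can look like, again assuming the pre-1.0 API; the two URLs are the ones from the question, and the name= labels are just illustrative groupings for the stats page:

```python
from locust import HttpLocust, TaskSet, task

class MultiHostBehavior(TaskSet):
    @task
    def google_index(self):
        # Passing an absolute URL bypasses the configured base host.
        self.client.get("http://www.google.com/index", name="google /index")

    @task
    def baidu_index(self):
        self.client.get("http://www.baidu.com/index", name="baidu /index")

class MultiHostUser(HttpLocust):
    task_set = MultiHostBehavior
    # Locust still expects a base host (here or via --host); the absolute
    # URLs in the tasks above are used as-is regardless of this value.
    host = "http://www.google.com"
    min_wait = 1000
    max_wait = 1000
```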
@heyman Sorry for that. Where can I ask for support, by mail or some other channel? Do you have any suggestion?
@yisake You could use StackOverflow. Personally, I don't really have time to answer pure support requests.
@yisake Thanks for your hard work, buddy.
For 1000 simulated users, the request rate is not 1000 requests/second.
So how can I control the speed?