Requests per second metric per step in a user flow #2040
You could already do most of this with a script like this:

```js
import http from 'k6/http';

export let options = {
  scenarios: {
    test_counters: {
      executor: 'constant-arrival-rate',
      rate: 5,
      duration: '30s',
      preAllocatedVUs: 150,
    },
  },
  thresholds: {
    // Hack to surface these sub-metrics (https://github.com/k6io/docs/issues/205)
    'http_req_duration{my_tag:test1}': ['max>=0'],
    'http_req_duration{my_tag:test2}': ['max>=0'],
    'http_req_duration{my_tag:test3}': ['max>=0'],
    'http_reqs{my_tag:test1}': ['count>=0'],
    'http_reqs{my_tag:test2}': ['count>=0'],
    'http_reqs{my_tag:test3}': ['count>=0'],
  },
}

export default function () {
  http.get('https://httpbin.test.k6.io/delay/1', { tags: { my_tag: 'test1' } });
  http.get('https://httpbin.test.k6.io/delay/1', { tags: { my_tag: 'test1' } });
  http.get('https://httpbin.test.k6.io/delay/2', { tags: { my_tag: 'test2' } });
  http.get('https://httpbin.test.k6.io/delay/1', { tags: { my_tag: 'test3' } });
}
```

It will surface the tagged sub-metrics in the end-of-test summary.
But notice how those per-tag rates are calculated, so I'm not sure if this will give you a lot of valuable information 🤷‍♂️ And I have no idea how you'd even calculate something like a per-step requests-per-second metric in a meaningful way.
@na-- thank you very much for your quick response and explanation.
Yeah, that is what I meant by "not meaningful data".

That means that...

... I'm not sure I got it. In fact, I think I don't understand how k6 works at all 😮 But well, I'll try.

Maybe I'm a little bit naive, but in my view rps isn't exactly a counter, but rather a 1-second window. Maybe again very naive, but I can think of two models for computing it.

In both models I see a bunch of numbers, which represent rps at different points in time. For example, if I got 10 rps numbers like 1, 2, 3, 4, 4, 4, 4, 5, 6, 7 from the first model, then I could aggregate them somehow. What do you think about it? Does it make sense at all, or am I on the wrong path? Maybe I'm just biased by ab's output.
I'll start backwards 😅
This is the crux. k6 itself doesn't operate in 1-second time windows. The "time window" used in the end-of-test summary when k6 calculates the requests per second is... the whole test run duration 😅 So, when you see a rate like `29.437014/s` in the summary, that is just the total count divided by the whole test duration.

When users need something else (e.g. bucketing request counts in 1-second time windows or charting things over time), k6 allows you to export the raw metrics it measured to an external output like InfluxDB or a JSON/CSV file. You'd get the timestamped measurements for every HTTP request, iteration, custom metric, etc. in your test, and you can process them however you like.

I've briefly touched on time windows for metrics internally in k6 when it comes to setting thresholds in #1136, but I can see how it also makes some sense to support them for the end-of-test summary. Mind you, there would still be some gotchas if you want to measure requests per second in 1s intervals, but I can see why it might be useful in some cases.
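To make the JSON/CSV export route more concrete, here is a minimal post-processing sketch. It assumes the test above was run with `k6 run --out json=results.json script.js`; the file name, the `my_tag` tag (taken from the earlier example), and the 1-second bucketing are all just illustration, not an official k6 tool:

```js
// bucket_rps.js - run with `node bucket_rps.js` after the k6 test.
// Reads k6's NDJSON metrics output (one JSON object per line) and counts
// http_reqs samples per 1-second window, split by the my_tag tag.
const fs = require('fs');
const readline = require('readline');

async function main() {
  const buckets = new Map(); // "second|tag" -> request count

  const rl = readline.createInterface({
    input: fs.createReadStream('results.json'),
    crlfDelay: Infinity,
  });

  for await (const line of rl) {
    if (!line.trim()) continue;
    const sample = JSON.parse(line);
    // Only individual http_reqs measurements ("Point" entries) are counted.
    if (sample.type !== 'Point' || sample.metric !== 'http_reqs') continue;

    const second = sample.data.time.slice(0, 19); // truncate timestamp to whole seconds
    const tag = (sample.data.tags && sample.data.tags.my_tag) || 'untagged';
    const key = `${second}|${tag}`;
    buckets.set(key, (buckets.get(key) || 0) + sample.data.value);
  }

  for (const [key, count] of [...buckets.entries()].sort()) {
    console.log(`${key}: ${count} req/s`);
  }
}

main();
```

Whether averaging or otherwise aggregating those per-second numbers afterwards is meaningful is, of course, exactly the open question in this thread.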
Neither of these statements is true 🙂 Let me backtrack a little bit... The 3 important concepts in k6 you need to know are VUs (virtual users), iterations, and executors.
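To make those three concepts a bit more concrete, here is a minimal sketch of where each one shows up in a script (the scenario name, numbers, and URL are placeholders, not a recommendation):

```js
import http from 'k6/http';

export let options = {
  scenarios: {
    my_scenario: {
      // The executor decides how iterations are scheduled onto VUs over time.
      executor: 'constant-vus',
      // VUs are the parallel "virtual users" that run the exported function.
      vus: 10,
      // Each VU keeps starting new iterations for this long.
      duration: '30s',
    },
  },
};

// One call of this function is one iteration.
export default function () {
  http.get('https://test-api.k6.io/');
}
```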
Additionally, because k6 doesn't have per-VU event loops yet (#882), most of the JS code in an iteration is executed synchronously. For example, if you have:

```js
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  http.get("https://test-api.k6.io/");
  http.get("https://test.k6.io/");
  sleep(0.1);
};
```

Every VU executing this iteration will first make a request to https://test-api.k6.io/, wait for the response, then make a request to https://test.k6.io/, wait for that response, and only then sleep for 0.1s before starting its next iteration. The only exceptions to that currently are the few APIs that don't execute strictly one-after-the-other, for example `http.batch()`, which issues several requests in parallel.

So, if you have a long HTTP request followed by a short HTTP request, the RPS of the second can't really be higher than the rate per second of the first. It is basically coordinated omission, and the only way to avoid it is to have the requests be executed completely independently from each other, with an arrival-rate executor and in different scenarios. Even then, there would still be some gotchas.
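For illustration, a minimal sketch of that independent setup (the rates, durations, and endpoints here are arbitrary, and the script is hypothetical, not from this thread):

```js
import http from 'k6/http';

export let options = {
  scenarios: {
    // Each endpoint gets its own arrival-rate scenario, so the request rate
    // of one is not throttled by the response times of the other.
    long_requests: {
      executor: 'constant-arrival-rate',
      rate: 10, // 10 new iterations (requests) per second
      timeUnit: '1s',
      duration: '30s',
      preAllocatedVUs: 50,
      exec: 'longRequest',
    },
    short_requests: {
      executor: 'constant-arrival-rate',
      rate: 50,
      timeUnit: '1s',
      duration: '30s',
      preAllocatedVUs: 20,
      exec: 'shortRequest',
    },
  },
};

export function longRequest() {
  http.get('https://httpbin.test.k6.io/delay/2');
}

export function shortRequest() {
  http.get('https://httpbin.test.k6.io/delay/0.1');
}
```

Because each scenario schedules iterations at its own arrival rate, a slow response in one scenario no longer delays the requests of the other.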
That matches exactly my expectations!
Thanks, great, then I understood most of the things right.
I see the point. The second request is "bottle-necked" by the first request.

Let us ignore k6 internals for a moment and make some assumptions. The use case I see is to have a user flow (= a bunch of synchronous requests). While the requests are being fired, I want to measure not only latencies, but also the rps of every involved endpoint. In fact, I can do that with k6 already now, for example by running each step as its own separate test.

If we assume that we can synchronize the above steps among VUs (= a "barrier"), then we can get it in one test file. Of course there are some other bits like "does an iteration belong to the steps or to the whole script" and other gotchas. Now, with all assumptions off: it seems like this is not possible in k6 yet, because all VUs/workers are independent. Do you think it makes sense for k6 to support this flow? If not, well, feel free to close this FR. Thank you, @na--!
As you've seen, k6 doesn't have file writing capabilities outside of the end-of-test summary handling.

Regarding having everything in a single file, that seems somewhat possible to do currently with multiple scenarios. Scenarios don't need to be concurrent; they can be made sequential with the `startTime` option:

```js
import http from 'k6/http';

// TODO: actually use something like this:
// const users = JSON.parse(open('users.json'));
// and maybe a SharedArray, if the file is too big
const users = [
  { username: 'test1', password: 'pass1' },
  { username: 'test2', password: 'pass2' },
  { username: 'test3', password: 'pass3' },
  { username: 'test4', password: 'pass4' },
];

export let options = {
  scenarios: {
    login: {
      executor: 'per-vu-iterations',
      vus: users.length, // have exactly as many VUs as users
      iterations: 1,
      maxDuration: '15s',
      gracefulStop: '0s',
      exec: 'login',
    },
    update_profile: {
      executor: 'per-vu-iterations',
      vus: users.length, // have exactly as many VUs as users
      iterations: 1,
      startTime: '15s', // at least the (maxDuration+gracefulStop) of login
      maxDuration: '15s',
      gracefulStop: '0s',
      exec: 'updateProfile',
    },
  },
  thresholds: {
    // Hack to surface these sub-metrics (https://github.com/k6io/docs/issues/205)
    'http_req_duration{scenario:login}': ['max>=0'],
    'http_req_duration{scenario:update_profile}': ['max>=0'],
    'http_reqs{scenario:login}': ['count>=0'],
    'http_reqs{scenario:update_profile}': ['count>=0'],
  },
}

export function login() {
  let credentials = users[__VU - 1];
  console.log(`User ${__VU} logging in with ${JSON.stringify(credentials)}...`);
  // TODO: actually implement login logic, this is only for demonstration
  let randomDelay = Math.random().toFixed(2);
  http.get(`https://httpbin.test.k6.io/delay/${randomDelay}?username=${credentials.username}`);
  let bogusToken = `vu${__VU}_${randomDelay}`;
  credentials.token = bogusToken;
}

export function updateProfile() {
  let credentials = users[__VU - 1];
  console.log(`User ${__VU} updating their profile ${JSON.stringify(credentials)}...`);
  // TODO: actually implement update profile logic, this is only for demonstration
  let randomDelay = (3 * Math.random()).toFixed(2); // longer requests
  http.get(`https://httpbin.test.k6.io/delay/${randomDelay}?token=${credentials.token}`);
}
```

This has some other rough edges, though.
Feature Description
I like the output from `ab` and I have a hard time understanding the metrics from k6. Especially, I miss an rps metric when I want to test a user flow. For example, my user flow consists of 5 steps. I managed to get more sub-metrics, which show latencies for every step. Great!

If I look at "iterations [...] 29.437014/s", then my assumption is that the system can handle 29.4 registrations per second, meaning all five steps together. Probably it would be the rps metric if there were only one step, right? If I use the same technique for `iterations` or `http_reqs`, then k6 either doesn't print them or prints data that isn't meaningful.

Latencies are great and give very useful insights, but from my point of view they are not enough to understand the capabilities of a system. For example, a latency of 200ms is great at 5000 rps, but bad at 5 rps.
Suggested Solution (optional)
Probably it would be nice to have the same behavior for `http_reqs` (or `iterations`?) as for `http_req_duration`, i.e. per-tag sub-metrics in the summary. I looked at custom metrics, but didn't find a way to achieve it with them.
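For illustration, a minimal sketch of the custom-metrics attempt (the metric names and endpoints are placeholders): a Counter per step does show up in the summary with a count and a rate, but that rate is still averaged over the whole test duration, as discussed above.

```js
import http from 'k6/http';
import { Counter } from 'k6/metrics';

// One custom counter per step of the user flow (names chosen arbitrarily).
const step1Requests = new Counter('step1_reqs');
const step2Requests = new Counter('step2_reqs');

export default function () {
  http.get('https://httpbin.test.k6.io/delay/1');
  step1Requests.add(1); // count one request for step 1

  http.get('https://httpbin.test.k6.io/delay/2');
  step2Requests.add(1); // count one request for step 2
}
```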