support for varying requests in benchmarks #100
Comments
This seems totally reasonable. Currently, before the benchmark, yab does:

- an initial request to verify the call succeeds, and
- a set number of warmup requests.

If we had multiple requests, would we want to loop over all the requests initially? Or should we just pick a random one, then rotate through the specified requests only during warmup and benchmarks? Are there any other parameters we'd want to tweak on a per-request basis?
Timeouts would need to be tweaked on a per-request basis, at least for the use case I'm currently targeting. I think a verification request to each endpoint is definitely useful. How quickly is the warmup done? In my current use case I have endpoints which range (normalizing traffic to 100 rps) from 40 rps to 0.3 rps. For the 0.3 rps endpoint, the warmup requests need to either be very slow or non-existent, but on the other end a ramp-up is definitely useful. Maybe this value needs to be tunable per endpoint.
Hmm, that level of per-request customization seems like it would complicate yab a lot. Right now, the warmup just makes a set number of requests (customizable via a flag). If there are two different endpoints with such different behaviours, it seems like the user should run a separate yab instance for each (something like the sketch below). Thoughts?
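Roughly like this (a hedged sketch: the flags shown here, particularly --rps, --warmup, and -n, are going from memory rather than the docs, and the service/endpoint names and bodies are made up):

```sh
# Fast endpoint: higher rate, default warmup.
yab -p localhost:8080 --rps 40 -n 10000 myservice FastEndpoint -r '{"key": "fast"}' &

# Slow endpoint: very low rate, warmup disabled so it isn't hammered up front.
yab -p localhost:8080 --rps 1 --warmup 0 -n 100 myservice SlowEndpoint -r '{"key": "slow"}' &

wait
```

Running them concurrently keeps the overall load shape while reporting results per endpoint.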
For posterity's sake, a summary of the offline conversation we just had: you are suggesting that I run multiple yabs concurrently so that I maintain this load distribution but get performance results on a per-endpoint basis. This reduces the complexity of configuring yab and avoids having to change the aggregation behaviour currently in yab. The other concrete use case for multiple requests in the yab config would be multiple request bodies sent to the same (or very similarly performing) endpoints. This would enable you, for example, to round-robin through a number of input parameters in an attempt to mitigate inaccurate results due to caching.
Right now we only take one body and duplicate it for every request within a benchmark. @ZymoticB pointed out that it's sometimes useful to benchmark different request bodies concurrently, and I wanted to open up a conversation around whether we could (or should) support something like that.
Concretely, this means something like "I want half of my benchmark's requests to use a body FOO and the other half to use BAR."
I was thinking we might be able to support a directory argument, and each file therein would be interpreted as a unique request body. With a heuristic like "append .headers to the request's filename" we could also attach per-request headers. This would leave a directory structure like so:
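For example (the requests/ directory name and exact layout here are just illustrative):

```
requests/
├── foo
├── foo.headers
└── bar
```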
This would cause yab to round-robin between foo and bar, and foo.headers would override headers provided on the CLI for requests using foo.
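To make the round-robin behaviour concrete, here is a rough sketch of what the selection could look like internally; this is illustrative only (the requestPool type and its fields are made up, not yab's actual code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// requestPool cycles through a fixed set of request bodies.
type requestPool struct {
	bodies  [][]byte
	counter uint64
}

// next returns the body for the next benchmark request, rotating
// through the pool in order. Safe for concurrent callers.
func (p *requestPool) next() []byte {
	i := atomic.AddUint64(&p.counter, 1) - 1
	return p.bodies[i%uint64(len(p.bodies))]
}

func main() {
	pool := &requestPool{bodies: [][]byte{
		[]byte(`{"body": "FOO"}`),
		[]byte(`{"body": "BAR"}`),
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(string(pool.next())) // FOO, BAR, FOO, BAR
	}
}
```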
The user could then control the distribution of requests with this file structure. For example, if I want 2/3 of requests to be FOO and 1/3 to be BAR, I would simply drop FOO.1 and FOO.2 into the directory.
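Assuming the same directory convention, that 2/3 vs. 1/3 split would look something like:

```
requests/
├── FOO.1
├── FOO.2
└── BAR
```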
Thoughts?