
Load and concurrency control #235

Merged — bojand merged 27 commits into master from pace on Nov 21, 2020

Conversation

@bojand (Owner) commented Oct 22, 2020

This is initial work in progress on some significant changes to allow for improved behaviour and better control of load and concurrency within ghz.

Changes

  • The qps option has been renamed to -r, --rps.
  • Added an --async option to allow sending requests asynchronously.
  • Added a collection of --load-* options to control the request rate.
  • Added a similar collection of --concurrency-* options to control worker concurrency.

Please see the usage docs and options docs for examples, as well as the additional load walkthrough and concurrency overview.
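For illustration, a minimal sketch combining the new options, assuming the flag names described in the linked docs; the proto file, call name, and host below are placeholders:

```sh
# Ramp the request rate up from 50 RPS in steps of 50 RPS every 30s,
# until reaching 200 RPS, sending requests asynchronously.
# Proto file, call name, and host are placeholders.
ghz --insecure --async \
  --proto ./greeter.proto \
  --call helloworld.Greeter.SayHello \
  --load-schedule=step \
  --load-start=50 --load-step=50 --load-end=200 \
  --load-step-duration=30s \
  -n 20000 \
  0.0.0.0:50051
```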

Tagging relevant issues: #111, #182, #185, #209, #225, #233, #238.

Respectfully tagging relevant users who were involved in the issues above and may be interested in this work:

@ezsilmar @tp @juwatanabe @datngohoang @vlasenkoalexey @howardjohn @sujitdmello @craigh1015

You can find the binaries at the v0.70.0-beta and v0.70.0-beta.1 pre-release pages.

I would appreciate it if you gave the new options a try to see whether they address the various issues and quirks.

Any feedback would be helpful.

Thanks!

@sujitdmello (Contributor) commented:

Tested the new switches and controls, and they certainly worked well with my own gRPC application. In particular, the load control and concurrency control are nice features and welcome additions to ghz. A few general comments:

  • I used both -c and the new concurrency controls and was left unsure which one had precedence.
  • The (HTML) output does not seem to reflect the new switches.
  • The load controls do a good job of controlling the start and end values for the load, but is there a need for a 'cool-down' as well? A ramp-down maybe?

@bojand (Owner, Author) commented Nov 4, 2020

Hello, thanks for giving the prerelease a try and for the feedback!

> I used both -c and the new concurrency controls and was left unsure which one had precedence.

As the new docs state, -c is only used for the (default) const concurrency schedule, that is, when the number of workers does not change over the duration of the test. I kept it this way for backwards compatibility: if clients were using -c before, it should function the same now. If a different --concurrency-schedule is used and additional --concurrency-* options are specified, the -c option has no effect. Please let me know if this makes sense or if it's not clear; perhaps the documentation can be improved.
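To make the distinction concrete, a minimal sketch, assuming the flag names from the linked docs; proto, call, and host are placeholders:

```sh
# Constant concurrency (the default schedule): -c sets a fixed worker
# count, exactly as before this change.
ghz --insecure -c 20 -n 10000 \
  --proto ./greeter.proto --call helloworld.Greeter.SayHello \
  0.0.0.0:50051

# Step concurrency schedule: -c has no effect here; the worker count
# ramps from 5 to 50, adding 5 workers every 10s.
ghz --insecure -n 10000 \
  --proto ./greeter.proto --call helloworld.Greeter.SayHello \
  --concurrency-schedule=step \
  --concurrency-start=5 --concurrency-step=5 --concurrency-end=50 \
  --concurrency-step-duration=10s \
  0.0.0.0:50051
```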

> The (HTML) output does not seem to reflect the new switches.

Yup will have to fix this. Thanks!

> The load controls do a good job of controlling the start and end values for the load, but is there a need for a 'cool-down' as well? A ramp-down maybe?

Can you elaborate on this some more, please? I am not sure of the best way to express this via the CLI, but it should be doable via the package API that is still to come, which will expose a way to specify a custom pacer (for controlling RPS load) and worker ticker (for controlling concurrency).

Thanks for the feedback.

I am hoping to finish this up and release it soon.

@sujitdmello (Contributor) commented:

> As the new docs state, -c is only used for the (default) const concurrency schedule [...] Please let me know if this makes sense or if it's not clear; perhaps the documentation can be improved.

I think the docs are clear. I suspect that once we have the next item resolved, it will be clear which concurrency model was used for the test.

> Can you elaborate on this some more, please? I am not sure of the best way to express this via the CLI [...]

This is an optional 'nice-to-have', but traditional load testers such as LoadRunner and others do have the option to ramp down the number of concurrent requests. One quick way might be a flag that simply implies that the ramp-up model will also apply to the ramp-down. Again, this is a very optional improvement, and not having it does not take away from all the other improvements you have already implemented.
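As an aside, if negative --load-step values are accepted by the step schedule (an assumption, not confirmed in this thread), a rough approximation of a ramp-down might look like the sketch below; note it paces the request rate rather than the worker concurrency asked about above, and the proto, call, and host are placeholders:

```sh
# Hypothetical load ramp-down: start at 200 RPS and step down by 25 RPS
# every 30s until reaching 50 RPS. Assumes negative --load-step values
# are accepted; proto, call, and host are placeholders.
ghz --insecure -n 20000 \
  --proto ./greeter.proto --call helloworld.Greeter.SayHello \
  --load-schedule=step \
  --load-start=200 --load-step=-25 --load-end=50 \
  --load-step-duration=30s \
  0.0.0.0:50051
```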

@bojand merged commit 37ed69e into master on Nov 21, 2020
@bojand deleted the pace branch on November 21, 2020 15:23