QA Automation test task - mass user registration load test #5199
Conversation
- Add admin user/pass discovery for api_client config params
- … locust) - Add more registration params
- …r' into inpv_test_task_mass_user_register
- Add more request fields
- Remove the main class and set it to method
- Minor layout corrections for linter checks
Well, I have a couple of questions:

```python
def quit_runner(environment):
    return environment.runner.quit()

# And then, inside the main function
gevent.spawn_later(10, quit_runner(environment=env))
```

This doesn't get to start the default runner at all (the response data at this line isn't printed; it only prints with the lambda version, warning suppressed). I guess I'll disable the warning in this test for now.
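For reference, `gevent.spawn_later` also accepts the callable and its arguments separately, which would sidestep both the pylint lambda warning and the immediate call; a minimal sketch, assuming the environment object is named `env`:

```python
import gevent

def quit_runner(environment):
    environment.runner.quit()

# gevent.spawn_later(seconds, callable, *args) passes quit_runner uncalled,
# so it runs after the 10 s delay instead of quitting the runner immediately
gevent.spawn_later(10, quit_runner, env)
```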
Generally, feel free to do whatever it takes to make your PR "green"; if disabling some checks seems unfounded to us, we will mention it in review.
```python
register_request = RegisterSerializerExRequest(
    username=username,
    password1=passwd,
    password2=passwd,
    email=email,
    first_name=first_name,
    last_name=last_name,
)
```
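For context, the fields above are populated from `Faker` and `secrets`; a minimal sketch of that data generation, with a hypothetical helper name (the PR's actual code may differ):

```python
import secrets

from faker import Faker

fake = Faker()

def make_registration_data():
    """Generate one random registration payload (hypothetical helper)."""
    first_name, last_name = fake.first_name(), fake.last_name()
    passwd = secrets.token_urlsafe(16)  # random password, long enough for validators
    return {
        "username": f"{first_name}.{last_name}.{secrets.token_hex(4)}".lower(),
        "password1": passwd,
        "password2": passwd,  # confirmation must match password1
        "email": fake.email(),
        "first_name": first_name,
        "last_name": last_name,
    }
```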
When I run the test locally, I see some requests fail with 400, but the test doesn't fail. I think we should implement this test in a way that avoids this error.
Do you want to catch the request exceptions and make the test fail upon parsing a non-200 response, or to suppress errors when sending the request altogether?
Ended up doing both
In my opinion, the test should fail if the server returns non-200 responses.
We are trying to check the average user registration time, but how can we trust this number if some (or all) of our registrations fail?
Sure, thought so too. Corrected as needed.
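One way to achieve this, sketched under the assumption that requests go through the SDK's `api_client` rather than locust's own HTTP client, is to report each call to locust manually so that non-200 responses are recorded as failures (the endpoint name here is illustrative):

```python
import time

def register_once(env, api_client, register_request):
    # time the SDK call and report the result to locust's stats
    start = time.perf_counter()
    exc = None
    try:
        api_client.auth_api.create_register(register_request)
    except Exception as e:  # e.g. an ApiException raised for a 400 response
        exc = e
    env.events.request.fire(
        request_type="POST",
        name="/api/auth/register",
        response_time=(time.perf_counter() - start) * 1000,  # locust expects ms
        response_length=0,
        exception=exc,  # a non-None exception marks the request as failed
    )

# after the swarm finishes, fail the pytest test on any recorded failure:
# assert env.stats.total.num_failures == 0
```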
- Move the Pylint suppressing line to its actual execution spot
- Remove debug console printing
- Add non-200 response codes' failure checks
- Disable response status check
- Add locust to testing requirements.txt
Motivation and context
Since I haven't found any particular load/performance tests in the repo, I decided to start making them myself. I think `locust` can be a useful instrument for load testing, since it's developer-friendly and easily integrated into Python code.

In this particular example I start a locust env with a runner and greenlets and start swarming the endpoint with `POST` requests from `api_client.auth_api.create_register`, populated by `Faker` and `secrets` data, measuring the average response time and asserting for 0 failures before closing the server instance.
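Roughly, that flow looks like the following minimal sketch (assuming locust 2.x; the user class, host, and timings are illustrative rather than the PR's exact code, and `make_registration_data` is the hypothetical helper sketched above):

```python
import gevent
from locust import HttpUser, between, task
from locust.env import Environment
from locust.stats import stats_printer

class RegisterUser(HttpUser):
    # illustrative user class; the PR drives the endpoint through the CVAT SDK
    host = "http://localhost:8080"
    wait_time = between(0.1, 0.5)

    @task
    def register(self):
        self.client.post("/api/auth/register", json=make_registration_data())

env = Environment(user_classes=[RegisterUser])
runner = env.create_local_runner()
gevent.spawn(stats_printer(env.stats))         # stats-reporting greenlet
runner.start(user_count=10, spawn_rate=5)      # begin swarming
gevent.spawn_later(10, lambda: runner.quit())  # stop the run after 10 s
runner.greenlet.join()                         # wait for the swarm to finish

print(f"avg response time: {env.stats.total.avg_response_time:.1f} ms")
assert env.stats.total.num_failures == 0       # assert for 0 failures
```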
How has this been tested?
This test has been launched on a (custom) Arch Linux dev env, using Python 3.9, pytest 6.2.5 and locust 2.13.0.
Checklist
- I submit my changes into the `develop` branch
- I have increased versions of npm packages if it is necessary (cvat-core, cvat-data and cvat-ui)
License
- I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.