
QA Automation test task - mass user registration load test #5199

Closed · wants to merge 20 commits

Conversation

@inpv (Contributor) commented Oct 31, 2022

Motivation and context

Since I haven't found any particular load/performance tests in the repo, I decided to start making them myself. I think locust can be a useful instrument for load testing, since it's developer-friendly and integrates easily into Python code.
In this particular example I start a locust environment with a runner and greenlets and swarm the endpoint with POST requests from api_client.auth_api.create_register, populated with Faker and secrets data, measuring the average response time and asserting zero failures before shutting the runner down.
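In outline, the approach looks roughly like this (the user count, spawn rate, host, and the raw endpoint call below are illustrative; the real test drives api_client.auth_api.create_register):

import secrets

import gevent
from faker import Faker
from locust import HttpUser, between, task
from locust.env import Environment

fake = Faker()

class RegisterUser(HttpUser):
    host = "http://localhost:8080"  # assumed local CVAT instance
    wait_time = between(0.5, 1)

    @task
    def register(self):
        # Random identity fields from Faker, random password from secrets
        passwd = secrets.token_urlsafe(12)
        self.client.post(
            "/api/auth/register",
            json={
                "username": fake.user_name(),
                "password1": passwd,
                "password2": passwd,
                "email": fake.email(),
                "first_name": fake.first_name(),
                "last_name": fake.last_name(),
            },
        )

def test_mass_registration_load():
    env = Environment(user_classes=[RegisterUser])
    runner = env.create_local_runner()
    runner.start(user_count=10, spawn_rate=2)      # swarm the endpoint with greenlets
    gevent.spawn_later(10, lambda: runner.quit())  # stop the run after 10 seconds
    runner.greenlet.join()

    total = env.stats.total
    assert total.num_failures == 0
    print(f"average response time: {total.avg_response_time} ms")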

How has this been tested?

This test has been run on a (custom) Arch Linux dev environment, using Python 3.9, pytest 6.2.5, and locust 2.13.0.

Checklist

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

@inpv inpv requested a review from azhavoro as a code owner October 31, 2022 01:46
@inpv inpv changed the title QA Automation test task - mass user register load test QA Automation test task - mass user registration load test Oct 31, 2022
@inpv (Contributor, Author) commented Nov 2, 2022

Well, I have a couple questions:

  1. Some of the checks fail because faker and locust aren't in the default requirements list. May I add them as part of this PR as well or should I open another PR for this?

  2. Pylint seems to mark the check as failed upon finding an "unnecessary lambda" warning here; however, rewriting the function the way it "should" be:

def quit_runner(environment):
    return environment.runner.quit()

# And then, inside the main function
gevent.spawn_later(10, quit_runner(environment=env))

doesn't get to start the default runner at all (response data at this line isn't printed; it only shows up in the lambda version with the warning suppressed). I guess I'll disable the warning in this test for now with # pylint: disable=unnecessary-lambda, until I find a more compatible runner.
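For reference, the rewritten version above quits the runner immediately: quit_runner(environment=env) is evaluated right away, so the runner shuts down before spawn_later ever fires. gevent.spawn_later takes the callable and its arguments separately, which avoids both the eager call and the lambda; a sketch:

import gevent

def quit_runner(environment):
    return environment.runner.quit()

# Inside the main function: gevent invokes quit_runner(env)
# only after the 10-second delay has elapsed.
gevent.spawn_later(10, quit_runner, env)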

@sizov-kirill (Contributor) commented Nov 3, 2022

Well, I have a couple questions: […]

Generally, feel free to do whatever it takes to make your PR "green"; if disabling a particular check seems unjustified to us, we'll mention it in the review.

  1. Update the tests requirements.txt so that CI can run your test (a sketch follows below).
  2. As for the pylint warning, disabling it looks fine to me.
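A sketch of the requirements change for point 1 (the exact file path is an assumption based on the test's location; only locust's version is stated in this thread):

# tests/python/requirements.txt
Faker
locust==2.13.0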

@cvat-ai cvat-ai deleted a comment from github-actions bot Nov 3, 2022
tests/python/rest_api/test_load_register.py — three earlier review threads (outdated, resolved)
Comment on lines +104 to +111
register_request = RegisterSerializerExRequest(
username=username,
password1=passwd,
password2=passwd,
email=email,
first_name=first_name,
last_name=last_name,
)
Contributor

When I run the test locally, I see some requests fail with 400, but the test doesn't fail. I think we should implement this test in such a way that we don't have this error.

Contributor (Author)

Do you want to catch the request exceptions and make the test fail upon parsing a non-200 response, or to suppress errors when sending the request altogether?

Contributor (Author)

Ended up doing both.

@sizov-kirill (Contributor) commented Nov 4, 2022

In my opinion, the test should fail if the server returns non-200 responses.
We are trying to measure the average user registration time, but how can we trust that number if some (or all) of the registrations fail?
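A minimal sketch of such a check using locust's catch_response mechanism (assuming a plain HttpUser and the illustrative endpoint from the earlier sketch, with fake and secrets as defined there; the PR's actual test goes through api_client.auth_api.create_register):

@task
def register(self):
    passwd = secrets.token_urlsafe(12)
    payload = {
        "username": fake.user_name(),
        "password1": passwd,
        "password2": passwd,
        "email": fake.email(),
        "first_name": fake.first_name(),
        "last_name": fake.last_name(),
    }
    with self.client.post("/api/auth/register", json=payload, catch_response=True) as response:
        # Count anything but 201 Created as a locust failure
        # (adjust if the endpoint returns 200 on success).
        if response.status_code != 201:
            response.failure(f"registration failed: {response.status_code}")

# ...and after the run, the assertion on the aggregated stats makes pytest fail too:
assert env.stats.total.num_failures == 0, env.stats.errors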

Contributor (Author)

Sure, thought so too. Corrected as needed.

inpv and others added 6 commits November 4, 2022 03:25
- Move the Pylint suppressing line to its actual execution spot
- Remove debug console printing
- Add non-200 response codes' failure checks
- Disable response status check
- Add locust to testing requirements.txt
@inpv inpv requested a review from nmanovic as a code owner November 4, 2022 18:20
@nmanovic nmanovic closed this Nov 18, 2022