
Add performance test scripts #1671

Merged · 21 commits · Apr 4, 2022
Conversation

kotwanikunal (Member)

Signed-off-by: Kunal Kotwani [email protected]

Description

  • Add Jenkins job and groovy scripts for performance tests

Issues Resolved

Check List

  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on the Developer Certificate of Origin and signing off your commits, please check here.

Signed-off-by: Kunal Kotwani <[email protected]>
@kotwanikunal kotwanikunal requested a review from a team as a code owner February 25, 2022 00:28
@codecov-commenter commented Feb 25, 2022

Codecov Report

Merging #1671 (a2a40d5) into main (c6d41cf) will increase coverage by 0.03%.
The diff coverage is 92.85%.

@@             Coverage Diff              @@
##               main    #1671      +/-   ##
============================================
+ Coverage     94.56%   94.60%   +0.03%     
- Complexity        0       20      +20     
============================================
  Files           140      178      +38     
  Lines          3515     3633     +118     
  Branches         19       27       +8     
============================================
+ Hits           3324     3437     +113     
- Misses          191      192       +1     
- Partials          0        4       +4     
| Impacted Files | Coverage Δ |
|---|---|
| src/test_workflow/perf_test/perf_test_suite.py | 95.23% <83.33%> (-0.22%) ⬇️ |
| src/run_perf_test.py | 95.12% <100.00%> (+0.52%) ⬆️ |
| tests/jenkins/jobs/PromoteArtifacts_Jenkinsfile | 100.00% <0.00%> (ø) |
| ...bs/PrintArtifactDownloadUrlsForStaging_Jenkinsfile | 100.00% <0.00%> (ø) |
| tests/jenkins/jobs/Hello_Jenkinsfile | 100.00% <0.00%> (ø) |
| ...nIntegTestScript_OpenSearch_Dashboards_Jenkinsfile | 100.00% <0.00%> (ø) |
| .../jenkins/jobs/PromoteArtifacts_actions_Jenkinsfile | 100.00% <0.00%> (ø) |
| tests/jenkins/jobs/UploadTestResults_Jenkinsfile | 100.00% <0.00%> (ø) |
| src/jenkins/BuildManifest.groovy | 97.56% <0.00%> (ø) |
| tests/jenkins/jobs/RunIntegTestScript_Jenkinsfile | 100.00% <0.00%> (ø) |

... and 33 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c6d41cf...a2a40d5. Read the comment docs.

@abhinavGupta16 (Contributor) left a comment

Need to add tests for jenkinsfile and vars/runPerfTestScript.groovy. Thank you!

@abhinavGupta16 abhinavGupta16 requested a review from a team February 25, 2022 18:07
@zelinh (Member) left a comment

You can refer to this for tests on jenkins.

with TemporaryDirectory(keep=args.keep, chdir=True) as work_dir:
    current_workspace = os.path.join(work_dir.name, "infra")
    with GitRepository(get_infra_repo_url(), "main", current_workspace):
        security = "security" in manifest.components
Member:
Shouldn't we get security from bundle manifest?

Member Author (kotwanikunal):
That would prevent us from parallelizing the tests in the future. We want to kick off two tests for the security-based bundles, which can be done from the Jenkins pipeline.

Member:
I feel like security enabled/disabled should be decided based on the security param in manifest.components. To parallelize the tests we can pass two different bundle manifests, one with security enabled and the other disabled.

Member Author (kotwanikunal):
We would have to publish these manifests to our buckets and distributions, which we do not do currently.
Moving it out also gives more flexibility and extensibility for other use cases going forward, rather than updating/adding manifests for every build.

Member:
We do publish manifests to our buckets. The without-security tests will run irrespective of what's in the manifest, but for with-security, can we add a check in the codebase and start the test only if the security component is present?
If the performance tests are to be run nightly, it is highly possible that the security component is not present in the manifest initially.

Member Author (kotwanikunal):
Refer to the comment below - #1671 (comment)

What I was trying to say is - we have a single manifest which might or might not have security in the components. By enabling the logic at the Jenkins level, we can execute the two runs in parallel, giving us more control over the test runs and statuses.

Also, it is based directly on the manifest. Look here for more - https://github.com/opensearch-project/opensearch-build/pull/1671/files#diff-4debf5e3ece07145d8395f15df88f49b8b784a274cead94e2a94c8b7152c11efR75

All I am doing is pulling out that logic for better control at the top level of job execution.

Signed-off-by: Kunal Kotwani <[email protected]>
@gaiksaya (Member)

@kotwanikunal Looks like there are failing tests. Can you fix those?
Thanks!

@dblock (Member) left a comment

Add tests for python code, too.

@@ -35,6 +36,9 @@ def main():
parser.add_argument("--bundle-manifest", type=argparse.FileType("r"), help="Bundle Manifest file.", required=True)
parser.add_argument("--stack", dest="stack", help="Stack name for performance test")
parser.add_argument("--config", type=argparse.FileType("r"), help="Config file.", required=True)
parser.add_argument("--security", dest="security", action="store_true",
Member:
We already have a configuration file, and we have a manifest that either has or does not have security. Why are we promoting something so specific to a top-level feature/option, and how is it going to add up with that?

Member Author (kotwanikunal):
I am trying to parallelize the two jobs (security/non-security) at the top level to get clear status and result visibility. Moving it to the Python script level would require additional async logic to make the script execute the runs in parallel.
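
For context, a minimal sketch of that script-level alternative is below: the two modes fanned out with concurrent.futures inside the Python script. This is purely illustrative; the run_suite helper is hypothetical and none of it is part of this PR, which instead runs two copies of the script from parallel Jenkins stages.

```python
# Hypothetical sketch only -- not part of this PR. It shows the extra concurrency
# machinery the Python script would need if both security modes ran in one process.
from concurrent.futures import ThreadPoolExecutor


def run_suite(security_enabled: bool) -> None:
    # Placeholder for standing up a cluster and executing the perf suite in one mode.
    mode = "with-security" if security_enabled else "without-security"
    print(f"running perf suite: {mode}")


def run_both_modes() -> None:
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(run_suite, enabled): enabled for enabled in (True, False)}
        for future, enabled in futures.items():
            future.result()  # re-raise failures so the overall job status reflects them
            print(f"suite finished: security={'enabled' if enabled else 'disabled'}")


if __name__ == "__main__":
    run_both_modes()
```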

Member:
The tooling is trying hard not to be too specific to OpenSearch/OpenSearch Dashboards, so I'd think about this twice. My concern, though, is that with this change we have two ways to say "with security" and "without security"; you should collapse it into one way as part of this change.

Member Author (kotwanikunal):
I have updated the config to be more appropriate for the general use case. I can work on abstracting it out (for example, utilizing a test config) when we add more components to this script.
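
To illustrate the "one way to say it" concern above, a rough sketch of collapsing the CLI flag and the manifest check into a single decision point could look like the following; the helper name and the fallback behaviour are assumptions for illustration, not what this PR implements.

```python
# Illustrative sketch only -- not the PR's implementation. One possible way to keep a
# single source of truth for the security toggle: an explicit CLI flag wins, otherwise
# the decision falls back to the bundle manifest contents.
import argparse


def resolve_security(args: argparse.Namespace, component_names: set) -> bool:
    if args.security:
        return True
    return "security" in component_names


parser = argparse.ArgumentParser()
parser.add_argument("--security", action="store_true", help="Force a with-security run.")
args = parser.parse_args(["--security"])
print(resolve_security(args, {"OpenSearch"}))  # True: the explicit flag overrides the manifest
```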

sh'''
pipenv install "dataclasses_json~=0.5" "aws_requests_auth~=0.4" "json2html~=1.3.0"
pipenv install "aws-cdk.core~=1.143.0" "aws_cdk.aws_ec2~=1.143.0" "aws_cdk.aws_iam~=1.143.0"
pipenv install "boto3~=1.18" "setuptools~=57.4" "retry~=0.9"
Member:
This shouldn't be necessary. Dependencies needed by the tools should go into Pipfile.

Member Author (kotwanikunal):
This is because of the way the packages are separated and pulled at run time. pipenv does not support nested module Pipfile installation, which is why I had to resort to installing them via this script.

Member:
You should fix this in the other project: implement a wrapper similar to test.sh, add a Pipfile, and have the .sh script run pipenv install to get these dependencies. It's not the caller's responsibility to ensure that the dependencies are met, or you'll be constantly chasing changes in that project here.

Member Author (kotwanikunal):
I can work on this in a follow-up PR. I want to get these changes going, as they are blocking other performance-test-related integrations and tests.

@kotwanikunal (Member, Author)

The tests succeed locally on my machine:

(opensearch-build) kkotwani@3c22fbb2c92e opensearch-build % git commit -m "Add parallel stages for performance tests, add tests" -s
isort....................................................................Passed
flake8...................................................................Passed
mypy.....................................................................Passed
pytest...................................................................Passed
yamllint.................................................................Passed

It looks to be an import issue on the GHA distros:

Run pipenv run flake8 .
Traceback (most recent call last):
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/bin/flake8", line 5, in <module>
    from flake8.main.cli import main
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/main/cli.py", line 5, in <module>
    from flake8.main import application
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/main/application.py", line [11](https://github.com/opensearch-project/opensearch-build/runs/5597195180?check_suite_focus=true#step:6:11), in <module>
    from flake8 import checker
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/checker.py", line 18, in <module>
    from flake8 import processor
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/processor.py", line 13, in <module>
    from flake8 import utils
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/utils.py", line 16, in <module>
    from flake8._compat import lru_cache
  File "/home/runner/.local/share/virtualenvs/opensearch-build-ucxYjMk8/lib/python3.7/site-packages/flake8/_compat.py", line [12](https://github.com/opensearch-project/opensearch-build/runs/5597195180?check_suite_focus=true#step:6:12), in <module>
    import importlib_metadata
ModuleNotFoundError: No module named 'importlib_metadata'

@dblock (Member) commented Mar 18, 2022

Python version mismatch? Is yours passing on 3.8 vs. 3.7? python-poetry/poetry#1487

Signed-off-by: Kunal Kotwani <[email protected]>
@kotwanikunal (Member, Author)

> Python version mismatch? Is yours passing on 3.8 vs. 3.7? python-poetry/poetry#1487

Thanks!
That seemed to be it. My venv was on 3.7, but it picked up the OS default, which was 3.8.
And since the Pipfile.lock was updated, it was breaking with 3.7 in the build environments.
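
For reference, the failing import is the usual 3.7-vs-3.8 split: importlib.metadata only joined the standard library in Python 3.8, so on 3.7 flake8 relies on the third-party importlib_metadata backport. A minimal sketch of the pattern (not code from this repository):

```python
# Compatibility pattern behind the failing import above: importlib.metadata is
# stdlib from Python 3.8 onward; older interpreters need the backport package.
import sys

if sys.version_info >= (3, 8):
    from importlib import metadata
else:
    import importlib_metadata as metadata  # absent on the 3.7 runner -> ModuleNotFoundError

print(metadata.version("flake8"))
```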

@kotwanikunal kotwanikunal reopened this Mar 23, 2022
@abhinavGupta16 (Contributor)

> > Can you also add a test case for the jenkins job?
> > Refer this
> > Thank you!
>
> There are tests for both the scenarios now. The dependency on downloading the manifest file was eliminated and instead I have used hard-coded values, which makes it on par with the actual test script.
>
> We would still need a test for the jenkinsfile, to make sure that it fails if there are any changes to the job. It would also make sure that the job calls the libraries correctly. Example: SignStandaloneArtifactsJob, DataPrepperJob.
>
> I'll add and update the PR shortly.

Let me know whenever you add it. Thanks!

cc: @bbarani

@kotwanikunal (Member, Author)

> Let me know whenever you add it. Thanks!

Took longer than expected to get all the cases in. All the cases and jobs should have relevant tests now.

@abhinavGupta16 (Contributor) commented Mar 25, 2022

> Took longer than expected to get all the cases in. All the cases and jobs should have relevant tests now.

Thanks for adding the tests. However, we don't want to create a copy of the job, but rather use the original Jenkinsfile to make sure that it works (as in the examples and readme). If the test is written against a copy, then a change to the original job will not fail the test, which defeats its purpose. Let's connect offline so we can close this soon.

Thanks!

@kotwanikunal (Member, Author)

> Thanks for adding the tests. However, we don't want to create a copy of the job, but rather use the original Jenkinsfile to make sure that it works.

Sorry, it took me a while to realize that and then get the actual script to have all the testable stubs and mocks. Jenkins/Python newbie here. 🙂
Let me know if this resolves your concerns.
Thanks!

@abhinavGupta16 (Contributor) left a comment

Thank you for addressing the review.

as (test_cluster_endpoint, test_cluster_port):
    perf_test_suite = PerfTestSuite(manifest, test_cluster_endpoint, security,
                                    current_workspace, tests_dir, args)
    retry_call(perf_test_suite.execute, tries=3, delay=60, backoff=2)
Member:
Should we expose retry options to the command line?

@kotwanikunal (Member, Author) commented Apr 1, 2022

That seems like a good option to have. I have a few things to follow up on afterwards, based on the test run outcomes once everything is up and running.
Added an issue here for tracking: opensearch-project/OpenSearch#2718
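
For illustration, exposing the retry knobs could look roughly like the sketch below. The flag names and defaults are assumptions and not part of this PR; retry_call comes from the retry package already used in the snippet above.

```python
# Hypothetical sketch -- the flag names and defaults are illustrative, not the PR's API.
import argparse

from retry.api import retry_call


def parse_retry_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser()
    parser.add_argument("--retries", type=int, default=3, help="Number of attempts for the perf suite.")
    parser.add_argument("--retry-delay", type=int, default=60, help="Initial delay between attempts, in seconds.")
    parser.add_argument("--retry-backoff", type=int, default=2, help="Multiplier applied to the delay after each failure.")
    return parser.parse_args()


def run_with_retries(execute) -> None:
    args = parse_retry_args()
    # Same call as in the snippet above, but driven by CLI-provided values.
    retry_call(execute, tries=args.retries, delay=args.retry_delay, backoff=args.retry_backoff)
```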

@dblock (Member) commented Apr 2, 2022

Merge on green.

@kotwanikunal (Member, Author) commented Apr 2, 2022

GHA Run:

62 tests completed, 2 failed, 1 skipped

The generated .txt files for the call stack are in sync. Getting - org.opentest4j.AssertionFailedError: [If you intended to update the callstack, use JVM parameter -Dpipeline.stack.write=true] on the GHA run.

Tests running successfully locally.

Can someone from @opensearch-project/engineering-effectiveness help with this? 🙂

@dblock (Member) commented Apr 2, 2022

The success message in TestRunPerfTestScript > testRunPerfTestScript_Pipeline looks different between expected and actual output, probably because ENV variables (JOB_NAME, etc.) are leaking across tests and causing different output.

Diff the failed test output in the log:

<                           publishNotification.sh(curl -XPOST --header "Content-Type: application/json" --data '{"result_text":":white_check_mark: perf-test [1236] Performance Tests Successful
<      Build: test://artifact.url
---
>                           publishNotification.sh(curl -XPOST --header "Content-Type: application/json" --data '{"result_text":":white_check_mark:
>      JOB_NAME=perf-test
>      BUILD_NUMBER=[1236]
>      MESSAGE=Performance Tests Successful
>      BUILD_URL: test://artifact.url
>      MANIFEST: null
146,148d149
<                     perf-test.postCleanup()
<                        postCleanup.cleanWs({disableDeferredWipeout=true, deleteDirs=true})
<      "

(to get this I copy-pasted the expected/received text from the build log into 1.txt and 2.txt and ran diff on them)

@kotwanikunal (Member, Author)

> The success message in TestRunPerfTestScript > testRunPerfTestScript_Pipeline looks different between expected and actual output, probably because ENV variables (JOB_NAME, etc.) are leaking across tests and causing different output.

It looks like the Groovy tests are run against the latest code with the change merged in, which caused the inconsistency. Nevertheless, I improved the notifications and merged in the latest. All green now.

@zelinh zelinh merged commit 8f7ffee into opensearch-project:main Apr 4, 2022
@kotwanikunal kotwanikunal deleted the perf-test branch April 24, 2022 20:36