Add performance test scripts #1671
Conversation
Signed-off-by: Kunal Kotwani <[email protected]>
Codecov Report
```
@@             Coverage Diff              @@
##               main    #1671      +/-  ##
============================================
+ Coverage     94.56%   94.60%    +0.03%
- Complexity        0       20       +20
============================================
  Files           140      178       +38
  Lines          3515     3633      +118
  Branches         19       27        +8
============================================
+ Hits           3324     3437      +113
- Misses          191      192        +1
- Partials          0        4        +4
```
Continue to review full report at Codecov.
Need to add tests for the Jenkinsfile and vars/runPerfTestScript.groovy. Thank you!
You can refer to this for tests on Jenkins.
```python
with TemporaryDirectory(keep=args.keep, chdir=True) as work_dir:
    current_workspace = os.path.join(work_dir.name, "infra")
    with GitRepository(get_infra_repo_url(), "main", current_workspace):
        security = "security" in manifest.components
```
Shouldn't we get security from the bundle manifest?
That would prevent us from parallelizing the tests in the future. We want to kick off two tests for the security-based bundles, which can be done from the Jenkins pipeline.
I feel like security enabled/disabled should be decided based on the security param in manifest.components. To parallelize the tests we can pass two different bundle manifests, one with security enabled and the other disabled.
We would have to publish these manifests to our buckets and distributions, which we do not currently do.
Moving it out additionally gives more flexibility and extensibility going forward for other use cases, rather than updating/adding manifests for every build.
We do publish manifests to our buckets. The without-security tests will run irrespective of what's in the manifest, but for with-security, can we add a check in the codebase and start the test only if the security component is present?
If the performance tests are to run nightly, it is highly possible that the security component is not initially present in the manifest.
Refer to the comment below: #1671 (comment)
What I was trying to say is: we have a single manifest which may or may not have security in its components. By enabling the logic at the Jenkins level, we can execute the two runs in parallel, giving us more control over the test runs and statuses.
Also, it is based directly on the manifest. Look here for more: https://github.com/opensearch-project/opensearch-build/pull/1671/files#diff-4debf5e3ece07145d8395f15df88f49b8b784a274cead94e2a94c8b7152c11efR75
All I am doing is pulling that logic out for better control at the top level of job execution.
Signed-off-by: Kunal Kotwani <[email protected]>
Signed-off-by: Kunal Kotwani <[email protected]>
@kotwanikunal Looks like there are failing tests. Can you fix those?
Add tests for the Python code, too.
src/run_perf_test.py
```python
# @@ -35,6 +36,9 @@ def main():
    parser.add_argument("--bundle-manifest", type=argparse.FileType("r"), help="Bundle Manifest file.", required=True)
    parser.add_argument("--stack", dest="stack", help="Stack name for performance test")
    parser.add_argument("--config", type=argparse.FileType("r"), help="Config file.", required=True)
    parser.add_argument("--security", dest="security", action="store_true",
```
We already have a configuration file, and we have a manifest that either has or does not have security. Why are we promoting something so specific to a top-level feature/option, and how is it going to add up with that?
I am trying to parallelize the two jobs (security/non-security) at the top level to get clear status and result visibility. Moving it to the Python script level would require additional async logic to make the script execute in parallel.
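For context, a minimal sketch of what that Python-level parallelism could look like. The `run_perf_suite` function and its return value are hypothetical placeholders, not the actual script:

```python
from concurrent.futures import ThreadPoolExecutor

def run_perf_suite(security_enabled: bool) -> dict:
    # Hypothetical stand-in for invoking the perf test suite; the real
    # script would launch a test cluster and execute the suite here.
    return {"security": security_enabled, "status": "PASSED"}

# Run the with-security and without-security variants concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_perf_suite, [True, False]))

print(results)
```

Doing this in Jenkins instead keeps each variant as a separate stage with its own status, which is the visibility argument being made here.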
The tooling is trying hard not to be too specific to OpenSearch/OpenSearch Dashboards, so I'd think about this twice. My concern, though, is that with this change we have two ways to say "with security" and "without security"; you should collapse it into one way as part of this change.
I have updated the config to be more appropriate for the general use case. I can work on abstracting it out/utilizing a test config, for example, when we move to add more components to this script.
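For reference, a `store_true` flag like the one added in this diff defaults to `False` and flips to `True` only when supplied. A standalone sketch (not the actual run_perf_test.py):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--security", dest="security", action="store_true",
                    help="Run tests against a security-enabled bundle.")

# The flag is False unless explicitly passed on the command line.
without = parser.parse_args([])
with_security = parser.parse_args(["--security"])
print(without.security, with_security.security)
```

This is what lets the Jenkins pipeline decide per-run whether to pass the flag, instead of encoding it in the manifest.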
```groovy
sh'''
    pipenv install "dataclasses_json~=0.5" "aws_requests_auth~=0.4" "json2html~=1.3.0"
    pipenv install "aws-cdk.core~=1.143.0" "aws_cdk.aws_ec2~=1.143.0" "aws_cdk.aws_iam~=1.143.0"
    pipenv install "boto3~=1.18" "setuptools~=57.4" "retry~=0.9"
```
This shouldn't be necessary. Dependencies needed by the tools should go into the Pipfile.
This is because of the way the packages are separated and pulled at run time. pipenv does not support nested module Pipfile installation, which is why I had to resort to installing them via this script.
You should fix this in the other project. Implement a wrapper similar to test.sh, add a Pipfile, and have the .sh script run pipenv install to get these dependencies. It's not the caller's responsibility to ensure that the dependencies are met, or you'll be constantly chasing changes in that project here.
I can work on this in a follow up PR. I want to get the changes going as they are blocking other performance test related integrations and tests.
Signed-off-by: Kunal Kotwani <[email protected]>
The tests succeed on my machine locally -
Looks to be an import issue on the GHA distros -
Python version mismatch? Is yours passing on 3.8 vs. 3.7? python-poetry/poetry#1487
Signed-off-by: Kunal Kotwani <[email protected]>
Thanks!
Let me know whenever you add it. Thanks! cc: @bbarani
Signed-off-by: Kunal Kotwani <[email protected]>
Took longer than expected to get all the cases in. All the cases and jobs should have relevant tests now.
Signed-off-by: Kunal Kotwani <[email protected]>
Thanks for adding the tests. However, we don't want to create a copy of the job, but rather use the original Jenkinsfile to make sure that works (as in the examples and readme). If the test is written against the copy, then any change to the original job will not fail the test and will therefore defeat its purpose. Let's connect offline so we can close this soon. Thanks!
Sorry, it took me a while to realize that and then get the actual script to have all the testable stubs and mocks. Jenkins/Python newbie here. 🙂
Signed-off-by: Kunal Kotwani <[email protected]>
Thank you for addressing the review.
```python
as (test_cluster_endpoint, test_cluster_port):
    perf_test_suite = PerfTestSuite(manifest, test_cluster_endpoint, security,
                                    current_workspace, tests_dir, args)
    retry_call(perf_test_suite.execute, tries=3, delay=60, backoff=2)
```
Should we expose retry options to the command line?
That seems like a good option to have. I do have a few things to follow up on, based on the test run outcomes once everything is up and running.
Added an issue here for tracking: opensearch-project/OpenSearch#2718
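A sketch of what exposing the retry options could look like. The flag names are hypothetical, and a plain loop stands in for `retry_call` to keep the example self-contained:

```python
import argparse
import time

def retry(fn, tries, delay, backoff):
    # Minimal stand-in for retry.retry_call: attempt `fn` up to `tries` times,
    # sleeping `delay` seconds between attempts and multiplying by `backoff`.
    for attempt in range(tries):
        try:
            return fn()
        except Exception:
            if attempt == tries - 1:
                raise
            time.sleep(delay)
            delay *= backoff

parser = argparse.ArgumentParser()
parser.add_argument("--retries", type=int, default=3, help="Number of attempts.")
parser.add_argument("--retry-delay", type=int, default=60, help="Initial delay in seconds.")
parser.add_argument("--retry-backoff", type=int, default=2, help="Delay multiplier per attempt.")
args = parser.parse_args([])

# The suite invocation would then become something like:
# retry(perf_test_suite.execute, args.retries, args.retry_delay, args.retry_backoff)
```

Defaults mirror the values currently hardcoded in the script (tries=3, delay=60, backoff=2), so behavior is unchanged unless a flag is passed.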
Merge on green.
GHA Run:
The generated .txt files for the call stack are in sync. Getting - Tests running successfully locally. Can someone from @opensearch-project/engineering-effectiveness help with this? 🙂
Diff of the failed test output in the log:
(to get this I copy-pasted the expected/received text from the build log into 1.txt and 2.txt and ran diff on them)
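The manual 1.txt/2.txt workflow described above can also be scripted. A small sketch using Python's standard-library difflib; the two strings are placeholder content, not the actual build log:

```python
import difflib

# Placeholder expected/received text copied from a hypothetical build log.
expected = "stage one\nstage two\n"
received = "stage one\nstage 2\n"

# Produce a unified diff, labeling the sides as in the manual workflow.
diff = "\n".join(difflib.unified_diff(
    expected.splitlines(), received.splitlines(),
    fromfile="1.txt", tofile="2.txt", lineterm=""))
print(diff)
```

This avoids the copy-paste step and could be dropped into a helper script when comparing expected/received call-stack files.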
Signed-off-by: Kunal Kotwani <[email protected]>
Signed-off-by: Kunal Kotwani <[email protected]>
Signed-off-by: Kunal Kotwani <[email protected]>
It looks like the groovy tests run against the latest code with the change merged in, which caused the inconsistency. Nevertheless, I improved the notifications and merged in the latest. All green now.
Signed-off-by: Kunal Kotwani [email protected]
Description
Issues Resolved
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.