This repository serves as a test bed for the SD-CICD team to build tooling and support operators with minimal impact on other teams.

See the SOP for details: https://github.com/openshift/ops-sop/blob/master/v4/howto/osde2e/operator-test-harnesses.md
- Run `make e2e-harness-build` to make sure the harness builds OK
- Deploy your new version of the operator in a test cluster
- Ensure e2e test scenarios run green on a test cluster using one of the methods below
  - Using Ginkgo directly:
    - Create a stage ROSA cluster
    - Install the `ginkgo` executable
    - Get the kubeadmin credentials from your cluster:

      ```shell
      ocm get /api/clusters_mgmt/v1/clusters/$CLUSTER_ID/credentials | jq -r .kubeconfig > /<path-to>/kubeconfig
      ```

    - Run the harness:

      ```shell
      OCM_ENVIRONMENT=stage KUBECONFIG=/<path-to>/kubeconfig ./<path-to>/bin/ginkgo --tags=osde2e -v
      ```
    - This will show test results, but also one execution error caused by the reporting configs. You can ignore it, or get rid of it by temporarily removing the `suiteConfig` and `reporterConfig` arguments from the `RunSpecs()` call in the `osde2e/<operator-name>_test_harness_runner_test.go` file
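The credentials step above relies on `jq -r` to turn the JSON response into a usable kubeconfig. A minimal sketch of that extraction with a mocked payload (the real `/credentials` response contains more fields; only `kubeconfig` is assumed here):

```shell
# Mocked shape of the credentials response -- illustrative only.
payload='{"kubeconfig":"apiVersion: v1\nkind: Config"}'

# `jq -r` emits the field as raw text (real newlines, no JSON quoting),
# which is why the output can be redirected straight into a kubeconfig file.
kubeconfig=$(echo "$payload" | jq -r .kubeconfig)
printf '%s\n' "$kubeconfig"
```

Without `-r`, jq would print a quoted JSON string with literal `\n` escapes, which tools reading the kubeconfig could not parse.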
  - Using osde2e:
    - Publish a docker image for the test harness from the operator repo:

      ```shell
      HARNESS_IMAGE_REPOSITORY=<your quay repository> HARNESS_IMAGE_NAME=<your quay image name> make e2e-image-build-push
      ```
    - Create a stage ROSA cluster
    - Clone osde2e:

      ```shell
      git clone [email protected]:openshift/osde2e.git
      ```

    - Build the osde2e executable:

      ```shell
      make build
      ```
    - Run osde2e. To save results in a local directory, set `REPORT_DIR`; to upload log files to S3 instead, replace it with `LOG_BUCKET="[name of the s3 bucket to upload log files to]"`:

      ```shell
      #!/usr/bin/env bash
      OCM_TOKEN="[OCM token here]" \
      CLUSTER_ID="[cluster id here]" \
      AWS_ACCESS_KEY_ID="[aws access key here]" \
      AWS_SECRET_ACCESS_KEY="[aws access secret here]" \
      TEST_HARNESSES="quay.io/$HARNESS_IMAGE_REPOSITORY/$HARNESS_IMAGE_NAME" \
      REPORT_DIR="[path to local report directory]" \
      ./out/osde2e test \
        --configs rosa,stage,sts,test-harness \
        --skip-must-gather \
        --skip-destroy-cluster \
        --skip-health-check
      ```
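An unset or misspelled variable makes the run above fail in non-obvious ways. A small preflight sketch that verifies the inputs before launching (the `check_env` helper and the echoed command are illustrative, not part of osde2e; the variable list mirrors the invocation above):

```shell
#!/usr/bin/env bash
# check_env NAME...: report any listed environment variables that are unset
# or empty; returns non-zero if anything is missing.
check_env() {
  local var missing=()
  for var in "$@"; do
    # ${!var:-} expands the variable whose name is stored in $var.
    [ -n "${!var:-}" ] || missing+=("$var")
  done
  if [ "${#missing[@]}" -gt 0 ]; then
    echo "preflight: missing ${missing[*]}" >&2
    return 1
  fi
  echo "preflight ok"
}

# Variable names mirror the osde2e invocation above; only launch if all are set.
if check_env OCM_TOKEN CLUSTER_ID AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY TEST_HARNESSES; then
  echo "would run: ./out/osde2e test --configs rosa,stage,sts,test-harness ..."
fi
```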