
Make an E2E test for manifests #1244

Closed
krishnadurai opened this issue Jun 11, 2020 · 9 comments
@krishnadurai
Contributor

krishnadurai commented Jun 11, 2020

Since #1237 removes the erstwhile GCP cluster-based E2E test, we might need an E2E test to:

  1. Ensure that dependencies within the manifest definitions are maintained and are deployed in order.
  2. Ensure that all required custom resource definitions (CRDs) are created before the CRs are applied.
  3. Ensure that common services defined in the manifests remain functional.

The bar for this E2E test would be:

  1. The test is fast and reliable.
  2. The test catches breakages that would otherwise be missed.
@issue-label-bot

Issue-Label Bot is automatically applying the labels:

Label: kind/feature (probability 0.92)

Please mark this comment with 👍 or 👎 to give our bot feedback!
Links: app homepage, dashboard and code for this bot.

@issue-label-bot

Issue-Label Bot is automatically applying the labels:

Label: area/engprod (probability 0.79)

Please mark this comment with 👍 or 👎 to give our bot feedback!
Links: app homepage, dashboard and code for this bot.


@jlewi
Contributor

jlewi commented Jun 11, 2020

Per my comment, I think the question is what tests should happen upstream vs. downstream?

For GCP our plan is to move the E2E tests downstream (GoogleCloudPlatform/kubeflow-distribution#42). We are doing this because I think we need better separation of concerns.

The manifests themselves should largely be owned and maintained by the application OWNERs.

It seems like the areas you want to test are mostly about deployment which I think should be a downstream concern.

Deploying Kubeflow seems like a two-step process:

kustomize build | kubectl apply

Should testing reflect that? If so, tests for kubeflow/manifests should focus on ensuring that kustomize build produces the correct output. Testing that the manifests get applied correctly seems like it should be the responsibility of platform/distribution owners. For example, ensuring that manifests are applied in the correct order depends on the deployment tooling used, which will vary by platform.
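A minimal sketch of what such an upstream "kustomize build produces the correct output" check could look like, as a golden-output diff. Everything here is illustrative: the check_manifest helper, the golden/ layout, and the driver commands are assumptions, not the project's actual CI.

```shell
# Hypothetical golden-output check: regenerate each manifest and diff it
# against a checked-in "golden" copy, so any change to the rendered output
# is visible in review. Helper name and paths are illustrative only.
check_manifest() {
  expected="$1"  # checked-in golden manifest
  actual="$2"    # freshly generated manifest (e.g. from `kustomize build`)
  if diff -u "$expected" "$actual"; then
    echo "OK: $expected matches generated output"
  else
    echo "DRIFT: $expected differs from generated output" >&2
    return 1
  fi
}

# A typical driver (assumes kustomize is on PATH and golden/ is checked in):
#   kustomize build apps/jupyter > /tmp/jupyter.yaml
#   check_manifest golden/jupyter.yaml /tmp/jupyter.yaml
```

A check like this needs no cluster at all, which keeps it fast and deterministic, while cluster-dependent behavior (apply ordering, CRD readiness) stays downstream as suggested above.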

Likewise, ensuring that configuration is correct in terms of integrating with external systems seems like it should be the responsibility of platform owners. For example, testing that Pipelines on GCP is actually configured to talk to CloudSQL should happen downstream, or at the very least the test should be tightly scoped so that it is only triggered when the GCP configs are directly changed.

Ensure that common services defined in manifests remain functional

What do you mean by common service?

@krishnadurai
Contributor Author

krishnadurai commented Jun 15, 2020

Most of my concerns are deployment concerns.

It may benefit us to have a conformance test on a neutral platform (like KinD), apart from the downstream tests with vendors. This would give us a deployment-validity check in CI before a PR merges to master.
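A KinD-based conformance check of this kind could be sketched as a CI job. Everything below is an assumption for illustration (the workflow name, the `example` kustomize target, the action versions), not the project's actual CI configuration; it uses the helm/kind-action to create a throwaway cluster and a server-side dry-run so the job stays fast:

```yaml
# Hypothetical GitHub Actions job: spin up a disposable KinD cluster and
# server-side dry-run the rendered manifests against its API server.
name: manifests-conformance
on: [pull_request]
jobs:
  kind-deploy-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create KinD cluster
        uses: helm/kind-action@v1
      - name: Validate rendered manifests against the API server
        run: |
          # A server-side dry-run catches schema errors and missing CRDs
          # without waiting for workloads to become ready.
          kustomize build example | kubectl apply --dry-run=server -f -
```

The server-side dry-run is a middle ground: it exercises real API-server validation (including CRD existence) without the flakiness of waiting for every pod to come up.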

What do you mean by common service?

Services like cert-manager and Istio fall under this bracket.

@jtfogarty

/priority p1

@stale

stale bot commented Dec 8, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in one week if no further activity occurs. Thank you for your contributions.

@stale stale bot added the lifecycle/stale label Dec 8, 2020
@PatrickXYS
Member

This is finished I think

/close

@k8s-ci-robot
Contributor

@PatrickXYS: Closing this issue.

In response to this:

This is finished I think

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
