🏃 Implement conformance test via e2e test framework #782
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sbueringer. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 8273622 to 7054730
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
Force-pushed from 7054730 to 22fb620
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
Build failed.
/test pull-cluster-api-provider-openstack-make-conformance
1 similar comment
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
/test pull-cluster-api-provider-openstack-make-conformance
2 similar comments
/test pull-cluster-api-provider-openstack-make-conformance
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
/test pull-cluster-api-provider-openstack-make-conformance
Build failed.
@hidekazuna @prankul88 do you want to review this PR or should we just go ahead? :) It's mostly just about test code.
Force-pushed from 1d6e07d to bade7b9
/test pull-cluster-api-provider-openstack-make-conformance
@@ -3,7 +3,6 @@ apiVersion: cluster.x-k8s.io/v1alpha4
 kind: Cluster
 metadata:
   name: ${CLUSTER_NAME}
-  namespace: '${NAMESPACE}'
This was added with #735
@sbueringer I hope it does not conflict with it.
I dropped it because it isn't needed. The namespaces of all CAPI resources are set even without it, and our cloudsSecret.namespace defaults to the same namespace as our other resources. Actually, our e2e tests rely on this because they deploy to a randomly generated namespace (the e2e test with multiple flavors from my other PR even deploys 2 clusters at the same time in different namespaces).
Yes, CAPO works without this; I only followed the CAPI recommendation:
https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html#common-variables
I thought someone using multiple cloud providers might wonder why they cannot use --target-namespace for CAPO.
I think it was okay that we had it before. My interpretation of the documentation is (now):
- If you have places in your YAML where you need the target namespace, you can/should use this variable. It is not necessary to use it for the metadata.namespace field, as that is set by CAPI in any case.
I looked at CAPG / CAPA / CAPZ / CAPV and could only find it in CAPV. As it's not necessary and most of the other providers are not using it, I thought it's cleaner to drop it.
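For context, a minimal sketch of what the provider contract implies (illustrative only, not taken from this PR): a template may omit metadata.namespace entirely, because clusterctl generate cluster --target-namespace sets the namespace on every object it renders. ${NAMESPACE} is only needed when the namespace has to appear somewhere else in the YAML.

apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
  # no namespace field here; clusterctl --target-namespace (or the
  # e2e framework's randomly generated namespace) is applied on create
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: OpenStackCluster
    name: ${CLUSTER_NAME}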
LGTM!! Thank you
/test ?
@sbueringer: The following commands are available to trigger jobs:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-openstack-conformance-test
In case anybody is wondering why the test failed: it's because of a race condition between SSH tunneling and cluster deletion. I fixed this in the other PR: cluster-api-provider-openstack/test/e2e/shared/exec.go, lines 78 to 88 in 9d5eb52.
I wouldn't backport it, as it doesn't happen frequently and I expect we will merge the other PR pretty soon, too.
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: "${CLUSTER_NAME}-crs-0"
spec:
  strategy: ApplyOnce
  clusterSelector:
    matchLabels:
      cni: "${CLUSTER_NAME}-crs-0"
  resources:
    - name: "cni-${CLUSTER_NAME}-crs-0"
      kind: ConfigMap
Do we need to update the templates in the templates directory to use ClusterResourceSet as well?
I think we should keep it as is. In almost all of the templates of the other providers it is only used for testing. I think it would break the quickstart documentation to change it.
I found that ClusterResourceSet is still experimental; after it reaches GA we could use it in the templates.
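For reference, a minimal sketch of the ConfigMap such a ClusterResourceSet entry points at (the data key and contents are assumptions, not this PR's actual test manifests); the CRS controller applies the manifests stored in the ConfigMap to every workload cluster matching the clusterSelector:

apiVersion: v1
kind: ConfigMap
metadata:
  name: "cni-${CLUSTER_NAME}-crs-0"
data:
  # arbitrary key; the value holds the full CNI manifest (e.g. Calico)
  # that is applied once (strategy: ApplyOnce) per matching cluster
  cni.yaml: |
    # ... CNI manifests inlined here ...

Because the feature is experimental, it also has to be enabled explicitly, e.g. via the EXP_CLUSTER_RESOURCE_SET feature-gate variable when initializing the management cluster.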
# Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
This is noisy... I am not familiar with the law, but do we really need this?
The copyright header?
I would keep it as is for now. But I wanted to do a follow-up anyway to get rid of the whole /usr/local/bin/ci-artifacts.sh file in this YAML. There are some reasons why we are not using the upstream function to add this functionality:
- They are using gsutil instead of curl. Thus they install the Google Cloud SDK first, which adds ~3-5 minutes to node startup in our environment (probably in all environments).
- They are also pulling the "kube-apiserver", "kube-controller-manager" and "kube-scheduler" binaries for worker nodes, which is not necessary and also adds time to the test.
- When we use the upstream method to patch our KubeadmControlPlane and KubeadmConfigTemplate to add the script, we lose the /etc/kubernetes/cloud.conf and /etc/certs/cacert files because the whole files array is overwritten (see the sketch below). This is a limitation of kustomize, but I assume with the fixes merged to kustomize a week ago we are able to improve this (Strategic merge support for CRDs kustomize#2825), although it's not completely trivial.
I wanted to do a follow-up to improve all those things in the cluster-api util func GenerateCIArtifactsInjectedTemplateForDebian, which I currently copied into tmp_template.go, and then drop this in our repo. For now I would like to keep the diff as small as possible.
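To illustrate the files limitation from the list above (a hypothetical, simplified sketch; the variable names are assumptions, not this PR's actual patches): with kustomize's strategic-merge patching, lists on CRDs are replaced rather than merged, so a patch that adds one entry wipes out the existing ones.

# base: KubeadmControlPlane with an existing files entry
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: ${CLUSTER_NAME}-control-plane
spec:
  kubeadmConfigSpec:
    files:
      - path: /etc/kubernetes/cloud.conf
        content: ${OPENSTACK_CLOUD_PROVIDER_CONF_B64}
        encoding: base64
# a patch that adds /usr/local/bin/ci-artifacts.sh as another files
# entry does not get merged: after kustomize runs, files contains ONLY
# ci-artifacts.sh, and cloud.conf (and the cacert file) are lost,
# because kustomize cannot strategically merge the list for a CRD.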
I opened an issue for it here: #804
Thanks, understood.
Force-pushed from bade7b9 to efa7d41
/test pull-cluster-api-provider-openstack-make-conformance
/hold
@hidekazuna Would be nice if you could take a look and lgtm in case you agree with my comments. Thank you :) I would merge after your lgtm and a successful test run. I rebased onto master because of a conflict.
@sbueringer: The specified target(s) for /test pull-cluster-api-provider-openstack-make-conformance were not found.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test pull-cluster-api-provider-openstack-conformance-test
@sbueringer: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test pull-cluster-api-provider-openstack-conformance-test (let's see if it reoccurs; we pretty much had 100% stable tests before the last rebase)
/lgtm
/hold cancel
This PR refactors our conformance test to use the cluster-api test framework.
It doesn't introduce additional e2e tests yet, but after this PR this will be fairly easy.
I just didn't want to make the PR even bigger.
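As a rough illustration of what the cluster-api test framework consumes (an abbreviated, hypothetical sketch; names, versions, and paths are assumptions, not this repository's actual config), the e2e tests are driven by a single config file listing images, providers, variables, and wait intervals:

managementClusterName: capo-e2e
images:
  # images pre-loaded into the kind management cluster
  - name: gcr.io/k8s-staging-capi-openstack/capi-openstack-controller:e2e
    loadBehavior: mustLoad
providers:
  - name: cluster-api
    type: CoreProvider
    versions:
      - name: v0.4.0
        value: https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.0/core-components.yaml
variables:
  KUBERNETES_VERSION: v1.21.2
intervals:
  default/wait-cluster: ["20m", "10s"]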
I also synced the Makefiles with the ones from AWS.
Notes:
xref: #577
/hold