What would you like to be added:
Document the explicit versioning policy and release criteria for both the CNI and calico components of this project.
It is unclear whether the <major>.<minor>.<revision> schema explicitly follows SemVer 2.0 or is just a project-specific pattern. Under SemVer, a revision (patch) bump would promise only backwards-compatible bug fixes.
It is also unclear whether there are any concrete dependencies or compatibility requirements between the CNI and calico components vended under the same release versions.
Also, it would be helpful to provide more detailed upgrade guides when major component changes occur in the project.
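For context on why the SemVer question matters: consumers' tooling often treats same-major.minor updates as safe to auto-apply. A minimal sketch of that assumption (the version strings are illustrative):

```shell
# Under SemVer 2.0, a patch-level bump (same major.minor) is expected to
# contain only backwards-compatible bug fixes.
old=1.7.5
new=1.7.9
# Strip the final ".revision" component and compare major.minor.
[ "${old%.*}" = "${new%.*}" ] && echo "patch-level update: no breaks expected"
```

If the project does not actually follow SemVer, this assumption silently fails, which is exactly the scenario described below.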
Why is this needed:
The recent 1.7.9 release included the migration of calico management to the tigera operator, which introduced what a consumer could consider a backwards-compatibility-breaking change. See the pull request for reference.
The PR notes call out some, but not all, of the changes that would qualify this release as containing backwards-compatibility breaks:
New namespace creation requirements in the default setup (tigera-operator, calico-system), outside the kube-system namespace traditionally used by the previous calico version.
Expansion of K8s RBAC from generally read-only policies and limited management of crd.projectcalico.org CRDs to full management of multiple resources cluster-wide (including secrets).
CRDs migrated to use apiextensions.k8s.io/v1, introduced in K8s 1.16; but since 1.15 EKS/K8s clusters are still officially supported by AWS, this could cause compatibility issues for 1.15 clusters, which serve only apiextensions.k8s.io/v1beta1.
Rollback to the previous version is difficult or not possible, as per the PR comments:
Will this break upgrades or downgrades? Has updating a running cluster been tested?
Because of the way the upgrade will happen with the operator there is a problem upgrading on a small cluster, 3 nodes or less. This is because for 3 nodes or less the operator tries to deploy a typha for each node and the current calico install uses at least one typha and multiple typhas cannot run on a single node.
Once a cluster is upgraded with these changes, it will not be simple to downgrade back to a version of Calico that was installed without the operator.
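To make the apiextensions concern above concrete, one can check which API versions a cluster actually serves before applying the new manifests. This is a sketch only; it assumes kubectl access to the target cluster:

```shell
# List the apiextensions group versions the cluster serves. A 1.15 cluster
# shows only v1beta1, so manifests declaring apiextensions.k8s.io/v1
# (added in K8s 1.16) would fail to apply there.
kubectl api-versions | grep apiextensions
```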
The CHANGELOG.md only notes this as an "Improvement", with no immediate indication of how that term is defined in the context of these projects. Given the changes noted, it would be tremendously helpful to also provide a more explicit upgrade guide for these cases.
AWS Official EKS Documentation has previously recommended applying changes directly from the cni project's release branch, so consumers whose processes or automation are not pinned to a specific fix revision (e.g. 1.7.5) will receive these types of larger changes, which may fall outside their expectations based on their understanding of the versioning scheme.
Example from the docs' previous recommendation: kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.7/config/v1.7/calico.yaml
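A hedged workaround for consumers who want to avoid pulling in larger changes unreviewed: apply from an exact release tag rather than the moving release branch. The tag path below is an assumption based on the repository's tagging convention and should be verified against the repo:

```shell
# Pin to a fixed tag (assumed: v1.7.5 tag exists with this config path)
# instead of the release-1.7 branch, so later merges to the branch do not
# change what this command applies.
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml
```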
I understand that the method by which manifests are vended will be changing as noted in this PR, but I hope this request is considered relevant to both the current state (as long as that is maintained) and the new process being adopted.
Thanks, and yes, it makes sense to have proper documentation. I will work on it once I get some bandwidth. Since there is already an issue #685, can you please move the notes there so we can close this issue?