[Discussion] Compatibility Guarantees and Versioning #421

Closed
Jefftree opened this issue Aug 24, 2023 · 10 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

Jefftree (Member) commented Aug 24, 2023

kube-openapi has always been untagged on v0, with importers pointing their go.mod files directly at a specific revision (e.g. kube-openapi v0.0.0-20230717233707-2695361300d9). Staying on v0 lets us iterate quickly, but it also means we provide no compatibility guarantees for breaking changes. I think this repo has reached the point where we should reconsider our versioning scheme.
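For context, here is what that pinning looks like from a consumer's side; a minimal go.mod sketch, where the consumer module path is made up for illustration but the pseudo-version is the real v0 revision format cited above:

```
// Hypothetical consumer; pins kube-openapi to a specific untagged revision.
module example.com/my-controller

go 1.20

require k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9
```

Because the repo has no semver tags, go get -u resolves @latest to a pseudo-version of the tip of the default branch, so consumers can silently jump across breaking changes.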

  • Recently we've had issues such as client-go#1269 (github.com/google/gnostic/openapiv2 moved to github.com/google/gnostic-models/openapiv2), a breaking change that affected quite a large number of clients. kube-openapi used to be a dependency only of k/k, with kube-openapi versions moving in lockstep with k/k versions. With the additional client-go dependency, the lockstep is no longer guaranteed, and forcing it (e.g. by asking users not to run go get -u) is not a great user experience. The main problem is that pre-release/development versions of kube-openapi get picked up by older versions of client-go.

  • We have 900+ indirect dependents on the repo (most via client-go) and 90+ direct dependents (mostly via openapi-gen or proto), and this will probably grow as more validation components are added.

  • Most updates to this repo directly affect k/k, and a follow-up PR is created to bump the version in k/k. For many PRs, the reviewer asks to see the k/k diff (tests, diff in the openapi spec, etc.) before approving the kube-openapi PR anyway. Especially with multiple rounds of review, it becomes difficult to tell whether the k/k PR has synced with the latest kube-openapi PR changes. It would be great if we could improve this process.

  • Iteration frequency is a big pro of kube-openapi having its own release cadence. However, we are quite locked into k/k's release cadence, as almost all of the code is a dependency of k/k: every bump to kube-openapi results in a bump to k/k. The more changes we bundle up before bumping k/k, the higher the likelihood of them causing problems.

With all that, here are a few suggestions:

  • Support kube-openapi as a staging repo in k/k: This would solve quite a few problems at once. Lockstep would be enforced because client-go and kube-openapi versions would be released together, and all revisions would run through the k/k test infra, so reviewers could better judge the impact of a PR. The drawbacks are that we would lose the ability to merge code during k/k code freeze, and CI would take much longer.
  • Release new major versions for breaking changes: This is similar to how https://github.com/kubernetes-sigs/structured-merge-diff is handled (see the sketch after this list). This would be quite effortful for us, especially during development (updating all imports, etc.), and we might bump multiple times during a release. However, it would solve the compatibility problem, and we should see no cases of go get -u breaking users.
  • Release kube-openapi versions in sync with k/k versions: One drawback of syncing with the k/k version is that we would in theory only bump k/k's kube-openapi version once per release (an additional burden on reviewers, who would review one large diff), or use pre-release versions in k/k (which still requires a bump to a stable version on release, and we don't have tooling for that). A kube-openapi maintainer would need to cut a release branch for every release, even in cycles where we make no contributions to the repo, to avoid cases like v1.29 referencing v0.28 (which may or may not be acceptable, but is a case we need to think about).
  • Stay on v0 but provide better compatibility guarantees: We already try to maintain backwards compatibility on a best-effort basis. This would result in more obsolete code paths and still wouldn't encourage users to update their references until certain code paths are completely removed. Library updates (e.g. gnostic → gnostic-models) would still be a problem under this model.
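To make the second suggestion concrete: Go's semantic import versioning puts the major version in the module path, so each breaking release means retagging and a repo-wide import rewrite. A hypothetical sketch (no v2 of kube-openapi exists today):

```
// go.mod of kube-openapi after a hypothetical v2.0.0 breaking release:
module k8s.io/kube-openapi/v2

go 1.20
```

Every importer would then update each import path, e.g. "k8s.io/kube-openapi/pkg/validation/spec" becomes "k8s.io/kube-openapi/v2/pkg/validation/spec", which is the per-bump churn described above.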
apelisse (Member) commented

Possibly third solution:

  • Version kube-openapi like kubernetes: we create a new branch for each version of kubernetes/client-go. In between releases we can break compatibility if we want. Running go get -u will only update within the branch.

No? I think that'd be fairly simple to do, am I missing something?

Jefftree (Member, Author) commented Aug 24, 2023

> Possibly third solution:
>
>   • Version kube-openapi like kubernetes: we create a new branch for each version of kubernetes/client-go. In between releases we can break compatibility if we want. Running go get -u will only update within the branch.
>
> No? I think that'd be fairly simple to do, am I missing something?

This was part of the second suggestion, but I'll edit it to make it clearer. I'll list a couple of drawbacks of this approach that would be solved for free if kube-openapi became a staging repo:

  • We'll need to reference a dev version of k/kube-openapi (e.g. k/k 1.28 would reference v0.27-20230824-xxxxxx, which would need to be flipped to a v0.28 kube-openapi once k/k 1.28 is released). go mod works off of tags, not branches, so we'd still need to pin commit hashes (see the sketch after this list).
  • A kube-openapi maintainer would need to cut a release branch for every release. This applies even if we make no contributions to the repo for a cycle, to avoid cases like v1.29 referencing v0.28 (which may or may not be acceptable, but is a case we need to think about).
  • The kube-openapi release needs to be synced with the client-go and k/k releases to prevent an "alpha" version from being released and clients upgrading to it. We could break people in the window between the kube-openapi and client-go releases, as in client-go#1269 (github.com/google/gnostic/openapiv2 moved to github.com/google/gnostic-models/openapiv2). I'd imagine we would still follow k/k's code-freeze schedule, so we would have to release kube-openapi before code freeze, while k/k and client-go would only be released after code freeze?
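A sketch of the mechanics behind the first bullet; the branch and tag names here are illustrative, not real refs in this repo:

```sh
# go modules can resolve a branch name, but only to a frozen pseudo-version:
go get k8s.io/kube-openapi@release-1.28
# go.mod then records something like:
#   k8s.io/kube-openapi v0.0.0-<timestamp>-<commit>

# A stable reference still requires a maintainer to push a tag per release:
git tag v0.28.0
git push origin v0.28.0
```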

Jefftree (Member, Author) commented

/cc @alexzielenski @sttts @liggitt

liggitt (Member) commented Aug 30, 2023

Even if we wanted to make k8s.io/kube-openapi a staging repo (I wouldn't), we can't do so as long as k8s.io/kubernetes has non-staging dependencies that reference it.

k8s.io/kubernetes → non-staging dep → staging dep is not allowed, since it would make it impossible to modify the staging dep in a way the non-staging dep would have to react to

sttts (Contributor) commented Aug 31, 2023

I am curious which parts of this repo people actually use.

I have always considered kube-openapi an internal repository without any guarantees (created to increase development velocity). And I think most of the code is written in a style that does not anticipate reuse elsewhere. In other words, semver would be cosmetic, but not honest. Semver where we constantly increase the major version has no use.

So if we want to make progress here, we should go package by package, check usage outside of kube, and then make a decision per package.

Note, and this is crucial for this repo: kube-openapi is a collection of very different functionality that is independently developed. Our go-openapi fork is very different from pkg/cached or pkg/builder. Semver for a collection repository like this is not a good strategy. We could apply semver to individual parts via nested go.mod files; that would make a lot more sense.
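A sketch of what per-package semver via nested modules could look like; the choice of which packages get their own module is hypothetical, though pkg/cached and pkg/builder are the real packages named above:

```
k8s.io/kube-openapi/
├── go.mod              # umbrella module, can stay on v0
├── pkg/cached/
│   └── go.mod          # module k8s.io/kube-openapi/pkg/cached
└── pkg/builder/
    └── go.mod          # module k8s.io/kube-openapi/pkg/builder
```

Each nested module is tagged with its path prefix (e.g. pkg/cached/v1.0.0), so go get can version it independently of the rest of the repo.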

apelisse (Member) commented Sep 5, 2023

Right, so the problem is mostly with whatever is pulled in by client-go. I'm a little concerned about a package-per-package policy inside the repo, but you're right: only the packages that are pulled in by client-go are problematic.

> Note, and this is crucial for this repo: kube-openapi is a collection of very different functionality that is independently developed. Our go-openapi fork is very different from pkg/cached or pkg/builder. Semver for a collection repository like this is not a good strategy. We could apply semver to individual parts via nested go.mod files; that would make a lot more sense.

Agreed with this.

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 27, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 26, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned on Mar 27, 2024
k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
