
Request to create an official fork of go-yaml #72

Closed
natasha41575 opened this issue Feb 23, 2022 · 28 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@natasha41575

natasha41575 commented Feb 23, 2022

Currently, the go-yaml library is used heavily by Kubernetes and many of its subprojects. In kustomize, we use go-yaml.v2 and go-yaml.v3, as well as this library, for all of our YAML processing.
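
(For context, here is a minimal, illustrative sketch of why both libraries show up in the same stack: sigs.k8s.io/yaml converts through encoding/json and honors json struct tags, while gopkg.in/yaml.v3 decodes YAML directly using yaml tags. The struct and values below are made up, not kustomize code.)

    // Illustrative only (not kustomize code): the same document decoded with
    // sigs.k8s.io/yaml (json-tag based, converts via encoding/json) and with
    // gopkg.in/yaml.v3 (yaml-tag based, direct YAML decoding).
    package main

    import (
        "fmt"

        yamlv3 "gopkg.in/yaml.v3"
        sigsyaml "sigs.k8s.io/yaml"
    )

    type config struct {
        Name     string `json:"name" yaml:"name"`
        Replicas int    `json:"replicas" yaml:"replicas"`
    }

    func main() {
        in := []byte("name: demo\nreplicas: 3\n")

        var a config
        if err := sigsyaml.Unmarshal(in, &a); err != nil { // uses the json tags
            panic(err)
        }

        var b config
        if err := yamlv3.Unmarshal(in, &b); err != nil { // uses the yaml tags
            panic(err)
        }

        fmt.Println(a, b)
    }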

An update to go-yaml v3 resulted in what would have been a breaking change in kustomize. We talked to the go-yaml maintainer, who agreed to make the fixes, but could not complete them in time for our deadline. As a result, we created a temporary fork of go-yaml v3 with the intent to remove the fork once the changes were made upstream. More details can be found in this issue: kubernetes-sigs/kustomize#4033
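
(For readers unfamiliar with how a temporary fork is usually wired in: one common approach is a go.mod replace directive, sketched below with placeholder module names and paths. Kustomize's actual mechanism may differ; notably, replace directives only apply to the main module and do not propagate to downstream consumers such as kubectl, which is part of the problem described here.)

    // go.mod of a consumer module (module name, version, and fork path are placeholders).
    module example.com/yaml-consumer

    go 1.17

    require gopkg.in/yaml.v3 v3.0.1

    // Temporarily point the yaml.v3 import at a local fork carrying the fix;
    // the fork is assumed to keep `module gopkg.in/yaml.v3` in its own go.mod.
    // Note: replace directives only take effect when this module is the main module.
    replace gopkg.in/yaml.v3 => ../go-yaml-fork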

Unfortunately, shortly after we created the fork, the maintainer stopped responding to our issues and did not review the PRs we made to upstream the fix. As I understand it, there is only one go-yaml maintainer and they are very busy. The last response we received was in June 2021: go-yaml/yaml#720 (comment)

Maintaining the fork within kustomize is undesirable not only because of the overhead for the kustomize maintainers, but also because kustomize is built into kubectl, so our fork is now also vendored in k/k. It was accepted into kubectl with the understanding that it would be a temporary fork, but now I'm not so sure we will ever be able to remove it. Beyond the fixes that we (kustomize) need upstreamed, it seems we are not the only ones with this problem; there are lots of open issues and PRs in the go-yaml library with no response.

We would like to propose an official, SIG-sponsored fork of the go-yaml library. I had a very brief exchange about this in Slack, and I think it's time to push it forward. In that exchange I was advised that if we do create this official fork, we would probably want to fold it into this repo, which is why I am creating the issue here.

I would appreciate some advice on next steps and some indication about whether or not a permanent fork would be acceptable.

cc @liggitt @apelisse @kevindelgado @KnVerey @mengqiy

@liggitt

liggitt commented Feb 23, 2022

I'm cautiously in favor, for a couple reasons:

I would want to structure it similarly to https://sigs.k8s.io/json, where we expose as little of the API surface as possible while still providing the needed functions to callers
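
(Purely as an illustration of the "small public surface" idea, and not the actual sigs.k8s.io/json or sigs.k8s.io/yaml API: a wrapper package could export only a couple of functions while the underlying yaml implementation, stood in here by gopkg.in/yaml.v3, stays hidden behind the package boundary.)

    // Hypothetical sketch of a minimal public surface: callers only ever see
    // Marshal/Unmarshal; the underlying go-yaml code (stood in here by
    // gopkg.in/yaml.v3, but an internal forked copy in the real proposal)
    // is never exposed directly.
    package yamlwrap

    import yaml "gopkg.in/yaml.v3"

    // Unmarshal decodes YAML into out.
    func Unmarshal(data []byte, out interface{}) error {
        return yaml.Unmarshal(data, out)
    }

    // Marshal encodes in as YAML.
    func Marshal(in interface{}) ([]byte, error) {
        return yaml.Marshal(in)
    }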

@KnVerey

KnVerey commented Feb 24, 2022

Thanks for filing this, Natasha. I also support this request. Maintaining a fork is not the ideal outcome, but it is better than being unable to make critical fixes to such a key dependency. And if we do need a fork, here makes more sense than inside Kustomize.

I would want to structure it similarly to https://sigs.k8s.io/json, where we expose as little of the API surface as possible while still providing the needed functions to callers

Just so we're all aware, this may require exposing quite a bit more of the existing surface than you might expect. kyaml extensively uses, and even exposes, the intermediate yaml.Node struct from yaml.v3, for one thing.
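
(To make that concrete, here is a small self-contained example of the yaml.Node API from gopkg.in/yaml.v3, not kyaml code: decoding into a Node preserves key order, comments, and position information, which is hard to hide behind a narrow wrapper.)

    // Self-contained illustration (not kyaml code) of the yaml.Node API from
    // gopkg.in/yaml.v3: decoding into a Node keeps key order, comments, and
    // line/column positions, which is why kyaml builds on it and exposes it.
    package main

    import (
        "fmt"

        yaml "gopkg.in/yaml.v3"
    )

    func main() {
        src := []byte("# keep this comment\nname: demo\nreplicas: 3\n")

        var doc yaml.Node
        if err := yaml.Unmarshal(src, &doc); err != nil {
            panic(err)
        }

        // A top-level decode yields a document node; its first child is the
        // root mapping.
        root := &doc
        if root.Kind == yaml.DocumentNode && len(root.Content) > 0 {
            root = root.Content[0]
        }

        // Mapping content alternates key and value nodes; comments and
        // positions ride along on the nodes themselves.
        for i := 0; i+1 < len(root.Content); i += 2 {
            k, v := root.Content[i], root.Content[i+1]
            fmt.Printf("%s = %s (line %d, head comment %q)\n", k.Value, v.Value, k.Line, k.HeadComment)
        }

        // Re-encoding the node tree preserves comments and key order.
        out, err := yaml.Marshal(&doc)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }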

@neolit123
Member

neolit123 commented Feb 28, 2022

In other discussions about creating larger internal k8s forks (https://github.com/gogo/protobuf ?), it was noted that the better investment would be to convince the existing (mostly unavailable) maintainers to grant k8s maintainers maintainer rights.

That seems like the better option on paper; has it been considered and communicated outside of GitHub, e.g. via email with the existing maintainers?
If k-sigs/yaml becomes the owner of the go-yaml v3 code, I'm not convinced a k8s SIG would have the bandwidth to own all the incoming topics from new users seeking some missing feature X in the original project.

It's also worth noting that this repository has not been actively maintained, as can be confirmed by the number of tickets rotted/closed by bots:
https://github.com/kubernetes-sigs/yaml/issues?q=is%3Aissue+is%3Aclosed

@natasha41575
Author

That seems like the better option on paper; has it been considered and communicated outside of GitHub, e.g. via email with the existing maintainers?
If k-sigs/yaml becomes the owner of the go-yaml v3 code, I'm not convinced a k8s SIG would have the bandwidth to own all the incoming topics from new users seeking some missing feature X in the original project.

In other discussions about creating larger internal k8s forks (https://github.com/gogo/protobuf ?), it was noted that the better investment would be to convince the existing (mostly unavailable) maintainers to grant k8s maintainers maintainer rights.

Who would we want to add as maintainers to the go-yaml library? To me it seems like we can either:

  • add k8s maintainers to the go-yaml maintainer list (maybe), and have them maintain go-yaml upstream. There are effectively no active go-yaml maintainers at the moment, so it would really be just the kubernetes folks maintaining it in this case.
  • fork go-yaml and maintain the fork in one of our SIG repos.

The maintenance work that would fall on k8s maintainers would be the same in either case, so I'm not convinced that the question of bandwidth is relevant in choosing one of these options over the other.

Additionally, there are some companies that have strict rules about which repos they can contribute to. Some k8s members can only contribute to kubernetes-sponsored repos. In this case, the fact that the go-yaml code lives elsewhere would be an unnecessary barrier that I would like to remove unless there is some other obvious benefit to not forking.

@lavalamp

lavalamp commented Mar 8, 2022

Who is volunteering to maintain this new repo?

@liggitt

liggitt commented Mar 8, 2022

A subpackage of this repo actually seems like the most natural location if a fork were going to be brought in-org.

@inteon
Member

inteon commented Mar 9, 2022

I have some contributions/improvements that could be of value to the fork at some point (see https://github.com/amurant/go-yaml/tree/v3_next). The main idea in that branch is to add a YAML test suite and to make the implementation more conformant, based on go-yaml/yaml#798.

BTW: I'm pro-fork; once the original maintainer responds, the changes can be upstreamed. I think it might also be useful for tools like Helm to use the fork directly instead of the gopkg.in/yaml.v2 package.
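
(As a rough illustration of the kind of check such a test suite adds, and not code from the branch linked above: a conformance-style round-trip test might decode a document, re-encode it, and decode again, requiring the in-memory values to match.)

    // Sketch of a conformance-style round-trip check (not code from the
    // linked branch): decode a document, re-encode it, decode again, and
    // require the two in-memory values to be identical.
    package yamlconformance

    import (
        "reflect"
        "testing"

        yaml "gopkg.in/yaml.v3"
    )

    func TestRoundTrip(t *testing.T) {
        src := []byte("a: 1\nb:\n  - x\n  - y\n")

        var first interface{}
        if err := yaml.Unmarshal(src, &first); err != nil {
            t.Fatalf("first decode: %v", err)
        }

        out, err := yaml.Marshal(first)
        if err != nil {
            t.Fatalf("encode: %v", err)
        }

        var second interface{}
        if err := yaml.Unmarshal(out, &second); err != nil {
            t.Fatalf("second decode: %v", err)
        }

        if !reflect.DeepEqual(first, second) {
            t.Errorf("round trip changed the value:\nfirst:  %#v\nsecond: %#v", first, second)
        }
    }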

@natasha41575
Author

natasha41575 commented Mar 14, 2022

At the SIG API Machinery meeting on 3/9/22, we reached consensus that this fork would be accepted, provided that we add a disclaimer to the README indicating that we will only accept a small set of bug fixes and will not be responsible for any major features.

Here is a draft of the disclaimer:

Disclaimer

This package is a fork of the go-yaml library and is intended solely for consumption by Kubernetes projects. In this fork, we plan to support only critical changes required for Kubernetes, such as small fixes for bugs and regressions. Larger, general-purpose feature requests should be made in the upstream go-yaml library, and we will reject such changes in this fork unless we are pulling them from upstream.

@lavalamp

lavalamp commented Mar 14, 2022 via email

@natasha41575
Author

@lavalamp @liggitt is there anything else I need to do before submitting a PR to fork?

@lavalamp

lavalamp commented Mar 16, 2022 via email

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jun 14, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 14, 2022
@natasha41575
Author

/remove-lifecycle rotten

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Oct 12, 2022
@KnVerey

KnVerey commented Oct 13, 2022

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 12, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 12, 2022
@KnVerey

KnVerey commented Dec 12, 2022

/reopen
/remove-lifecycle rotten

k8s-ci-robot reopened this on Dec 12, 2022
@k8s-ci-robot

@KnVerey: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Mar 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 11, 2023
@natasha41575
Author

/remove-lifecycle rotten

I just updated the corresponding PR yesterday, so this is still in progress!

k8s-ci-robot removed the lifecycle/rotten label on Apr 11, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jul 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Feb 18, 2024