This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

Helm chart 0.x.x to 0.9.x upgrade fails #1489

Closed
tehlers320 opened this issue Jan 27, 2022 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tehlers320
Contributor

What happened:
The new settings in the chart appear to be applied before the updated CRD goes in:

Error: error validating "": error validating data: ValidationError(KubeFedConfig.spec.controllerDuration): unknown field "cacheSyncTimeout" in io.kubefed.core.v1beta1.KubeFedConfig.spec.controllerDuration

What you expected to happen:
Upgrades work.

How to reproduce it (as minimally and precisely as possible):
Upgrade the Helm chart from 0.8.1 to 0.9.0.

Anything else we need to know?:

I think adding this annotation to the CRD may fix it, not sure:

annotations:
  "helm.sh/hook": pre-install
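For context, a hypothetical sketch of where that annotation would sit in the chart's CRD template. The CRD/group names are inferred from the error message (`io.kubefed.core.v1beta1.KubeFedConfig`); everything else is a placeholder, not the chart's actual template. One caveat worth noting: `pre-install` hooks only fire on `helm install`, so an upgrade path would presumably also need `pre-upgrade`.

```yaml
# Hypothetical sketch only — names inferred from the error message,
# fields are placeholders, not the real kubefed chart template.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kubefedconfigs.core.kubefed.io
  annotations:
    # As suggested above; note pre-install alone does not run on
    # `helm upgrade`, which is the failing path here.
    "helm.sh/hook": pre-install
spec:
  group: core.kubefed.io
  names:
    kind: KubeFedConfig
    plural: kubefedconfigs
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```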
Environment:

  • Kubernetes version (use kubectl version)
  • KubeFed version
  • Scope of installation (namespaced or cluster)
  • Others

/kind bug

@tehlers320 tehlers320 changed the title Chart 0.x.x to 0.9.x upgrade fails Helm chart 0.x.x to 0.9.x upgrade fails Jan 27, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 27, 2022
@jimmidyson
Contributor

Could you please provide simplest repro steps?

@ra-grover

Hey @jimmidyson, upgrading the Helm chart from 0.8.1 to 0.9.2 triggers this error:
helm upgrade -n kube-federation-system --set docker_tag=v0.9.2 kubefed .
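A hypothetical workaround sketch (not confirmed in this thread): apply the chart's updated CRDs by hand before running the upgrade, so validation sees the new `cacheSyncTimeout` field. The manifest path below is an assumption about the chart layout, not a verified location.

```shell
# Hypothetical: pre-apply the updated CRDs so the new fields validate.
# The path is an assumed chart layout — adjust to where the chart
# actually ships its CRD manifests.
kubectl apply -f charts/kubefed/charts/controllermanager/crds/crds.yaml

# Then retry the upgrade that previously failed validation.
helm upgrade -n kube-federation-system --set docker_tag=v0.9.2 kubefed .
```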

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mimmus

mimmus commented Dec 22, 2022

Same issue.
@tehlers320 were you able to solve this?
