Add RollingUpdate strategy to Calico DaemonSet manifests #1272

Closed
Quentin-M opened this issue Oct 27, 2017 · 1 comment · Fixed by #1506

Quentin-M commented Oct 27, 2017

Hi there,

Current Behavior

According to Kubernetes' documentation, DaemonSets have two update strategy types:

  • OnDelete: This is the default update strategy, for backward compatibility. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete old DaemonSet pods. This is the same behavior as DaemonSets in Kubernetes version 1.5 or before.
  • RollingUpdate: With the RollingUpdate update strategy, after you update a DaemonSet template, old DaemonSet pods are killed and new DaemonSet pods are created automatically, in a controlled fashion.

Because the Calico self-hosted manifests do not specify a strategy, they all use the default, OnDelete: after kubectl apply -f calico.yaml is executed, the running pods keep the old template until they are manually deleted (or their nodes are removed and re-added), at which point replacements are created from the new template.
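
For reference, this is effectively what the stored object carries when the manifest omits the field (a sketch, assuming the extensions/v1beta1 DaemonSet API these manifests used at the time; the later apps/v1 API changed the default to RollingUpdate):

updateStrategy:
  type: OnDelete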

Expected Behavior

It'd be awesome to have Kubernetes execute rolling upgrades of the Calico pods instead, if that's technically possible on the Calico side of things.

Possible Solution

Adding the following to the manifests:

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    # At most one node's pod may be down while the update proceeds.
    maxUnavailable: 1
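
For illustration, a minimal sketch of where the stanza would sit in the self-hosted manifest (the calico-node name, kube-system namespace, and extensions/v1beta1 API group are assumptions based on manifests of this era, not taken from this issue):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods one node at a time
  template:
    # ... existing pod template, unchanged ...

With maxUnavailable: 1, the DaemonSet controller deletes and recreates the pods one node at a time, waiting for each replacement to become Ready before moving on.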

caseydavenport commented Oct 27, 2017

Duplicate of #777, but I like this one better so I'll close #777.

Yes, I think we should do this across all of our k8s manifests in master. Would anyone like to submit a patch? :)
