
Umbrella issue for In place propagation of changes affecting Kubernetes objects only proposal implementation #7731

Closed
18 of 19 tasks
ykakarap opened this issue Dec 12, 2022 · 20 comments
Assignees
Labels
triage/accepted Indicates an issue or PR is ready to be actively worked on.
Milestone

Comments

@ykakarap
Contributor

ykakarap commented Dec 12, 2022

This is an umbrella issue for the implementation of the In place propagation of changes affecting Kubernetes objects only proposal. Now that the proposal is merged, the implementation can begin.

The work is broken down into:

Follow-up:

Follow-up (can do after v1.4):


@ykakarap
Contributor Author

/assign

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Dec 12, 2022
@fabriziopandini
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 14, 2022
@sbueringer
Member

sbueringer commented Dec 23, 2022

Q: (not 100% sure). As of today, rolloutAfter is not implemented for MachineDeployments. Because of that we recommend folks to "make an arbitrary change to its spec.template" or to use clusterctl alpha rollout restart machinedeployment/my-md-0 (link). clusterctl alpha rollout restart machinedeployment/my-md-0 relies on the fact that an annotation change triggers a rollout which actually replaces Machines.

Does this mean that, to avoid breaking this behavior, part of this effort will be to implement rolloutAfter for MachineDeployments and to modify clusterctl alpha rollout restart machinedeployment/my-md-0 accordingly? I'm not sure if, after in-place mutation, there are any other fields left that we could modify instead.

xrefs:
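For context, the pattern the comment above refers to is bumping an annotation under the MachineDeployment's spec.template so the controller sees a template change and replaces Machines. Below is a minimal Go sketch of that pattern against the Cluster API v1beta1 types and a controller-runtime client; the annotation key is purely illustrative and not necessarily the one clusterctl actually sets.

```go
package rollout

import (
	"context"
	"time"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// restartMachineDeployment bumps an annotation under spec.template so that the
// MachineDeployment controller sees a template change and rolls out new Machines.
// The annotation key below is illustrative only.
func restartMachineDeployment(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment) error {
	base := md.DeepCopy()
	if md.Spec.Template.ObjectMeta.Annotations == nil {
		md.Spec.Template.ObjectMeta.Annotations = map[string]string{}
	}
	md.Spec.Template.ObjectMeta.Annotations["example.cluster.x-k8s.io/restartedAt"] = time.Now().Format(time.RFC3339)
	return c.Patch(ctx, md, client.MergeFrom(base))
}
```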

@sbueringer
Member

Q: Do we have to track the following as well?

In order to prevent this we are modifying the MachineDeployment controller in order to pick the "semantically equal" MachineSet with more Machines, thus avoiding or minimizing turbulence in the Cluster.
https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20221003-In-place-propagation-of-Kubernetes-objects-only-changes.md#machinedeployment-rollouts
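To illustrate the quoted behavior: among MachineSets that are semantically equal to the desired state, the controller would prefer the one that already owns the most Machines, so that reusing it minimizes churn. A simplified Go sketch of that selection (an assumption of how it could look, not the actual controller code):

```go
package rollout

import (
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// pickMachineSetWithMostMachines returns, among MachineSets already known to be
// "semantically equal" to the desired state, the one that currently owns the most
// Machines, so that reusing it causes as little turbulence as possible.
func pickMachineSetWithMostMachines(semanticallyEqual []*clusterv1.MachineSet) *clusterv1.MachineSet {
	var best *clusterv1.MachineSet
	for _, ms := range semanticallyEqual {
		if best == nil || ms.Status.Replicas > best.Status.Replicas {
			best = ms
		}
	}
	return best
}
```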

@fabriziopandini
Member

Another thing that must be considered is whether to apply in-place propagation to the new/current MachineSet only or to old MachineSets as well.

@sbueringer
Member

Another thing that must be considered is whether to apply in-place propagation to the new/current MachineSet only or to old MachineSets as well.

That's a very good point. I think if we apply it to old MachineSets as well it would essentially break the history/rollback functionality?

@ykakarap
Contributor Author

ykakarap commented Jan 9, 2023

Q: (not 100% sure). As of today, rolloutAfter is not implemented for MachineDeployments. Because of that we recommend folks to "make an arbitrary change to its spec.template" or to use clusterctl alpha rollout restart machinedeployment/my-md-0 (link). clusterctl alpha rollout restart machinedeployment/my-md-0 relies on the fact that an annotation change triggers a rollout which actually replaces Machines.

Does this mean that, to avoid breaking this behavior, part of this effort will be to implement rolloutAfter for MachineDeployments and to modify clusterctl alpha rollout restart machinedeployment/my-md-0 accordingly? I'm not sure if, after in-place mutation, there are any other fields left that we could modify instead.

xrefs:

I do not have an answer for this yet, but I had a similar question in the KCP rollout PR (ref: #6858 (review)).

One thing to consider while using RolloutAfter for triggering manual rollouts: if a user has already set a RolloutAfter value to a later time, we would be overriding that value and the original information would be lost.
We should take a step back and look at how we want to trigger manual rollouts for both KCP and MDs.

Also, how about tracking this discussion in a dedicated issue?
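Purely as an illustration of the concern above (rolloutAfter did not exist on MachineDeployments at this point, and this is not how any command actually behaves), a restart-style command could avoid silently discarding a user-set future value by refusing to overwrite it:

```go
package rollout

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// computeRolloutAfter returns the timestamp a "rollout restart"-style command could
// write, but refuses to overwrite a rolloutAfter value the user has already set to a
// point in the future, so that information is not silently lost.
func computeRolloutAfter(existing *metav1.Time, now time.Time) (*metav1.Time, error) {
	if existing != nil && existing.Time.After(now) {
		return nil, fmt.Errorf("rolloutAfter is already set to %s, which is in the future; refusing to overwrite it", existing.Time.Format(time.RFC3339))
	}
	t := metav1.NewTime(now)
	return &t, nil
}
```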

@sbueringer
Member

sbueringer commented Jan 9, 2023

One thing to consider while using RolloutAfter for triggering manual rollouts: if a user has already set a RolloutAfter value to a later time, we would be overriding that value and the original information would be lost.

That's a question about the UX of the clusterctl command for the clusterctl PR, right?

Feel free to move it to a separate issue; I'm just wondering how the rolloutAfter topic might impact the current issue.

@fabriziopandini
Member

@ykakarap do you mind adding #7293 to the task list, given that this work is complementary to #7088 for the autoscaler support in ClusterClass?

@ykakarap
Contributor Author

@ykakarap do you mind adding #7293 to the task list, given that this work is complementary to #7088 for the autoscaler support in ClusterClass?

done

@ykakarap
Contributor Author

/milestone v1.4

@k8s-ci-robot
Contributor

@ykakarap: You must be a member of the kubernetes-sigs/cluster-api-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Cluster API Maintainers and have them propose you as an additional delegate for this responsibility.

In response to this:

/milestone v1.4

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sbueringer
Member

@ykakarap Thx for updating the issue, gives a great overview!

@enxebre
Member

enxebre commented Feb 27, 2023

@sbueringer @fabriziopandini @ykakarap The in-place propagation feature is breaking and potentially disruptive for existing cluster workflows, e.g. automation that relies on label changes to trigger a rolling upgrade.
How are we planning to contain the disruption for existing workflows? Is the plan just to document the changes/impact clearly as part of the minor release?

@sbueringer
Member

We are planning to implement RolloutAfter on MachineDeployments, adjust clusterctl accordingly, and update the documentation (a corresponding note in the release notes also makes sense, I think).

@ykakarap
Contributor Author

ykakarap commented Feb 27, 2023

+1 to what Stefan said. Implementing rolloutAfter support for MachineDeployments is tracked in this umbrella issue and will be done as part of the v1.4 release.

@ykakarap
Contributor Author

@sbueringer @fabriziopandini Can I move all the follow-up items into separate issues and close this?

@fabriziopandini
Member

fabriziopandini commented Mar 20, 2023

I'm +1 to closing this and migrating the pending items to a follow-up issue as soon as the documentation PR is out.

@ykakarap
Contributor Author

All the follow-up items have been moved into separate issues. This issue can now be closed.

/close

@k8s-ci-robot
Contributor

@ykakarap: Closing this issue.

In response to this:

All the follow-up items have been moved into separate issues. This issue can now be closed.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
