daemon: skip or perform node reboot based on rebootAction #2254
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sinnykumari. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Today the MCO defaults to rebooting the node in order to successfully apply changes to the node. This is the safest way to apply a change, as we don't have to worry about things like which changes are safe to apply without a reboot. In certain environments rebooting is expensive and, due to complex hardware setups, can sometimes cause problems where the node won't boot back up. This will help avoid rebooting nodes in certain cases where the MCO thinks it is safe to do so. See - openshift/enhancements#159
Force-pushed from 9ce5184 to 84362bf.
@sinnykumari: The following tests failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
case rebootActionReloadCrio:
	if err := reloadCrioConfig(); err != nil {
		dn.logSystem("Reloading crio configuration failed, rebooting: %v", err)
		dn.reboot(fmt.Sprintf("Node will reboot into config %s", configName))
	}
One thing I'm debating is whether we want to hard-reboot here or block + degrade. It seems to me that if we are explicitly supporting "rebootless updates" then we should not reboot unexpectedly because of a failure. We should probably do e.g. what we do when a file write fails. This might also allow us not to drain (although recovery is another thing we need to consider). What do you think @sinnykumari?
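For comparison, a degrade-style variant of the snippet above could look roughly like this (a sketch only, not what the PR does; it assumes the caller surfaces a returned error as a Degraded condition, the way a failed file write does):

case rebootActionReloadCrio:
	if err := reloadCrioConfig(); err != nil {
		// Return instead of rebooting; the daemon would mark itself Degraded
		// and an admin could retry or intervene.
		return fmt.Errorf("reloading crio configuration failed: %w", err)
	}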
If you degrade, the customer can't really recover from it and do something to make it work on a retry... so degrading might mean that it can't proceed. Not sure how that cuts, but since it's an optimization, it feels like rebooting should be the fallback.
(At least to me, rebootless upgrades aren't a promise; they're a best-effort optimization for certain cases where it can be done safely.)
We can, but isn't it better to reboot (crio will restart on reboot) and let the problem fix itself rather than degrade? I think we are not promising rebootless updates; rather, it is best effort, i.e. the MCO tries its best not to reboot for certain config changes, but if an error occurs it reboots. As this is a new feature, we can frame the documentation accordingly.
We can definitely become smarter in the future.
Ha, jinx! Sinny and I both wrote our responses at the same time 😆
		}
		state.currentConfig = state.pendingConfig
	}

	if state.bootstrapping {
		if err := dn.storeCurrentConfigOnDisk(state.currentConfig); err != nil {
Is this no longer needed?
Good catch, seems like it got lost during the function split-up.
// be called as a special case for the "bootstrap pivot".
func (dn *Daemon) finalizeAndReboot(newConfig *mcfgv1.MachineConfig) (retErr error) {

func reloadCrioConfig() error {
	// Send SIGHUP to crio so it reloads its configuration in place.
	_, err := runGetOut("pkill", "-HUP", "crio")
	return err
}
It would be nice if we could use systemctl reload crio.service instead. That way, crio can specify how exactly it needs to be reloaded (ExecReload) without the MCD needing to know the details. I don't think this will work today, since crio.service doesn't define an ExecReload (systemd will restart the service in this case).
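For illustration, the systemd-delegated variant described above could look roughly like the following sketch (the reloadCrioService name is hypothetical; it reuses the runGetOut helper from the diff above and, as noted, would only behave as a reload once crio.service defines an ExecReload):

func reloadCrioService() error {
	// Let systemd drive the reload so crio.service owns the ExecReload details.
	_, err := runGetOut("systemctl", "reload", "crio.service")
	return err
}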
Agreed, but I think for this release we are OK with this operation as it stands.
As Jerry said, we can, but we are trying to keep the impact as minimal as possible. Since crio supports live configuration reload for the container registry, we are making use of it in our use case.
// For non-reboot action, it applies configuration, updates node's config and state.
// In the end uncordon node to schedule workload.
// If at any point an error occurs, we reboot the node so that node has correct configuration.
func (dn *Daemon) performRebootAction(action rebootAction, configName string) error {
I would rename this so that it doesn't include "reboot" in the name (since we might not be rebooting). I'm also not sure that an enum is going to be expressive enough (feel free to tell me to stop over-complicating things and to come back in six months). I had envisioned this being expressed as a list of actions that needed to be taken, e.g. [ ] (rebootActionNone), [ reboot ] (rebootActionReboot), [ reload crio.service, reload foobar.service ] (rebootActionReloadCrio + reload foobar), so that we could build up the list as we enumerate the diffed config (e.g. any change to a particular set of paths triggers a reload of crio, any unrecognized change triggers a reboot).
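A rough sketch of that list-of-actions idea (all names and paths here are illustrative, not the types this PR ends up with):

type postConfigChangeAction string

const (
	actionReboot     postConfigChangeAction = "reboot"
	actionReloadCrio postConfigChangeAction = "reload crio.service"
)

// actionsForDiff accumulates the actions required by each changed path;
// any change we don't know how to apply live collapses the list to a reboot.
func actionsForDiff(changedPaths []string) []postConfigChangeAction {
	liveReloadable := map[string]postConfigChangeAction{
		// Illustrative mapping only.
		"/etc/containers/registries.conf": actionReloadCrio,
	}
	actions := []postConfigChangeAction{}
	for _, path := range changedPaths {
		action, ok := liveReloadable[path]
		if !ok {
			return []postConfigChangeAction{actionReboot}
		}
		actions = append(actions, action)
	}
	return actions
}

An empty slice would then correspond to rebootActionNone, and duplicate reloads could be deduplicated before execution.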
I've changed it to "PostConfigChangeAction" in an attempt to be more descriptive, but that doesn't quite feel right either; naming is open to suggestions.
As for the enum, I've changed it to a list. I initially created the enum as a skeleton structure since we were handling very limited cases, but I agree that parsing a list will be needed in the future. I've updated that in #2259.
Agreed on better regrouping of actions.
PR that builds on this: #2259
Today the MCO defaults to rebooting the node in order to successfully
apply changes to the node. This is the safest way to apply a change on
the node, as we don't have to worry about things like which changes are
safe to apply without a reboot.
In certain environments rebooting is expensive and, due to complex
hardware setups, can sometimes cause problems where the node won't boot
back up.
This will help avoid rebooting nodes in certain cases where the
MCO thinks it is safe to do so.
See - openshift/enhancements#159
Note: In the current state, this will default to rebooting the node. When the MCO learns which rebootAction to take based on the diff between the old and new rendered MachineConfigs, we can add the logic.
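To make that note concrete, the placeholder decision could be as simple as the following sketch (the calcRebootAction helper is hypothetical; only the rebootAction constants appear in the PR discussion):

// Until the MCO can derive a safe action from the diff between the old and
// new rendered MachineConfigs, every change resolves to a full reboot.
func calcRebootAction(oldConfig, newConfig *mcfgv1.MachineConfig) rebootAction {
	// TODO: inspect the diff and return rebootActionReloadCrio (or none)
	// when the change is known to be safe to apply without a reboot.
	return rebootActionReboot
}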