daemon: skip or perform node reboot based on rebootAction #2254
@@ -93,9 +93,56 @@ func getNodeRef(node *corev1.Node) *corev1.ObjectReference {
 	}
 }
 
-// finalizeAndReboot is the last step in an update(), and it can also
-// be called as a special case for the "bootstrap pivot".
-func (dn *Daemon) finalizeAndReboot(newConfig *mcfgv1.MachineConfig) (retErr error) {
+func reloadCrioConfig() error {
+	_, err := runGetOut("pkill", "-HUP", "crio")

Review comment: It would be nice if we could use …
Reply: Agreed, but I think for this release we are ok with this operation as it stands.
Reply: As Jerry said, we can, but we are trying to keep the impact as minimal as possible. Since crio supports live configuration reload for the container registry, we are making use of it in our use case.

+
+	return err
+}
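
For readers following along: CRI-O re-reads parts of its configuration, including the registries configuration mentioned in the thread above, when it receives SIGHUP, which is what the pkill -HUP crio call relies on. The helper below is only a hedged sketch of what a runGetOut-style wrapper might look like, assuming it simply shells out and returns the command's combined output; it is not the daemon's actual implementation.

    package daemon

    import (
        "fmt"
        "os/exec"
    )

    // runGetOut is a hypothetical stand-in for the daemon helper of the same
    // name: run a command, capture its combined output, and wrap any failure
    // with that output so the caller can log it.
    func runGetOut(command string, args ...string) ([]byte, error) {
        out, err := exec.Command(command, args...).CombinedOutput()
        if err != nil {
            return out, fmt.Errorf("running %s %v failed: %q: %w", command, args, string(out), err)
        }
        return out, nil
    }

With a wrapper like this, reloadCrioConfig stays a one-liner, and a pkill failure (for example, no running crio process to signal) surfaces as an error that feeds the reboot fallback below.
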
+
+// performRebootAction takes action based on the rebootAction that has been requested.
+// For non-reboot actions, it applies the configuration and updates the node's config and state.
+// At the end, it uncordons the node so workloads can be scheduled.
+// If an error occurs at any point, we reboot the node so that it has the correct configuration.
+func (dn *Daemon) performRebootAction(action rebootAction, configName string) error {

Review comment: I would rename this so that it doesn't include "reboot" in the name (since we might not be rebooting). I'm also not sure that an enum is going to be expressive enough (feel free to tell me to stop over-complicating things and to come back in six months). I had envisioned this being expressed as a list of actions that needed to be taken (e.g. …).
Reply: I've changed it to "PostConfigChangeAction" in an attempt to be more descriptive, but that doesn't quite feel right either. Naming open to suggestions. As for the enum, I've changed it to a list. I initially created the enum as a skeleton structure since we were handling very limited cases, but I agree that parsing a list will be needed in the future. I've updated that at: #2259
Reply: Agree with better regrouping of actions.

+	switch action {
+	case rebootActionNone:
+		dn.logSystem("Node has Desired Config %s, skipping reboot", configName)
+	case rebootActionReloadCrio:
+		if err := reloadCrioConfig(); err != nil {
+			dn.logSystem("Reloading crio configuration failed, rebooting: %v", err)
+			dn.reboot(fmt.Sprintf("Node will reboot into config %s", configName))

Review comment: One thing I'm debating is whether we want to hard-reboot here or block + degrade. It seems to me that if we are explicitly supporting "rebootless updates" we should not reboot unexpectedly because of a failure. We should probably do e.g. what we do when a file write fails. This might also allow us not to drain (although recovery is another thing we need to consider). What do you think @sinnykumari?
Reply: If you degrade, the customer can't really recover from it and do something to make it work on a retry... so degrading might mean that it can't proceed. Not sure how that cuts, but since it's an optimization it feels like rebooting should be the fallback.
Reply: (At least to me, rebootless upgrades aren't a promise; they're a best-effort optimization in certain cases where it can be safely done.)
Reply: We can, but isn't it better to reboot (crio will restart on reboot) and fix the problem by itself rather than degrade? I think we are not promising rebootless updates; rather it is best effort, i.e. the MCO tries its best not to reboot for certain config changes, but if an error occurs it reboots. As this is a new feature, we can frame the documentation accordingly.
Reply: We can definitely become smarter in future.
Reply: ha jinx! sinny and I both wrote our responses at the same time 😆

+		}
+		dn.logSystem("crio config reloaded successfully! Desired config %s has been applied, skipping reboot", configName)
+	default:
+		// Defaults to rebooting the node
+		dn.logSystem("Rebooting node")
+		return dn.reboot(fmt.Sprintf("Node will reboot into config %s", configName))
+	}
+
+	// We are here, which means reboot was not needed to apply the configuration.
+
+	// Get current state of node, in case of an error reboot
+	state, err := dn.getStateAndConfigs(configName)
+	if err != nil {
+		glog.Errorf("Error processing state and configs, rebooting: %v", err)
+		return dn.reboot(fmt.Sprintf("Node will reboot into config %s", configName))
+	}
+
+	var inDesiredConfig bool
+	if inDesiredConfig, err = dn.updateConfigAndState(state); err != nil {
+		glog.Errorf("Setting node's state to Done failed, rebooting: %v", err)
+		return dn.reboot(fmt.Sprintf("Node will reboot into config %s", configName))
+	}
+	if inDesiredConfig {
+		return nil
+	}
+
+	// currentConfig != desiredConfig, kick off an update
+	return dn.triggerUpdateWithMachineConfig(state.currentConfig, state.desiredConfig)
+}
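
On the naming/enum thread above: the reviewer's suggestion is to model the outcome as a list of post-config-change actions rather than a single value, which is where the follow-up referenced in the thread (#2259) reportedly goes. The sketch below is only an illustration of what consuming such a list could look like; every identifier in it is invented for this example and is not taken from the PR.

    package daemon

    import "fmt"

    // postConfigChangeAction and the apply loop below are a hypothetical sketch
    // of the "list of actions" shape discussed in the review thread.
    type postConfigChangeAction int

    const (
        actionNone postConfigChangeAction = iota
        actionReloadCrio
        actionReboot
    )

    // applyPostConfigChangeActions walks the action list in order and reports
    // the first failure; the caller decides whether a failure falls back to a
    // reboot, mirroring the best-effort approach taken in this PR.
    func applyPostConfigChangeActions(actions []postConfigChangeAction, reloadCrio func() error, reboot func(string) error, configName string) error {
        for _, action := range actions {
            switch action {
            case actionNone:
                // Nothing to do for this entry.
            case actionReloadCrio:
                if err := reloadCrio(); err != nil {
                    return fmt.Errorf("reloading crio for config %s: %w", configName, err)
                }
            case actionReboot:
                return reboot(fmt.Sprintf("Node will reboot into config %s", configName))
            }
        }
        return nil
    }
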
+
+// finalizeBeforeReboot is the last step in an update(), after which we take the appropriate rebootAction.
+// It can also be called as a special case for the "bootstrap pivot".
+func (dn *Daemon) finalizeBeforeReboot(newConfig *mcfgv1.MachineConfig) (retErr error) {
 	if out, err := dn.storePendingState(newConfig, 1); err != nil {
 		return errors.Wrapf(err, "failed to log pending config: %s", string(out))
 	}
@@ -114,8 +161,7 @@ func (dn *Daemon) finalizeAndReboot(newConfig *mcfgv1.MachineConfig) (retErr err
 		dn.recorder.Eventf(getNodeRef(dn.node), corev1.EventTypeNormal, "PendingConfig", fmt.Sprintf("Written pending config %s", newConfig.GetName()))
 	}
 
-	// reboot. this function shouldn't actually return.
-	return dn.reboot(fmt.Sprintf("Node will reboot into config %v", newConfig.GetName()))
+	return nil
 }
 
 func (dn *Daemon) drain() error {

@@ -515,7 +561,19 @@ func (dn *Daemon) update(oldConfig, newConfig *mcfgv1.MachineConfig) (retErr err
 		glog.Info("Updated kernel tuning arguments")
 	}
 
-	return dn.finalizeAndReboot(newConfig)
+	if err := dn.finalizeBeforeReboot(newConfig); err != nil {
+		return err
+	}
+
+	// TODO: Need Jerry's work to determine exact reboot action
+	var action rebootAction
+	action = dn.getRebootAction()
+	return dn.performRebootAction(action, newConfig.GetName())
 }
+
+func (dn *Daemon) getRebootAction() rebootAction {
+	// Until we have logic, always reboot
+	return rebootActionReboot
+}
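
The TODO above leaves getRebootAction as a stub that always reboots. Purely as a hedged illustration of where the decision logic could go once the pending work lands (none of these fields or helpers exist in the PR), the eventual logic might compare the current and desired configs and only downgrade the action when the change is known to be applyable live, such as a registries-only change:

    package daemon

    // The rebootAction constants below mirror, by name only, the enum this PR
    // introduces; their real definition lives in the daemon package.
    type rebootAction int

    const (
        rebootActionNone rebootAction = iota
        rebootActionReloadCrio
        rebootActionReboot
    )

    // configDiff is a hypothetical summary of what changed between the current
    // and desired MachineConfig; the real daemon would derive this from the
    // rendered configs.
    type configDiff struct {
        osOrKernelChanged bool
        registriesChanged bool
        otherFilesChanged bool
    }

    // chooseRebootAction sketches the future decision: anything we cannot apply
    // live still reboots, a registries-only change maps to a crio reload, and no
    // effective change at all skips the reboot entirely.
    func chooseRebootAction(diff configDiff) rebootAction {
        switch {
        case diff.osOrKernelChanged || diff.otherFilesChanged:
            return rebootActionReboot
        case diff.registriesChanged:
            return rebootActionReloadCrio
        default:
            return rebootActionNone
        }
    }
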
 
 // removeRollback removes the rpm-ostree rollback deployment. It

Review comment: is this no longer needed?
Reply: good catch, seems like it got lost during the function split up.