report rpm-ostree overrides #945
Comments
Super true; I always run `oc get nodes -o wide` to see whether I'm on RHCOS or RHEL.
Or, another important question here: should we just report this, or go degraded? Today one can manually override the booted deployment entirely via e.g.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Today if one runs e.g.
rpm-ostree override replace ./kernel-blah.x86_64.rpm
on a node (as our hacking doc mentions), the MCO is totally unaware of this, and OS upgrades will preserve that override, which people may not always want. Further, we should at least roll the fact that there are overrides up somewhere, into at least the pool status.
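As a sketch of how a controller could surface such overrides, it might parse `rpm-ostree status --json` from each node and report any base-package replacements on the booted deployment. The sample JSON and its field names (`booted`, `base-local-replacements`) below are assumptions about the shape of rpm-ostree's output, not a confirmed schema:

```python
import json

# Hypothetical, abbreviated `rpm-ostree status --json` output on a node where
# the kernel was replaced via `rpm-ostree override replace`. Field names are
# assumptions, not a confirmed rpm-ostree schema.
SAMPLE_STATUS = """
{
  "deployments": [
    {
      "booted": true,
      "base-local-replacements": [["kernel-blah.x86_64", "kernel-orig.x86_64"]],
      "requested-base-removals": []
    }
  ]
}
"""

def booted_overrides(status_json: str) -> list:
    """Return the base-package replacements recorded on the booted deployment."""
    status = json.loads(status_json)
    booted = next(d for d in status["deployments"] if d.get("booted"))
    return booted.get("base-local-replacements", [])

overrides = booted_overrides(SAMPLE_STATUS)
if overrides:
    print(f"booted deployment has {len(overrides)} base package override(s)")
```

A boolean derived from this could then be rolled up into the pool status, or drive a degraded condition, per the discussion above.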
(Also tangentially related to this topic, I think MachineConfigPool should have a status flag set if any nodes aren't RHCOS.)