
OCPBUGS-13558: fix: ensure LVMVolumeGroupNodeStatus is removed by dedicated cleanup controller in case of multi-node #372

Merged
merged 1 commit into openshift:main on Aug 14, 2023

Conversation

jakobmoellerdev
Contributor

@jakobmoellerdev jakobmoellerdev commented Jul 31, 2023

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller (a minimal sketch of this approach follows below).

The main integration test was changed to remove the node (which triggers removal of the Status object) instead of removing the object directly, which automatically covers this use case.
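For illustration, here is a minimal sketch of the controller-based approach (option 2) using controller-runtime. The type name, finalizer string, API import path, and namespace handling are assumptions made for this sketch, not necessarily the merged implementation:

```go
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	lvmv1alpha1 "github.com/openshift/lvm-operator/api/v1alpha1"
)

// nodeCleanupFinalizer is a hypothetical finalizer name used only for this sketch.
const nodeCleanupFinalizer = "lvm.openshift.io/node-removal-hook"

// NodeRemovalController watches Nodes and removes the LVMVolumeGroupNodeStatus
// that belongs to a Node once that Node is being deleted.
type NodeRemovalController struct {
	client.Client
	Namespace string // namespace in which the LVMVolumeGroupNodeStatus objects live
}

func (r *NodeRemovalController) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	node := &corev1.Node{}
	if err := r.Get(ctx, req.NamespacedName, node); err != nil {
		// If the Node is already gone there is nothing left to protect or clean up.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	if node.DeletionTimestamp.IsZero() {
		// Node is alive: make sure the finalizer protects its deletion.
		if controllerutil.AddFinalizer(node, nodeCleanupFinalizer) {
			return ctrl.Result{}, r.Update(ctx, node)
		}
		return ctrl.Result{}, nil
	}

	// Node is being deleted: remove the NodeStatus that carries the same name.
	nodeStatus := &lvmv1alpha1.LVMVolumeGroupNodeStatus{}
	nodeStatus.Name = node.Name
	nodeStatus.Namespace = r.Namespace
	if err := r.Delete(ctx, nodeStatus); err != nil && !apierrors.IsNotFound(err) {
		return ctrl.Result{}, err
	}

	// Cleanup done: release the finalizer so the Node deletion can complete.
	if controllerutil.RemoveFinalizer(node, nodeCleanupFinalizer) {
		return ctrl.Result{}, r.Update(ctx, node)
	}
	return ctrl.Result{}, nil
}

func (r *NodeRemovalController) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).For(&corev1.Node{}).Complete(r)
}
```

In SNO this controller can simply not be registered with the manager, matching the note above that the loop only needs to run outside of SNO.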

@openshift-ci
Contributor

openshift-ci bot commented Jul 31, 2023

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 31, 2023
@jakobmoellerdev jakobmoellerdev changed the title from "fix: ensure LVMVolumeGroupNodeStatus is removed by dedicated cleanup controller in case of multi-node" to "OCPBUGS-13558: fix: ensure LVMVolumeGroupNodeStatus is removed by dedicated cleanup controller in case of multi-node" Jul 31, 2023
@openshift-ci openshift-ci bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jul 31, 2023
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jul 31, 2023
@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Currently the PR uses a new controller, but I am assuming I could also introduce behavior based on the NodeStatus instead. This would drop the NodeStatus in case the Node no longer exists. The concept would be similar, but we wouldn't need another controller.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. label Jul 31, 2023
@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller (the flag wiring is sketched after this comment). The functionality and logic of the check are still the same.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
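For context on the --zap-devel flag mentioned above: the flag is registered by controller-runtime's zap helper. A minimal sketch of the wiring follows; everything beyond the flag registration is illustrative and not the operator's actual main.go:

```go
package main

import (
	"flag"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// BindFlags registers --zap-devel, --zap-log-level, --zap-encoder, and friends.
	opts := zap.Options{}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// With --zap-devel set, development mode is enabled (debug level, console
	// encoder), which makes the additional debug logs in the status check visible.
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
}
```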

@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller. The functionality and logic of the check are still the same.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can remove the created VGs from the node easily with that hook.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller. The functionality and logic of the check are still the same.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller. The functionality and logic of the check are still the same.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jakobmoellerdev
Contributor Author

/test all

@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller. The functionality and logic of the check are still the same.

TODO: an integration test mimicking the nodes (if that's even possible with envtest)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jakobmoellerdev jakobmoellerdev marked this pull request as ready for review August 1, 2023 15:18
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 1, 2023
@openshift-ci openshift-ci bot requested review from jerpeter1 and qJkee August 1, 2023 15:18
@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller.

During my investigation I had some trouble understanding the status update because the "vgCount" was aggregated for the entire LVMCluster. I rewrote it so the check is done per deviceClass (which can later be extended) and introduced some debug logs that can be activated by passing the --zap-devel flag to the controller. The functionality and logic of the check are still the same.

The main integration test was changed to remove the node (which triggers removal of the Status object) instead of removing the object directly, which automatically covers this use case.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@codecov-commenter

codecov-commenter commented Aug 1, 2023

Codecov Report

Merging #372 (59b26bc) into main (a962b90) will increase coverage by 40.42%.
Report is 8 commits behind head on main.
The diff coverage is 64.93%.

Additional details and impacted files


@@             Coverage Diff             @@
##             main     #372       +/-   ##
===========================================
+ Coverage   16.59%   57.01%   +40.42%     
===========================================
  Files          24       28        +4     
  Lines        2061     2138       +77     
===========================================
+ Hits          342     1219      +877     
+ Misses       1693      828      -865     
- Partials       26       91       +65     
Files Changed Coverage Δ
controllers/node_removal_controller.go 56.00% <56.00%> (ø)
controllers/node_removal_controller_watches.go 61.53% <61.53%> (ø)
pkg/cluster/leaderelection.go 66.66% <66.66%> (ø)
pkg/cluster/sno.go 72.72% <72.72%> (ø)
controllers/lvmcluster_controller_watches.go 91.42% <100.00%> (+91.42%) ⬆️

... and 10 files with indirect coverage changes

@jakobmoellerdev
Contributor Author

/test all

@jakobmoellerdev
Contributor Author

/hold as testing revealed that I potentially introduced a deletion issue

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 2, 2023
@jakobmoellerdev
Contributor Author

jakobmoellerdev commented Aug 2, 2023

It seems there is an edge case where the VG is sometimes not removed. I'm not sure why, but it is unrelated to this PR. When I can reproduce it, I will open a separate issue.

main.go (review comment, outdated and resolved)
@jakobmoellerdev
Contributor Author

/hold

@jakobmoellerdev
Contributor Author

/jira refresh

@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 9, 2023
@openshift-ci-robot

@jakobmoellerdev: This pull request references Jira Issue OCPBUGS-13558, which is invalid:

  • expected the bug to target the "4.14.0" version, but it targets "4.15.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller.

The main integration test was changed to remove the node (which triggers removal of the Status object) instead of removing the object directly, which automatically covers this use case.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jakobmoellerdev jakobmoellerdev requested a review from qJkee August 9, 2023 20:10
@qJkee
Contributor

qJkee commented Aug 9, 2023

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 9, 2023
@jakobmoellerdev
Contributor Author

/cc @suleymanakbas91 to get another opinion on that controller approach.

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Aug 11, 2023
@jakobmoellerdev
Contributor Author

/test all

@jakobmoellerdev
Contributor Author

/test verify

@suleymanakbas91
Contributor

/lgtm
/approve

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 14, 2023
@openshift-ci
Contributor

openshift-ci bot commented Aug 14, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jakobmoellerdev, suleymanakbas91

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 14, 2023
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Aug 14, 2023
@suleymanakbas91
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 14, 2023
@openshift-ci
Contributor

openshift-ci bot commented Aug 14, 2023

@jakobmoellerdev: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@suleymanakbas91 suleymanakbas91 merged commit bb74044 into openshift:main Aug 14, 2023
@openshift-ci-robot

@jakobmoellerdev: Jira Issue OCPBUGS-13558: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-13558 has been moved to the MODIFIED state.

In response to this:

In case the operator gets deployed in a multi-node environment, the LVMVolumeGroupNodeStatus is orphaned when a node is removed. This is unintended. We need logic that successfully removes this NodeStatus whenever a Node is removed.

Unfortunately, since the status update of LVMCluster currently assumes the presence of the LVMVolumeGroupNodeStatus, the only ways I found to fix this issue are to:

  1. delete the LVMVolumeGroupNodeStatus in the status update of LVMCluster in case the node no longer exists and the status check finds an orphaned status without a node. This is potentially expensive, since LVMCluster now has to compare all nodes and check whether the LVMVolumeGroupNodeStatus exists. The removal is also delayed relative to the Node deletion.
  2. delete the LVMVolumeGroupNodeStatus in a new reconcile loop that listens for node changes and uses a finalizer to protect the node deletion until the LVMVolumeGroupNodeStatus is cleaned up. This is the "clean" solution for handling deletion; however, it should be noted that this reconcile loop only needs to run outside of SNO, and in SNO it can be disabled. The danger here is that if the finalizer is not removed properly, node removal can fail. The upside is that, in theory, we can later remove the created VGs from the node easily with that hook if necessary.

Currently the PR uses a new controller.

The main integration test was changed to remove the node (which triggers removal of the Status object) instead of removing the object directly, which automatically covers this use case.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
