OCPVE-635: fix: allow multi-node readiness with master nodes with NoSchedule Taints #383
Conversation
@jakobmoellerdev: This pull request references OCPVE-635 which is a valid jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Codecov Report

@@            Coverage Diff             @@
##             main     #383       +/-   ##
===========================================
+ Coverage   16.59%   56.91%   +40.31%
===========================================
  Files          24       25        +1
  Lines        2061     2091       +30
===========================================
+ Hits          342     1190      +848
+ Misses       1693      819      -874
- Partials       26       82       +56
/hold still verifying fix
Force-pushed from 40d3c6d to eddb374
/unhold manually verified
Force-pushed from eddb374 to 2aa8726
/hold
/unhold
Force-pushed from 2aa8726 to 8c1ef88
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jakobmoellerdev, suleymanakbas91. The full list of commands accepted by this bot can be found here. The pull request process is described here.
@jakobmoellerdev: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/cherry-pick release-4.14
@jakobmoellerdev: new pull request created: #388 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Description of problem:
When starting a cluster with multiple nodes and attaching multiple devices to them, the cluster does not become ready.
In this case, each worker node has 2 loop devices with 3 GB of block storage attached, and all VolumeGroupNodeStatus objects report as ready.
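As a rough sketch of the setup described above (file paths are illustrative, not taken from the report), two 3 GB loop-backed block devices can be created on each worker node like so:

```shell
# Create two 3 GB backing files and attach them as loop devices.
# Run on each worker node; losetup requires root.
truncate -s 3G /var/lib/test-disk1.img
truncate -s 3G /var/lib/test-disk2.img
sudo losetup -f /var/lib/test-disk1.img
sudo losetup -f /var/lib/test-disk2.img
sudo losetup -a   # list the attached loop devices
```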
Version-Release number of selected component (if applicable):
4.13-4.15
How reproducible:
100%
Steps to Reproduce:
Actual results:
The cluster does not become ready.
When injecting a log message into the readiness check, one can see that the failure comes from the VG comparison:
{"level":"info","ts":"2023-08-16T08:24:40Z","logger":"lvmcluster-controller","msg":"Verifying readiness","Request.Name":"my-lvmcluster","Request.Namespace":"openshift-storage","expectedVGCount":6,"readyVGCount":3}
The expectedVGCount is 6 while readyVGCount is only 3: the expected count also includes the master nodes, which carry NoSchedule taints and therefore never report a ready volume group.
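A minimal sketch of the mismatch (simplified types, not the operator's actual code): if the expected count iterates over every node while NoSchedule-tainted masters never run the VG manager, the two counts diverge. Skipping tainted nodes when computing the expected count, as this PR does, aligns them:

```go
package main

import "fmt"

// Taint and Node are simplified stand-ins for the corresponding
// Kubernetes API types.
type Taint struct {
	Key    string
	Effect string // e.g. "NoSchedule"
}

type Node struct {
	Name   string
	Taints []Taint
}

// hasNoScheduleTaint reports whether any taint on the node has the
// NoSchedule effect, which prevents the VG manager pods from landing there.
func hasNoScheduleTaint(n Node) bool {
	for _, t := range n.Taints {
		if t.Effect == "NoSchedule" {
			return true
		}
	}
	return false
}

// expectedVGCount counts the volume groups expected per node, skipping
// nodes the daemon pods cannot schedule onto.
func expectedVGCount(nodes []Node, vgsPerNode int) int {
	count := 0
	for _, n := range nodes {
		if hasNoScheduleTaint(n) {
			continue
		}
		count += vgsPerNode
	}
	return count
}

func main() {
	master := Taint{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}
	nodes := []Node{
		{Name: "master-0", Taints: []Taint{master}},
		{Name: "master-1", Taints: []Taint{master}},
		{Name: "master-2", Taints: []Taint{master}},
		{Name: "worker-0"},
		{Name: "worker-1"},
		{Name: "worker-2"},
	}
	// Counting every node would yield 6 (the buggy expected count);
	// filtering the tainted masters yields 3, matching readyVGCount.
	fmt.Println(expectedVGCount(nodes, 1)) // prints 3
}
```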
Expected results:
The cluster becomes ready and the VG counts match.
Additional Notes:
This cannot be automatically tested until multi-node tests exist; once they do, it should be covered automatically.