Fix maxUnavailable nodes calculation from percentages to only consider matching nodes #719
Conversation
/hold
Force-pushed from bebf389 to f2b73dd
Looks good, let's try to remove duplication and encapsulate things properly.
if err != nil {
	return 0, err
}
func (r *NodeNetworkConfigurationPolicyReconciler) maxUnavailableNodeCount(policy *nmstatev1beta1.NodeNetworkConfigurationPolicy, matchingNodes int) (int, error) {
Remove the matchingNodes argument and call enactmentCount.Matching() inside; in the end that is what we want to do, so there is no need to parameterize it. Also remove the (r *NodeNetworkConfigurationPolicyReconciler) receiver since it is not used. We can even move this under ./pkg and make it public so the integration tests can use it.
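A rough sketch of the shape this suggestion points at, purely for illustration — the pkg/node package path, the MatchingCounter interface, and the exported name are assumptions, not the repository's actual API:

```go
// Hypothetical shape of the refactor suggested above: exported, with no
// reconciler receiver, so the e2e tests can reuse the same logic.
package node

import (
	"k8s.io/apimachinery/pkg/util/intstr"
)

// MatchingCounter stands in for whatever exposes the number of nodes the
// policy matched (enactmentCount.Matching() in the review comment).
type MatchingCounter interface {
	Matching() int
}

// MaxUnavailableNodeCount scales an absolute or percentage maxUnavailable
// value against the matching node count rather than the whole cluster.
func MaxUnavailableNodeCount(maxUnavailable intstr.IntOrString, enactmentCount MatchingCounter) (int, error) {
	// Round up so a small percentage never resolves to zero parallel nodes.
	return intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, enactmentCount.Matching(), true)
}
```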
test/e2e/handler/utils.go
Outdated
@@ -37,7 +37,7 @@ const TestPolicy = "test-policy"
 var (
 	bridgeCounter = 0
 	bondConunter = 0
-	maxUnavailable = environment.GetIntVarWithDefault("NMSTATE_MAX_UNAVAILABLE", 1)
+	maxUnavailable = environment.GetVarWithDefault("NMSTATE_MAX_UNAVAILABLE", "50%")
Let's make the default value public somewhere under pkg so we can use it in the tests too.
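For instance, a shared constant along these lines (name and package path are assumptions) would let the test default above read environment.GetVarWithDefault("NMSTATE_MAX_UNAVAILABLE", node.DefaultMaxUnavailable) instead of repeating the literal:

```go
// Hypothetical shared default under pkg, referenced by both the handler and
// the e2e tests so the two values cannot drift apart.
package node

// DefaultMaxUnavailable is the value assumed when spec.maxUnavailable is unset.
const DefaultMaxUnavailable = "50%"
```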
test/e2e/handler/utils.go
Outdated
@@ -539,3 +539,15 @@ func skipIfNotKubernetes() {
 		Skip("Tutorials use interface naming that is available only on Kubernetes providers")
 	}
 }
+
+func maxUnavailableNodeCount() int {
As said, let's try to use the same logic from the product by moving the function under pkg and using it in the tests.
Force-pushed from f2b73dd to 28be74c
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: qinqon

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Instead of calculating the number of maxUnavailable nodes from all nmstate-enabled nodes, use matching nodes only.

Signed-off-by: Radim Hrazdil <[email protected]>
Force-pushed from 28be74c to b2f374b
@rhrazdil: The following test failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
/hold cancel

This no longer depends on the lanes aggregation.
Change how the maxUnavailable node count is computed from percentages
In the original commit that introduced the maxUnavailable field, the number of nodes that can run in parallel is computed from the entire cluster node count when a percentage value is specified. This commit changes that to use only the nodes matched by the policy's nodeSelector. So if a nodeSelector selects half of the cluster nodes, then specifying a maxUnavailable of 50% will only allow half of the nodes with the matching label to run in parallel.
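A minimal sketch of the difference, assuming the percentage is resolved with the intstr helpers from k8s.io/apimachinery (the node counts are made-up example figures):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromString("50%")

	clusterNodes := 8  // every nmstate-enabled node in the cluster
	matchingNodes := 4 // nodes selected by the policy's nodeSelector

	// Old behaviour: 50% of the whole cluster.
	before, _ := intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, clusterNodes, true)
	// New behaviour: 50% of the matching nodes only.
	after, _ := intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, matchingNodes, true)

	fmt.Println(before, after) // prints: 4 2
}
```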
Is this a BUG FIX or a FEATURE ?:
/kind bug
What this PR does / why we need it:
Special notes for your reviewer:
Release note: