make node lease renew interval more heuristic #80173
Conversation
Hi @gaorong. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cc @wojtek-t
@wojtek-t can you confirm the interaction between the NodeStatusUpdateFrequency/NodeStatusReportFrequency/NodeLeaseDurationSeconds settings in the kubelet and how the node lease feature interacts with them?
Described that in the issue: #80172. I think the cleanest way would be to add a field to KubeletConfig to allow users to configure it.
@liggitt but that can't be cherry-picked back IIUC (because it introduces new fields to the API type that KubeletConfig is). So maybe we actually should:
I volunteer to implement this feature, if wangzhen127@ has no bandwidth for this.
pkg/kubelet/nodelease/controller.go
var leaseClient coordclientset.LeaseInterface
if client != nil {
	leaseClient = client.CoordinationV1().Leases(corev1.NamespaceNodeLease)
}
var renewInterval time.Duration
is there ever a valid case for renewInterval being >= leaseDurationSeconds?
protecting against a mismatch between leaseDurationSeconds and nodeStatusUpdateFrequency by forcing renewInterval to end at least X seconds before the lease expires, or to be < X% of leaseDurationSeconds, would make sense to me
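A minimal sketch of the kind of guard being suggested here; the helper name and the 25% ratio are illustrative assumptions, not actual kubelet code:

package main

import (
	"fmt"
	"time"
)

// clampRenewInterval is a hypothetical helper: it caps the renew interval at a
// fraction of the lease duration so a renewal can never arrive only after the
// lease has already expired.
func clampRenewInterval(renewInterval time.Duration, leaseDurationSeconds int32) time.Duration {
	// Assumed ratio: renew at least every 25% of the lease duration.
	maxInterval := time.Duration(leaseDurationSeconds) * time.Second / 4
	if renewInterval > maxInterval {
		return maxInterval
	}
	return renewInterval
}

func main() {
	// With a 40s lease, a 10s renew interval is left alone; a 60s one is clamped to 10s.
	fmt.Println(clampRenewInterval(10*time.Second, 40)) // 10s
	fmt.Println(clampRenewInterval(60*time.Second, 40)) // 10s
}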
Personally, I haven't found such a case.
IIUC, leaseDurationSeconds seems to be a redundant field and is not used by node-health-related work, at least for now.
Since it may be used in the future, I agree we should have some restriction to keep renewInterval smaller than leaseDurationSeconds.
forcing renewInterval to be at least X seconds before or < X% of leaseDurationSeconds
I found it difficult to generate a smart enough renewInterval value based on leaseDurationSeconds, because different users may have different leaseDurationSeconds values.
As leaseDurationSeconds is not used by node-health-related work for now, we can ignore that restriction; only forcing this value based on nodeStatusUpdateFrequency is enough, so we can cherry-pick this PR to v1.14 and v1.15 safely.
After this fix, we should add a new field to KubeletConfig at head and let users customize the renewInterval value based on their own circumstances. Then we won't need to worry about this value anymore, other than writing some documentation.
@wojtek-t what do you think about my proposal?
is there ever a valid case for renewInterval being >= leaseDurationSeconds?
No - I don't see any use case for that.
But yeah - as @gaorong pointed out, leaseDurationSeconds is currently only propagated as part of the Lease object and not really used in any meaningful way.
I'm personally not 100% convinced that connecting "renewInterval" with "nodeStatusUpdateFrequency" is exactly what we want. In the ideal world, this would be connected with "node-monitor-grace-period", but that's not even a parameter to the kubelet, so we don't have any way to reasonably validate it.
So as a workaround, I'm temporarily fine with the logic you proposed in the PR, if we add a very explicit comment on why we're doing that.
So let's change it to something like:
renewInterval := defaultRenewInterval
// Users are able to decrease the timeout after which nodes are being
// marked as Ready: Unknown by NodeLifecycleController to values
// smaller than defaultRenewInterval. Until the knob to configure
// lease renew interval is exposed to user, we temporarily decrease
// renewInterval based on the NodeStatusUpdateFrequency.
if renewInterval > nodeStatusUpdateFrequency {
renewInterval = nodeStatusUpdateFrequency
}
In the meantime if you could also work on exposing it via kubelet config, it would be great.
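To illustrate the effect of the proposed clamp, here is a small self-contained sketch using the flag values discussed in this thread (5s update frequency against the hardcoded 10s default); the variable names are assumptions for the example, not the kubelet's actual code:

package main

import (
	"fmt"
	"time"
)

func main() {
	const defaultRenewInterval = 10 * time.Second // the hardcoded default discussed above
	nodeStatusUpdateFrequency := 5 * time.Second  // --node-status-update-frequency used in this cluster

	renewInterval := defaultRenewInterval
	// Same logic as the snippet above: follow the user's update frequency when it
	// is shorter than the default, so lease heartbeats keep up with what the
	// NodeLifecycleController has been tuned to expect.
	if renewInterval > nodeStatusUpdateFrequency {
		renewInterval = nodeStatusUpdateFrequency
	}
	fmt.Println(renewInterval) // 5s
}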
Changed as per the comments.
I'm going to work on exposing it via kubelet config.
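A rough sketch of what exposing this via kubelet config could look like; the trimmed-down KubeletConfiguration struct and the NodeLeaseRenewInterval field name are hypothetical illustrations, not the actual API:

package main

import (
	"fmt"
	"time"
)

// KubeletConfiguration here is a stand-in for the real config type, shown only
// to illustrate where such a knob could live.
type KubeletConfiguration struct {
	NodeStatusUpdateFrequency time.Duration
	NodeLeaseDurationSeconds  int32
	NodeLeaseRenewInterval    time.Duration // hypothetical new field; 0 means "derive it automatically"
}

func renewIntervalFor(cfg KubeletConfiguration) time.Duration {
	if cfg.NodeLeaseRenewInterval > 0 {
		return cfg.NodeLeaseRenewInterval
	}
	// Fall back to the workaround from this PR: never renew less often than
	// the node status update frequency.
	interval := 10 * time.Second
	if interval > cfg.NodeStatusUpdateFrequency {
		interval = cfg.NodeStatusUpdateFrequency
	}
	return interval
}

func main() {
	fmt.Println(renewIntervalFor(KubeletConfiguration{NodeStatusUpdateFrequency: 5 * time.Second, NodeLeaseDurationSeconds: 40})) // 5s
	fmt.Println(renewIntervalFor(KubeletConfiguration{NodeLeaseRenewInterval: 4 * time.Second}))                                  // 4s
}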
Force-pushed from f09a16c to b5dac1e
Force-pushed from b5dac1e to 95f3e64
/ok-to-test
Holding to let Jordan also take a look if he wants.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: gaorong, wojtek-t
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/hold cancel
/retest
As all 'parent' PRs of a cherry-pick PR must have one of the "release-note" or "release-note-action-required" labels, I changed the release note and cherry-picked it to the previous releases.
…3-upstream-release-1.14 Automated cherry pick of #80173: make node lease renew interval more heuristic
…3-upstream-release-1.15 Automated cherry pick of #80173: make node lease renew interval more heuristic
What type of PR is this?
/kind bug
What this PR does / why we need it:
The node lease feature became beta and is enabled by default in v1.14. After upgrading our cluster to v1.14, we found that some nodes occasionally became NotReady and then became Ready again after a few seconds.
In order to have quick failure detection and recovery, our kubelet sets --node-status-update-frequency=5s and the controller-manager sets --node-monitor-grace-period=10s.
With the node lease feature, the kubelet no longer reports its status as frequently as specified by --node-status-update-frequency; instead it uses the node lease mechanism to report that the node is alive.
The default interval at which the node lease is renewed is hardcoded to 10s, but in practice it can't be exactly 10s and may be a little more than 10s because of network latency. When the controller-manager doesn't receive a message from the kubelet within 10s, as in our case, it marks the node as unreachable and NotReady, but in the next reconcile cycle it receives the lease update and marks the node as reachable again, so the node status flaps between Ready and NotReady.
Hardcoding the kubelet's lease renewal interval to 10s is a breaking change for clusters like ours where the node status update frequency is set to a smaller value: the node lease should align with the kubelet's previous behavior and renew as frequently as --node-status-update-frequency to report that the node is alive; otherwise the controller-manager will mark the node as unreachable.
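A back-of-the-envelope illustration of the flapping described above; the half-second latency figure is an assumption, not a measured value:

package main

import (
	"fmt"
	"time"
)

func main() {
	nodeMonitorGracePeriod := 10 * time.Second // controller-manager --node-monitor-grace-period
	hardcodedRenewInterval := 10 * time.Second // lease renew interval before this PR
	assumedLatency := 500 * time.Millisecond   // hypothetical network/apiserver delay

	// The controller-manager observes heartbeats spaced by the renew interval plus latency.
	observedGap := hardcodedRenewInterval + assumedLatency
	fmt.Println(observedGap > nodeMonitorGracePeriod) // true: the node is briefly marked NotReady

	// With this PR, the renew interval follows --node-status-update-frequency (5s here),
	// leaving comfortable headroom inside the grace period.
	fixedGap := 5*time.Second + assumedLatency
	fmt.Println(fixedGap > nodeMonitorGracePeriod) // false: no flapping
}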
Which issue(s) this PR fixes:
Fixes #80172
Special notes for your reviewer:
Does this PR introduce a user-facing change?: