add Timeout in WaitForCacheSync #894
Conversation
/priority important-soon
Force-pushed from b873572 to e964f21
/cc @Garrybest
Force-pushed from ed19c4f to 2cbb560
/assign @Garrybest
ctx, cancel := context.WithTimeout(s.ctx, cacheSyncTimeout)
defer cancel()
s.lock.Lock()
defer s.lock.Unlock()
Same as the comment above.
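For context, here is a minimal sketch of how a timeout-bounded cache sync like the snippet above might look with client-go. The cacheSyncTimeout value and the factory-based helper are assumptions for illustration, not code from this PR.

package example

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
)

// cacheSyncTimeout is an assumed value, for illustration only.
const cacheSyncTimeout = 30 * time.Second

// waitForCacheSync bounds the informer cache sync with a timeout derived from
// the parent context, so an unreachable cluster cannot block the caller forever.
func waitForCacheSync(parent context.Context, factory informers.SharedInformerFactory) error {
	ctx, cancel := context.WithTimeout(parent, cacheSyncTimeout)
	defer cancel()

	factory.Start(ctx.Done())
	for informerType, synced := range factory.WaitForCacheSync(ctx.Done()) {
		if !synced {
			return fmt.Errorf("failed to sync cache for %v within %v", informerType, cacheSyncTimeout)
		}
	}
	return nil
}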
Signed-off-by: lihanbo <[email protected]>
Force-pushed from 2cbb560 to ac3878e
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: RainbowMango. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@@ -146,9 +136,20 @@ func (c *ClusterStatusController) syncClusterStatus(cluster *clusterv1alpha1.Clu
klog.V(2).Infof("Cluster(%s) still offline after retry, ensuring offline is set.", cluster.Name)
currentClusterStatus.Conditions = generateReadyCondition(false, false)
setTransitionTime(&cluster.Status, &currentClusterStatus)
c.InformerManager.Stop(cluster.Name)
@mrlihanbo Hello Hanbo, how are you doing?
Do you remember the reason for stopping the informer here when a cluster goes offline?
#2930 is now trying to solve an issue caused by this change.
Also, cc @Garrybest to help recall.
Maybe because we want to re-establish the informer after the apiserver becomes healthy again? 🤔
The reason may not be very convincing, because I don't remember this line either. 🤣
I talked to @mrlihanbo; he said this is probably for suppressing repetitive warning logs, especially for clusters that stay offline for a long time.
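A minimal sketch of the stop-on-offline behavior being discussed, assuming a simplified informer-manager interface; only the Stop call comes from the diff above, the rest is illustrative.

package example

// informerManager is a simplified stand-in for the informer manager; only the
// Stop method mirrors the call shown in the diff above.
type informerManager interface {
	Stop(cluster string)
}

// onClusterOffline records the not-ready condition and stops the per-cluster
// informer, so a cluster that stays offline does not keep emitting cache-sync
// warnings on every status retry. Re-creating the informer is left to the
// recovery path once the cluster becomes reachable again.
func onClusterOffline(mgr informerManager, clusterName string, setNotReady func()) {
	setNotReady()
	mgr.Stop(clusterName)
}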
Signed-off-by: lihanbo [email protected]
What type of PR is this?
/kind bug
What this PR does / why we need it:
There is a scenario where some of the joined clusters in Karmada are unhealthy. In that case, if the karmada-controller-manager restarts, it is blocked in the WaitForCacheSync process until the unhealthy cluster recovers; a minimal sketch of a bounded wait is shown below.
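To illustrate the problem and the fix, here is a minimal sketch assuming client-go's cache.WaitForCacheSync: the plain call returns only once the caches sync or the stop channel closes, so the restart stalls on an unhealthy cluster unless the stop channel is tied to a deadline. The helper name and timeout handling below are placeholders, not the PR's actual implementation.

package example

import (
	"time"

	"k8s.io/client-go/tools/cache"
)

// waitBounded closes the stop channel after the given timeout, so one
// unhealthy member cluster cannot stall the controller-manager restart.
// It returns false if the caches did not sync before the deadline.
func waitBounded(timeout time.Duration, hasSynced ...cache.InformerSynced) bool {
	stopCh := make(chan struct{})
	timer := time.AfterFunc(timeout, func() { close(stopCh) })
	defer timer.Stop()
	return cache.WaitForCacheSync(stopCh, hasSynced...)
}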
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
"NONE"