Fix status update in case of connection errors #3267
Conversation
No regression test?
Writing one :)
Codecov Report
@@ Coverage Diff @@
## master #3267 +/- ##
==========================================
- Coverage 48.36% 48.31% -0.05%
==========================================
Files 75 75
Lines 5554 5561 +7
==========================================
+ Hits 2686 2687 +1
- Misses 2523 2529 +6
Partials 345 345
@ElvinEfendi this is now ready with e2e test 😉
@@ -333,6 +352,13 @@ func (s *statusSync) updateStatus(newIngressPoint []apiv1.LoadBalancerIngress) {
	sort.SliceStable(newIngressPoint, lessLoadBalancerIngress(newIngressPoint))

	for _, ing := range ings {
		curIPs := ing.Status.LoadBalancer.Ingress
This block of code was moved here to avoid spawning goroutines when there is no actual update to apply.
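A rough, self-contained sketch of that ordering, under stated assumptions: the types and the update loop below are simplified stand-ins, not the controller's real API. The idea is that the new address list is compared against each ingress's current status first, and a goroutine is only started for objects whose status actually differs.

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
	"sync"
)

// LoadBalancerIngress and Ingress are simplified stand-ins for the
// Kubernetes API types the controller works with.
type LoadBalancerIngress struct {
	IP       string
	Hostname string
}

type Ingress struct {
	Name   string
	Status []LoadBalancerIngress
}

// updateStatus sorts the new address list once, then updates only the
// ingresses whose published status actually differs, so no goroutine is
// started for a no-op update.
func updateStatus(ings []*Ingress, newIngressPoint []LoadBalancerIngress) {
	sort.SliceStable(newIngressPoint, func(i, j int) bool {
		return newIngressPoint[i].IP < newIngressPoint[j].IP
	})

	var wg sync.WaitGroup
	for _, ing := range ings {
		curIPs := ing.Status
		sort.SliceStable(curIPs, func(i, j int) bool {
			return curIPs[i].IP < curIPs[j].IP
		})

		if reflect.DeepEqual(curIPs, newIngressPoint) {
			continue // status is already up to date, skip the goroutine
		}

		wg.Add(1)
		go func(ing *Ingress) {
			defer wg.Done()
			ing.Status = newIngressPoint
			fmt.Printf("updated status of %s\n", ing.Name)
		}(ing)
	}
	wg.Wait()
}

func main() {
	ings := []*Ingress{{Name: "demo"}}
	updateStatus(ings, []LoadBalancerIngress{{IP: "10.0.0.1"}})
}
```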
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aledbf, ElvinEfendi.
thanks for the regression test ❤️
What this PR does / why we need it:
If any error occurs while updating the ingress status (for example, the API server being temporarily unavailable), the leader election code stops working. This change forces the termination of the running elector and starts a new one.
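A minimal sketch of that restart pattern, in plain Go rather than the controller's actual client-go leader-election types; `runElection`, `updateStatus`, and the loop wiring below are illustrative assumptions, not the PR's real code. When a status update fails, the context driving the current elector is cancelled and a fresh election is started, so leadership is re-acquired after the connection comes back.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// runElection stands in for a client-go leader elector: it invokes the
// callback once leadership is acquired and returns when ctx is cancelled.
func runElection(ctx context.Context, onStartedLeading func(context.Context)) {
	onStartedLeading(ctx)
	<-ctx.Done()
}

// updateStatus stands in for publishing the ingress status to the API
// server; here it always fails, simulating a connection error.
func updateStatus(ctx context.Context) error {
	return errors.New("connection refused")
}

func main() {
	// Restart the elector whenever a status update fails; capped at three
	// attempts only so this example terminates.
	for attempt := 1; attempt <= 3; attempt++ {
		ctx, cancel := context.WithCancel(context.Background())

		runElection(ctx, func(leaderCtx context.Context) {
			if err := updateStatus(leaderCtx); err != nil {
				fmt.Printf("attempt %d: status update failed: %v\n", attempt, err)
				// Terminate the running elector ...
				cancel()
			}
		})

		cancel()
		// ... and start a new one on the next iteration, so a new leader
		// is elected once the API server is reachable again.
		time.Sleep(100 * time.Millisecond)
	}
}
```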
Which issue this PR fixes:
fixes #3180
fixes #3033
Test image:
quay.io/aledbf/nginx-ingress-controller:0.415
To test this we need to simulate network issues:
- start kubectl proxy --address=0.0.0.0 --accept-hosts=.*
- stop the kubectl proxy
- start the kubectl proxy again
- the election of a new leader should appear in the log after the reconnection