New option to set a shutdown grace period #5855
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @bcarlsson!
Hi @bcarlsson. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@bcarlsson please add e2e tests. |
/ok-to-test |
/hold |
@bcarlsson: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The preStop hook will trigger /wait-shutdown.
@aledbf I've now added e2e tests to verify both endpoints during the shutdown grace period. |
/assign @bowei |
@bcarlsson please don't assign PRs manually |
@aledbf ok sorry, tried to follow an earlier comment by the robot. |
Codecov Report
@@ Coverage Diff @@
## master #5855 +/- ##
==========================================
+ Coverage 55.75% 55.82% +0.07%
==========================================
Files 91 94 +3
Lines 6451 6579 +128
==========================================
+ Hits 3597 3673 +76
- Misses 2411 2447 +36
- Partials 443 459 +16
Continue to review full report at Codecov.
@aledbf as I wrote in an earlier comment, we no longer see the point of a new endpoint.
@bcarlsson please squash the commits |
@aledbf I've squashed the commits. |
It looks like this is good to go (except for the PR title)? We've been eagerly waiting to get this merged, as on clusters that scale heavily we see some failed requests while the ingress-nginx pods are shutting down but the (GKE) load balancers are still sending traffic to them.
What's the state of this? Anything blocking? |
This change is missing the update of the helm chart values to show how to configure the endpoint and also an e2e test using it. |
@aledbf I apologize for any confusion. As I wrote in a comment earlier on September 14th, we realized that a new endpoint is not needed. As commented on Oct 7, I reverted all changes related to a new endpoint. Is an update of the helm chart needed for this, given there are no changes to any endpoint? I know that only the last part of the PR description reflects the actual change now. Do you want me to update the title and description?
Personally I'd update the description (and title) to reflect the actual change. That said, I do think it's good to keep the initial description around for historical reference as to how this started out and how it came to its current state. That's something we do internally too.
Yes, please. |
I've now updated the title and description. |
/test pull-ingress-nginx-e2e-1-19 |
/lgtm |
@bcarlsson thanks! |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: aledbf, bcarlsson. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@bcarlsson please remove the "Old description for reference" section. There is no code related to that description. Even if that was the initial intention, it is confusing.
Done |
/hold cancel |
What this PR does / why we need it:
When running the ingress controller behind a load balancer in AWS, we need to have a grace period for the shutdown process in case of a scale down event. If not, the pod might be terminated before it has been removed from the load balancer target group.
This PR gives the user an option to set a grace period between the moment shutdown is triggered and the moment the actual shutdown begins.
Types of changes
Which issue/s this PR fixes
fixes #4726
How Has This Been Tested?
It has been deployed in a running Kubernetes cluster and verified by monitoring the /healthz endpoint.
Checklist: