Feature request: Drain based on annotation #4188
Comments
@anton-johansson please check #4514 (not merged yet)
@aledbf That looks interesting indeed. But I'm not sure it's related to marking certain pods as draining (i.e. not receiving new sessions).
@anton-johansson can you not edit the specific pod's manifest and change the readiness probe in a way that makes it fail? Then ingress-nginx will remove that pod from its list and stop proxying new connections to it (it'll still process the existing ones).
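For what it's worth, one way to make a readiness probe fail on demand is to have it check a marker file you control - a minimal sketch, assuming an exec probe and a hypothetical /tmp/drain path:

```yaml
# Illustrative only: the probe passes while /tmp/drain is absent, so the pod
# can be flipped to NotReady on demand. Path and timings are made up.
readinessProbe:
  exec:
    command: ["sh", "-c", "test ! -f /tmp/drain"]
  periodSeconds: 5
  failureThreshold: 2
```

Creating the file (e.g. `kubectl exec <pod> -- touch /tmp/drain`) marks the pod NotReady, it drops out of the Service endpoints, and ingress-nginx stops sending it new requests; whether existing sticky sessions survive that is exactly the open question raised below.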
Are you sure that's how it works? I get the feeling that pods that turn Unready will be removed from load balancing altogether, including existing sessions. If I'm wrong, though, your solution seems perfectly valid and is surely something I'd like to try out. :D
Give it a try ;)
We have this exact problem: stateful app, no way to migrate sessions, need draining until sessions expire when updating. I implemented a PoC based on nginx-ingress 0.25.1 and it seems to be working, but I had to patch sticky.lua to do it.

So... Ideally, I'd like to see session draining based on annotations implemented in nginx-ingress, but if that's not going to happen, I'd at least like to be able to implement it myself without the need for patching upstream code. It could be done with minimal effort - say, a hook point in the sticky-session balancing code that custom logic could plug into.

Once I'm done with turning the PoC into something production-ready, I'd be happy to open a PR, but it would only be a PR for the hook I described above, since every complete implementation would/could be different.
@wknapik: Cool! I'd love to see your patch. I don't want to run a patched version either, but I'm still interested in seeing your solution. Either way, an annotation-based approach seems like the optimal solution.
This would be a nice feature to have. The only ingress I have found that supports this without an additional cost/deployment outside of the cluster is jcmoraisjr/haproxy-ingress.
@fejta-bot: Closing this issue.
Even if the issue is closed (and outdated), I'll add our recent experience. We used the canary deployment strategy: you can create a new ingress with the same host, pointing to a service with a reduced set of pods (so that only the pods where new users should land are selected), and mark it as a canary.
All new users are then directed to that subset, old users keep their sessions to the old pods, and when the migration has finished, you can remove the canary ingress. The persistence cookie will still be valid and the standard ingress will honor it.
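For reference, a minimal sketch of what such a canary ingress could look like. The names, host, and service below are made up; nginx.ingress.kubernetes.io/canary and canary-weight are the relevant annotations, but the exact interaction with session affinity depends on the controller version, so treat this as a starting point rather than a recipe:

```yaml
# Illustrative canary Ingress: same host as the main ingress, backed by a
# Service that selects only the pods new users should land on.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "100"  # illustrative weight; tune per rollout
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-new  # selects only the new pods
            port:
              number: 80
```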
Is this a request for help?
No
What keywords did you search in NGINX Ingress controller issues before filing this one?
nginx ingress controller drain annotation
nginx ingress controller drain label
I found #2322, which is a similar request; it was closed because drain is commercial-only (NGINX Plus). I understand that, but maybe we can work out a solution that works without the NGINX Plus feature (as was done with sticky sessions already). If I understand things correctly, we use Lua scripting to handle the balancing and sticky sessions, so it should be possible to check upstream pod annotations there to decide whether or not a pod should be considered for new sessions.
Is this a BUG REPORT or FEATURE REQUEST?
Feature request
NGINX Ingress controller version:
Kubernetes version (use kubectl version):
Environment:
Feature request:
I have a scenario similar to the one described in issue #2322. Our application does not have session replication, and we need a better way of running version rollouts. We need a way to tell NGINX not to send new sessions to older deployments. I was thinking that we could use a Pod-level annotation for this, either one named after the drain concept or one named differently so as not to mix it up with NGINX Plus' built-in functionality (see the sketch below).
This would of course only have any effect when sticky sessions are in use, which could be a little confusing.
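Purely to illustrate the shape this could take, a sketch with a made-up annotation key (it is not an existing ingress-nginx annotation):

```yaml
# Hypothetical pod annotation marking a pod as draining: it would keep serving
# existing sticky sessions but be skipped when new sessions pick an upstream.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-old-7d4f9
  annotations:
    example.ingress.kubernetes.io/sticky-drain: "true"  # made-up key
spec:
  containers:
  - name: app
    image: myapp:1.2.3
```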
Looking around in the code, I assume that this functionality would take place somewhere in the pick_new_upstream function of sticky.lua.
Thoughts? Ideas? I could try and see if I can develop the changes myself, but I need to know if it's something that is actually wanted and if it's the right approach.