Zonal NEG being removed from Load Balancer #1589
Comments
/kind support

@Scalahansolo Are there any node pool changes that occur around that time? About how long does it take for the zones to be added back? Which cluster version are you using? Can you open a support ticket with GKE? They/we will be able to investigate further, since they will have access to the cluster and the master logs (where the ingress controller logs are).
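For anyone hitting the same symptom, the zone membership that the NEG controller and the load balancer each believe is current can be inspected from the CLI. This is a sketch only, and it requires access to the affected cluster and project; `my-service`, `my-namespace`, `my-neg`, `my-backend-service`, and the zone are placeholders to substitute with your own values.

```shell
# Read the NEG names and zones the NEG controller has recorded on the Service
# via its cloud.google.com/neg-status annotation.
kubectl get service my-service -n my-namespace \
  -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'

# List the zonal NEGs in the project, then the endpoints in one zone.
gcloud compute network-endpoint-groups list
gcloud compute network-endpoint-groups list-network-endpoints my-neg \
  --zone=us-central1-a

# Check which NEG backends the load balancer's backend service currently
# holds and whether their endpoints report healthy (backend services
# created for an ingress are typically global).
gcloud compute backend-services get-health my-backend-service --global
```

Comparing the zones listed in the `neg-status` annotation against the backends attached to the backend service shows whether the controller dropped the zone or the load balancer did.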
k8s-triage-robot: The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@Scalahansolo which GKE version was this on?
k8s-triage-robot: The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
k8s-triage-robot: The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
During rollouts of new pods, there are instances where the load balancer for the GCE ingress drops a zone out of all of the serving backends, and it takes an unknown amount of time before that zone's endpoints get back into the pool.

There are plenty of existing issues describing this behavior, and we have tried a handful of things to alleviate it, but none has fully resolved the problem.

Without a fix, we will have to move back to a single-zone cluster, which feels like a step backward.
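One mitigation sometimes used while NEG endpoints churn during a rollout is to keep spare serving capacity, so that no zone is fully drained before its replacement pods are attached and healthy. The sketch below is an assumption, not a confirmed fix for this report: a Deployment rolling-update strategy with `maxUnavailable: 0` (names, counts, image, and probe are placeholders). On recent GKE versions with container-native load balancing, the `cloud.google.com/load-balancer-neg-ready` readiness gate is also injected automatically, so new pods are not counted ready until the load balancer reports them healthy.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # placeholder name
spec:
  replicas: 6          # placeholder count; keep at least 2 per zone
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove a serving pod before its replacement is ready
      maxSurge: 2         # bring replacements up alongside the old pods
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 60   # give the LB time to drain the endpoint
      containers:
      - name: web
        image: gcr.io/example/web:latest  # placeholder image
        readinessProbe:
          httpGet:
            path: /healthz                # placeholder health endpoint
            port: 8080
```

This does not stop the controller from detaching a zone, but it narrows the window in which a zone can be left with no ready endpoints during a rollout.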