
AWS LB Controller with EndpointSlices enabled not compatible with Kubernetes v1.25 #3071

Closed
mikestef9 opened this issue Feb 22, 2023 · 6 comments · Fixed by #3072
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


mikestef9 commented Feb 22, 2023

Describe the bug
Updated EKS cluster to v1.25, running the latest version of the LB controller (v2.4.6) with the following config:

# enableEndpointSlices enables k8s EndpointSlices for IP targets instead of Endpoints (default false)
enableEndpointSlices: true

The LB controller pod is crash looping with the following logs. It seems we may have missed this removal: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#endpointslice-v125

{"level":"error","ts":1677101790.503112,"logger":"controller.targetGroupBinding","msg":"Could not wait for Cache to sync","reconciler group":"[elbv2.k8s.aws](http://elbv2.k8s.aws/)","reconciler kind":"TargetGroupBinding","error":"failed to wait for targetGroupBinding caches to sync: no matches for kind \"EndpointSlice\" in version \"[discovery.k8s.io/v1beta1\](http://discovery.k8s.io/v1beta1/)""}
{"level":"info","ts":1677101790.503688,"logger":"controller.service","msg":"Starting workers","worker count":3}
{"level":"info","ts":1677101790.5037487,"logger":"controller.service","msg":"Shutdown signal received, waiting for all workers to finish"}
{"level":"info","ts":1677101790.5042624,"logger":"controller.service","msg":"All workers finished"}
{"level":"error","ts":1677101790.5041447,"logger":"controller.ingress","msg":"Could not wait for Cache to sync","error":"failed to wait for ingress caches to sync: timed out waiting for cache to be synced"}
{"level":"error","ts":1677101790.5045505,"msg":"error received after stop sequence was engaged","error":"failed to wait for ingress caches to sync: timed out waiting for cache to be synced"}
{"level":"info","ts":1677101790.504128,"logger":"controller-runtime.webhook","msg":"shutting down webhook server"}
{"level":"error","ts":1677101790.504633,"logger":"setup","msg":"problem running manager","error":"failed to wait for targetGroupBinding caches to sync: no matches for kind \"EndpointSlice\" in version \"[discovery.k8s.io/v1beta1\](http://discovery.k8s.io/v1beta1/)""}

Steps to reproduce
Run the controller on a Kubernetes v1.25 cluster with enableEndpointSlices: true in the controller config.
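
A quick way to confirm the cluster side (a minimal client-go sketch, assuming a default kubeconfig at ~/.kube/config) is to list the served versions of the discovery.k8s.io group; on a v1.25 cluster only discovery.k8s.io/v1 should be printed:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; adjust the path for other setups.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "discovery.k8s.io" {
			for _, v := range g.Versions {
				// On a v1.25 cluster this prints only discovery.k8s.io/v1.
				fmt.Println(v.GroupVersion)
			}
		}
	}
}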

Expected outcome
The controller should work as normal.

Environment

  • AWS Load Balancer controller version v2.4.6
  • Kubernetes version 1.25
  • Using EKS: yes

kishorj (Collaborator) commented Feb 22, 2023

/kind bug

k8s-ci-robot added the kind/bug label on Feb 22, 2023
kishorj (Collaborator) commented Feb 22, 2023

We will fix this in the patch release v2.4.7

kishorj (Collaborator) commented Feb 23, 2023

/reopen
will close on v2.4.7 release

k8s-ci-robot reopened this Feb 23, 2023
k8s-ci-robot (Contributor) commented Feb 23, 2023

@kishorj: Reopened this issue.

In response to this:

/reopen
will close on v2.4.7 release

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

stevehipwell (Contributor) commented

@kishorj is there an ETA for the v2.4.7 patch release?

kishorj (Collaborator) commented Feb 23, 2023

Patch release is published, closing the issue.
