EndpointSlices causes crash (panic) with headless service #9606
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. The instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@iMartyn the controller pod is in a crashloopbackoff, the service type is NodePort, and the ArgoCD config shows hostNetwork. These are not default configurations. Does it work if you do an install as per the documented procedures and do not use hostNetwork or service type NodePort?
The controller pod is in crashloop backoff because of the issue in question. The controller works perfectly if I do not have a headless service. I knew you would try and dismiss this as a configuration error, which is why I dutifully gave all the completely irrelevant information. Please read the issue again.
/kind bug
I edited my comments in the previous post. Sorry for the incorrect comments (now edited) earlier.
There's no world where the hostNetwork setup would cause a crash when interacting with the API when the controller works normally for other services, so whilst I have not tested this, it would be a waste of time to do so. Either the controller can read from the k8s API or it cannot. If it couldn't, then it would not work for other service types. I have had to move everything into the same namespace and point at a normal service, and this is working, so the controller is simply not dealing with headless services correctly.
I don't think the CI tests against headless services. But before diving into that, can you post
And yes, there was a change to use EndpointSlices, which means you can also test and report status using a version of the controller from before the EndpointSlices implementation #8890
I cannot currently retest this because doing so takes my whole cluster offline. The service was working with port-forward and from other pods.
kubectl -n jellyfin describe svc jellyfin
Again, my system isn't in the state where I can do this now. I had to fix my system by moving everything into one namespace, so I cannot give you the output of that. I can tell you the service was working with both port-forward and from other pods by name.
I wonder if it's related to this issue: #9550
@iMartyn were you using a service of --type externalName or just one with a selector?
No, ExternalName kills clusters using the controller, because it can't resolve
@iMartyn are the namespace of the ingress and the namespace of the backend service the ingress routes to different?
This is so frustrating.... I've already given this information in the initial report. The whole point of this is for separation.
Community folks volunteer their free time to move towards a resolution, so frustration is not a great direction to head into; this is different from a paid-support type of engagement (to state the obvious). The reason for asking about namespaces is the lack of explicit clarification on whether the ingress and the backend service are in the same namespace, as per the spec.
And manually creating EndpointSlices is also not a normal use case. So fleshing out the issue with an explicit, clear description of the problem helps to complete the triage.
@strongjz I could see it being related, yes; the stack trace seems to point to the same thing at least.
@schoentoon were you trying to set up an EndpointSlice manually that pointed to an IP address from a different namespace, as in the ingress was in one namespace and the EndpointSlice was intended to contain an IP address not from the same namespace?
@longwuyuan kind of. I was manually creating it, pointing it to an IP address outside of the cluster entirely. Ironically, similar to OP, to the host where I run my Jellyfin.
@iMartyn could you please share the EndpointSlice yaml, and have I understood correctly that you crafted the EndpointSlice by hand?
Are we expecting this issue to be fixed by #9550?
Headless services without selectors do not create EndpointSlices, so we should validate that and reject it in the admission controller, since ingress-nginx uses endpoints to identify the backends. https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#why-endpoints-and-not-services https://kubernetes.io/docs/concepts/services-networking/service/#without-selectors

How you configured the ingress and the service caused the panic; there are no EndpointSlices since the Jellyfin Service has no selectors. So the ingress is working fine, minus the pointer issue: the endpoint ready pointer was nil since no endpoints are ready to serve traffic. As far as the panic goes, I do believe it is fixed in #9550.

We need to add a check for this to the admission controller, and for security reasons we need to add a config option to allow cross-namespace traffic like this. Namespaces are used to segment the cluster for multiple reasons. Allowing a service in one namespace to use the endpoints of a service in another namespace is a security concern and should have to be explicitly allowed.
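For illustration, a hand-crafted EndpointSlice of the sort being discussed might look like the sketch below (the names, namespace, address, and port are hypothetical, not taken from the report). The point is that endpoints[].conditions, and therefore its ready field, is optional in the discovery.k8s.io/v1 API, so a controller that dereferences it without a nil check can panic:

```yaml
# Hypothetical, minimal EndpointSlice for a selector-less headless Service
# named "jellyfin". The endpoints[].conditions block is omitted, so the
# "ready" condition is unset (a nil pointer when read through client-go).
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: jellyfin-manual
  namespace: jellyfin
  labels:
    kubernetes.io/service-name: jellyfin   # associates the slice with the Service
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 8096             # placeholder port
endpoints:
  - addresses:
      - "192.168.1.50"     # placeholder address outside the cluster
    # no "conditions:" here, hence no "ready" value
```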
I did create the EndpointSlices manually, but until the panic is resolved I can't retest this because it will bring down the entire cluster. As I have stated multiple times, the services were ready and accepting traffic from other pods and via port-forward. Once there's a release containing #9550 I will see if I can recreate the situation on that version.
This is still a security concern; how does ingress know you should have access to those endpoints? Service A controlled by Team A, for example. If an admin allows this, fine. Network policies might be the actual answer here, though. Maybe this is out of the purview of ingress?
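As a rough illustration of the network-policy idea (all names and labels below are hypothetical), cross-namespace access could be made explicit by only allowing traffic into the backend namespace from the ingress controller's namespace:

```yaml
# Hypothetical NetworkPolicy: only pods in the "cluster-ingress" namespace may
# reach the jellyfin pods; all other ingress traffic to them is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-namespace
  namespace: jellyfin
spec:
  podSelector:
    matchLabels:
      app: jellyfin               # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: cluster-ingress
```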
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
Yes, K8S in general dealt with it by controlling access to EndpointSlices with RBAC: kubernetes/kubernetes#103675
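A minimal sketch of what that looks like in practice (names are hypothetical, and this is the generic RBAC mechanism rather than anything ingress-nginx-specific): read access to EndpointSlices can be granted per namespace and bound only to the subjects that should see them.

```yaml
# Hypothetical namespaced Role granting read-only access to EndpointSlices.
# Only subjects bound to this Role via a RoleBinding can read them here.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpointslice-reader
  namespace: jellyfin
rules:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
```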
Hi. Reading this again after so many months and wanting to take it further. Even though there have been discussions, there is a fundamental aspect that needs to be stated as the obvious: the Ingress-API spec wants the backend destination to be in the same namespace as the ingress. The intricate use of EndpointSlices, RBAC, serviceaccounts, etc. is all geared to work with this tenet.

Since the final goal as described in the original issue description is to traverse namespaces, there are no resources available in the project to work on that kind of functionality. The shortage of resources has required the project to deprecate features that are far from the implications of the Ingress-API, as the focus is to secure the controller by default as well as implement the Gateway-API. Although some users use service type externalName, that by itself has its own complications; it is one example of a feature that users have the option of using but that is too far from the Ingress-API for the project to allocate resources to. Similarly, headless services or other complicated setups for cross-namespace routing of external traffic to internal workloads are not something the project has resources to allocate to.

As such this issue is adding to the tally of open issues without tracking any real action item, hence I will close the issue for now.

/close
@longwuyuan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
What happened:
I want to create some namespace separation for a service, so I have a service in one namespace and a headless service with EndpointSlices in another.

"Actual" Service in cluster-ingress namespace:

Service in jellyfin namespace (the one the ingress points to):

Ingress in jellyfin namespace:

Logs of pod:
What you expected to happen:
I expect the Ingress controller to work with headless services.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
(from logs, because the controller is no longer running (crashloop))
Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:43:11Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
uname -a: Linux ssh-54c8746bdc-6r84w 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 Linux
kubectl get nodes -o wide:
ArgoCD helm chart. App manifest:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Others:
kubectl describe ...
of any custom configmap(s) created and in use

How to reproduce this issue:
Create a service in one namespace and look up its service IP (which wouldn't have to be done if ExternalName didn't completely crash the cluster, because the ingress-controller eats resources instead of just doing the lookup).
Create a service and EndpointSlice in another namespace, pointing at the IP of the first service.
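A minimal sketch of what those two steps could look like (the namespaces, names, and ClusterIP below are placeholders, not the manifests from the report):

```yaml
# Step 1 (hypothetical): the "real" Service in one namespace. Note its
# ClusterIP after creation, e.g. via: kubectl -n app-ns get svc backend
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: app-ns
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
---
# Step 2 (hypothetical): a selector-less headless Service in the namespace the
# Ingress lives in, plus a hand-crafted EndpointSlice pointing at the ClusterIP
# noted above (10.96.0.42 is a placeholder).
apiVersion: v1
kind: Service
metadata:
  name: backend-proxy
  namespace: ingress-ns
spec:
  clusterIP: None        # headless, no selector
  ports:
    - port: 80
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-proxy-1
  namespace: ingress-ns
  labels:
    kubernetes.io/service-name: backend-proxy
addressType: IPv4
ports:
  - port: 80
    protocol: TCP
endpoints:
  - addresses:
      - "10.96.0.42"     # ClusterIP of the Service from step 1
```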