Headless service is not supported (Original: Seeing it load balance between client interfaces) #22
Thank you for the feedback. To understand your situation/environment, could you please provide your Pod/Service/NetworkAttachmentDefinition?
To clarify my understanding, could you please provide an example in YAML?
Attached are the resource definitions. The issue seems to be triggered when I specify `clusterIP` as `None` (i.e., a headless service).
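For reference, here is a minimal sketch of the kind of definitions involved (not the actual attachment: the names, CNI config, and subnet are placeholders, and the `k8s.v1.cni.cncf.io/service-network` annotation is the one I understand multus-service to use). The `clusterIP: None` line is what makes the Service headless:

```yaml
# Secondary network (placeholder macvlan config)
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }
    }'
---
# Headless Service targeting the secondary network
apiVersion: v1
kind: Service
metadata:
  name: dst-svc
  annotations:
    k8s.v1.cni.cncf.io/service-network: macvlan1  # assumed multus-service annotation
spec:
  clusterIP: None        # headless: the unsupported case
  selector:
    app: dst
  ports:
    - port: 8080
      targetPort: 8080
```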
Currently we cannot support headless services. Supporting them requires no change to multus-service; it requires changes in Kubernetes upstream. kpng also has a problem with this and is working out how to fix it in kubernetes-retired/kpng#349. I will update README.md to explicitly mention this and close this issue, thanks.
As per the k8snetworkplumbingwg#22 discussion, headless services are not supported; hence, add this limitation to README.md.
@s1061123 I think kubernetes-retired/kpng#349 is closed, so if I understand correctly, you can check it.
Unfortunately, it is not. The issue is now tracked in kubernetes/kubernetes#112491 and is still open. But there is good news: the Kubernetes Multi-Network WG is working on multi-network support, including the Kubernetes Service feature, so the WG will provide a solution for this. As far as I know, they want use-cases for Services on multiple networks. I strongly recommend joining the call and sharing your use-cases with them. Hence I have decided not to continue this development (because it is a prototype and the code is obsolete), and I am looking forward to seeing the above implementation. But this code is open source, so you can implement what you want, of course!
I have a setup with two pods, a `src` and a `dst` pod. Each pod has two interfaces: the default k8s interface and a secondary interface created using multus. The `dst` pod is listening on all interfaces on a specific port, i.e., `:8080`.

The `src` pod repeatedly creates a gRPC connection to the `dst` pod, makes an RPC call, and then closes the connection, at 1-second intervals.

The `dst` pod is using the gRPC `peer` package to get the peer IP from which the request comes. What I am seeing is that the peer sometimes reports the default address and sometimes reports the secondary address, when I would have expected it to report only the secondary address. This leads me to believe that the request is not always proxied via the multus-proxy and sometimes uses the default proxy.

Thoughts?
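For completeness, a minimal sketch of how the `dst` pod might be defined (the image name and network name are placeholders; `k8s.v1.cni.cncf.io/networks` is the standard multus annotation for requesting a secondary interface):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dst
  labels:
    app: dst
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan1   # placeholder secondary network name
spec:
  containers:
    - name: server
      image: example.com/grpc-server:latest  # placeholder image
      ports:
        - containerPort: 8080
```

Because the server binds to all interfaces, a connection arriving over either interface is accepted, so the reported peer address reveals which path each connection actually took.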
On a side note: what if the `multus-service-controller` created a "second" service without a selector and created endpoints that matched that service? This would allow (I think) the default proxies to work. To make this really work, we would have to have a `MultusService` (which isn't great) or use additional annotations to specify the selector.
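To illustrate, this is the standard Kubernetes pattern for a Service without a selector paired with a manually managed Endpoints object of the same name (all names and IPs here are hypothetical; the controller would populate the addresses with the pods' secondary-network IPs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dst-secondary
spec:
  # No selector: the endpoints controller will not manage Endpoints for this Service
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: dst-secondary    # must match the Service name
subsets:
  - addresses:
      - ip: 10.1.1.12    # hypothetical secondary-network IP of the dst pod
    ports:
      - port: 8080
```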