A pod with host networking drops traffic if k8s describes secondary IP for a node. #58
Comments
Hi, where is the IP address 10.192.131.176 coming from? It could be related to the kubelet configuration on the host. Is the --node-ip flag set to the main IP of the primary interface? If not, the kubelet could end up using the wrong IP. (I'm not using EKS, but I had a similar issue with the EKS CNI plugin and had to modify my kubelet options to make it work properly.) Laurent
Yes, the 10.192.131.176 IP is a secondary IP (it shows up in the EC2 description), but it's not configured in the OS. I think you're likely right about the kubelet being configured incorrectly, but what controls that? This didn't happen on every node. CloudFormation was used to spin up the set.
Here is the Kubernetes bug tracking the issue: kubernetes/kubernetes#61921. Using --node-ip should straighten this out until the bug is fixed in the AWS cloud provider.
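On a node that is already running, the --node-ip workaround mentioned above can be applied roughly like this. This is a sketch only: the drop-in path, environment-variable name, and metadata endpoint are assumptions that vary by distro/AMI; it assumes a systemd-managed kubelet that reads extra flags from KUBELET_EXTRA_ARGS.

```shell
# Fetch the primary private IP from the EC2 instance metadata service.
PRIMARY_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Add a systemd drop-in passing --node-ip to the kubelet
# (path and variable name are assumptions; adjust for your AMI).
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service.d/20-node-ip.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=${PRIMARY_IP}"
EOF

sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

After the restart, kubectl describe node should show the configured primary IP as the node's InternalIP.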
Thanks, is there a recommended way to apply that workaround? Ideally it would be applied at spinup in case a node goes away.
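Since CloudFormation was used here, one way to apply the workaround at spinup is via EC2 user data, so replacement nodes pick it up automatically. A hypothetical sketch (the cluster name and the bootstrap-script path are assumptions; the script is only present on AMIs that ship the EKS bootstrap helper):

```shell
#!/bin/bash
# Hypothetical user-data sketch: resolve the instance's primary private
# IP from the metadata service and hand it to the kubelet at boot, so
# the node never advertises an EC2 secondary IP.
NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# AMIs that ship the EKS bootstrap script accept extra kubelet args;
# "my-cluster" is a placeholder for the real cluster name.
/etc/eks/bootstrap.sh my-cluster --kubelet-extra-args "--node-ip=${NODE_IP}"
```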
Fixed in kubernetes |
TL;DR: k8s advertises a secondary IP as the private IP for a node where I'm using host networking for a pod (and a headless service to direct traffic to it). The pod drops the packets that are destined for a non-configured IP.
NodeA - Pod1 uses host based networking for an application that is configured as the backend for a headless service (service-pod1).
NodeB - Pod2 resolves an A record for "service-pod1" and receives 10.192.131.176.
The private IP that's configured on the OS for NodeA is 10.192.159.32 (listed as one of the primary private IPs in the EC2 view).
When NodeB/Pod2 sends traffic to the IP 10.192.131.176 it gets dropped, likely by the kernel because the IP isn't configured on the OS.
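The mismatch described above can be checked directly. A diagnostic sketch (the service name and IPs are taken from this report; the node name is a placeholder, and the commands assume cluster access):

```shell
# What does the headless service resolve to? (run inside any pod)
nslookup service-pod1

# What addresses has Kubernetes recorded for the node? (run where
# kubectl is configured; replace the node name with NodeA's real name)
kubectl get node nodea -o jsonpath='{.status.addresses}'

# What is actually configured on NodeA's interface?
ip -4 addr show dev eth0
```

If the InternalIP reported by kubectl (and returned by the headless service) does not appear in the ip addr output, the node is advertising an address the kernel will not accept traffic for.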
To isolate the issue I tested outside of the pods, from NodeB to NodeA, using ping and ncat.
When pinging from NodeB to 10.192.131.176 I receive a response from NodeA's only configured IP, 10.192.159.32:
From 10.192.159.32 icmp_seq=1 Destination Host Unreachable
When trying to connect to an nc listener from NodeB (on NodeA: nc -l 7000) I get an error on NodeB: Ncat: No route to host.
I confirmed using tcpdump in both of these tests that the traffic is reaching NodeA, and also that using the configured IP as the destination (10.192.159.32) works as expected.
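The isolation test above can be reproduced with the following commands (IPs and port are the ones from this report; the interface name is an assumption):

```shell
# On NodeA: listen on an arbitrary port (7000, as in the test above).
nc -l 7000

# On NodeB: the advertised secondary IP fails, the configured IP works.
ncat 10.192.131.176 7000   # -> Ncat: No route to host.
ncat 10.192.159.32 7000    # connects

# On NodeA, confirm the traffic arrives while the tests run.
sudo tcpdump -ni eth0 host 10.192.131.176 or host 10.192.159.32
```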
I terminated the node and the issue is still present, but with different IPs in play.
There is also another node where kubectl describe shows one of the non-configured secondary IPs, but that doesn't seem to present an issue when the pods aren't using host networking (traffic is destined to the pod-configured IP).
My main concern is that the kubernetes headless service is returning an IP that's unusable.
A workaround for this might be to use a non-local bind for the application, but that seems like it should be unnecessary.
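For completeness, a host-level sketch of that direction (the address is the one from this report; the interface name is an assumption). Note that a non-local bind by itself only lets a socket bind the address; for inbound packets to be accepted the kernel also needs the address configured locally:

```shell
# Make NodeA accept traffic for the EC2 secondary IP by configuring it
# on the interface.
sudo ip addr add 10.192.131.176/32 dev eth0

# Alternatively, allow applications to bind non-configured addresses
# (this alone does not make inbound delivery work).
sudo sysctl -w net.ipv4.ip_nonlocal_bind=1
```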
k8s version 1.9, node versions v1.9.2-eks.1
Please let me know if more info is needed.