Kubectl fails to resolve names except through DNS #48
Can you please paste the output of … ?
@jaraco if you see your …
@jaraco The version command prints the cluster version as well, which is why it connects to the server.
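A quick aside for anyone else landing here: assuming a kubectl recent enough to support the flag, the server round-trip can be skipped by asking for the client version only.

```sh
# Prints only the client version; kubectl does not contact the API server.
kubectl version --client
```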
@dims, sorry to confuse matters with two names, but I find kubectl can't resolve either name, while other tools will resolve both. Here's the output after making the suggested change: …
@pwittrock: I see that now. I've updated the OP to strike that aspect.
@jaraco AFAIK, ping and socket.gethostbyname are both IPv4-only. Also, can you please run … ?
I think IPv6 is a red herring. As you can see, the IPv6 address appearing in the output is just the IPv6 address of the local WiFi router (192.168.14.1 and 2601:14d:8701:59f8:f299:bfff:fe02:cee3 are the same host). It's true I have IPv6 enabled in my environment, but the kubernetes environment and the VPN I use to connect to it are IPv4-only, so I expect kubectl to resolve the name … It's interesting that I only see the delay resolving the name when … But in any case, I still believe …
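Not part of the comment above, but for readers who want to compare the two resolution paths side by side on macOS (assuming the Mac host described later in this thread, and using the hostname from this issue):

```sh
# Ask macOS's system resolver, the same machinery ping and gethostbyname use;
# it knows about VPN-pushed DNS servers and search domains.
dscacheutil -q host -a name kub1.mycorp.local

# Query the nameservers from /etc/resolv.conf directly over UDP, roughly what
# a CGO-disabled (cross-compiled) Go binary like kubectl ends up doing.
dig kub1.mycorp.local
```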
I'm able to work around the issue by manually maintaining a name mapping in …
But surely this isn't sustainable. Ideally, there'd be a fix for the issue so we don't have to maintain this mapping on each host.
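The comment doesn't say where that mapping lives; assuming it is /etc/hosts (which Go's pure-Go resolver does consult), an entry of the following shape is enough. The address below is a placeholder.

```sh
# Placeholder address; substitute the real IP of the cluster master.
echo "10.0.0.10  kub1 kub1.mycorp.local" | sudo tee -a /etc/hosts
```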
This appears to be a normal issue with Go apps. The resolution process is described here: https://golang.org/pkg/net/. I think we need to get …
I changed … Obviously not a fix; we need someone who knows the Go build details to see how to work around this issue. If CGO is disabled because of cross-compiling, I think it works if you have the darwin binaries available during the build, but I'm not really sure. If CGO was disabled for other reasons, then it might be a bigger issue to fix. To see what name resolution is being used, do the following: …
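The exact command isn't preserved above, but Go's standard GODEBUG netdns option is one way to surface the resolver choice; this is generic Go runtime behavior, not anything specific to kubectl.

```sh
# With netdns=2 the Go runtime prints which resolver it picks ("go" or "cgo"),
# along with the names being looked up, to stderr.
GODEBUG=netdns=2 kubectl version
```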
Still an issue with 1.8.x. Seriously, the fix is known: stop cross-compiling this for macOS.
@knutster would you be interested in contributing a fix?
Closed in favor of kubernetes/release#469. We will need to continue to cross-compile, since the build process builds the binaries for all OS distros. However, we can try to use the technique described here to make cross-compilation work with cgo.
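The referenced technique isn't quoted here, but presumably it amounts to giving the Go toolchain a darwin-targeting C compiler (e.g. osxcross) so cgo can stay enabled while cross-compiling. A rough sketch, assuming osxcross's conventional o64-clang wrapper and the cmd/kubectl package from the main kubernetes repo:

```sh
# Assumes an osxcross toolchain is installed and on PATH.
CGO_ENABLED=1 GOOS=darwin GOARCH=amd64 CC=o64-clang \
  go build -o kubectl ./cmd/kubectl
```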
Hey guys, I'm getting the following error even after adding the new cluster to kubeconfig when running `kubectl version --short`: …
Can anyone suggest something?
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): Not exactly
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kubectl "unable to connect to the server" "read udp"
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Kubernetes version (use `kubectl version`):

Environment:

- Kernel (e.g. `uname -a`): n/a

What happened:

It takes 15+ seconds to fail to connect to the cluster master and fails because it can't figure out the name.

192.168.14.1 is the IP address of my local wifi router. It doesn't (and shouldn't) know anything about kub1. As you can see, `ping` and `gethostbyname` both resolve the name through the Cisco VPN client installed and connected on the host.

What you expected to happen:

kubectl should connect to `kub1` and `kub1.mycorp.local` like any other application on my system. It shouldn't be making UDP calls to the nameserver directly but should use the IP stack on the host.

Additionally, the command probably shouldn't be attempting to connect for a `version` command. Preferable would be for the command to return the version immediately... and for this issue only to appear if a cluster-relevant command were issued.

How to reproduce it (as minimally and precisely as possible): See above.
Anything else we need to know: