Is hairpin-mode required for kube-dns? #316
Comments
@t3hmrman So you are not able to get a functioning kube-dns service? There is no need for hairpin mode. Any pod should be able to access the cluster IP of the DNS service. Even if you delete the pod, do you still have issues with DNS resolution?
Hey @murali-reddy, yeah, that's exactly what's happening. Good to know hairpin mode is not required -- it must be something wrong with the resource definition I'm using for kube-dns. I will try deleting the pod again -- it's especially weird that I can access the other kube-dashboard resource by its service IP but not the DNS...

[EDIT] - I'm starting to think this is an issue with containers inside the pod communicating with each other -- starting to think this isn't a kube-router problem at all, but just the way these containers were wired?
An update -- this very likely is not the fault of kube-router. Here's a snippet of the logs:

It may very well be the case that
@t3hmrman did you get a chance to try CoreDNS instead of kube-dns?
No. Kube-router only deals with pod-to-pod connectivity and services.
Hey @murali-reddy, thanks for taking the time to help -- my next step is to try CoreDNS, absolutely. I didn't want to give up, since the solution seems so close :) I'm taking notes as I go so I can make a blog post about my floundering, and at least file a ticket with the relevant project.

What I've done lately is use a headless service to expose the services:
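For context, a "headless" service is one with `clusterIP: None`: DNS lookups for it return the pod IPs directly instead of a virtual ClusterIP. The thread doesn't show the actual manifest, so the name, namespace, and selector labels below are illustrative only:

```shell
# Sketch: create a headless Service so DNS resolves straight to pod IPs.
# The name "kube-dns-headless" and the "k8s-app: kube-dns" selector are
# assumptions -- match them to your own kube-dns pods.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-headless
  namespace: kube-system
spec:
  clusterIP: None          # headless: no virtual IP is allocated
  selector:
    k8s-app: kube-dns
  ports:
    - name: dns
      port: 53
      protocol: UDP
EOF
```

With this in place, `dig kube-dns-headless.kube-system.svc.cluster.local` would return the pod IPs themselves, which helps separate "DNS is broken" from "the service VIP is broken".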
Hey @murali-reddy I managed to narrow it down some more -- it looks like the traffic going through the service IP is actually the problem. If I try to get the kubernetes dashboard by its fully qualified name:

Now, if I try the exact same request, but with the pod IP for the dashboard pod:
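The comparison being described here -- same request via the service name versus via the pod IP -- can be sketched like this (the service name and the pod IP are placeholders, not values from this thread):

```shell
# 1) By fully-qualified service name: exercises kube-dns AND the service VIP.
curl -k https://kubernetes-dashboard.kube-system.svc.cluster.local/

# 2) By pod IP directly: bypasses DNS and the VIP entirely.
#    Look the pod IP up first; 10.244.1.23 below is a placeholder.
kubectl -n kube-system get pods -o wide | grep dashboard
curl -k https://10.244.1.23/

# If (2) succeeds while (1) fails, the fault is in DNS resolution or in
# ClusterIP routing, not in basic pod-to-pod connectivity.
```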
I doubt the external name service stuff I did was necessary, but I'll file an issue with the relevant project. This issue does seem to be related to kube-router, because it's a pod <-> service endpoint issue -- I'm going to start re-reading the docs.

[EDIT] - Current theory is that I don't have IPVS installed properly, since that is what's supposed to handle the service IPs.

[EDIT2] - Nope, IPVS is installed... I see routes when I check.
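Checking whether kube-router has actually programmed IPVS for a service can be done on a node; this sketch assumes `ipvsadm` is installed and you have root (the addresses in the comment are placeholders):

```shell
# List the IPVS virtual services kube-router programs for each ClusterIP.
sudo ipvsadm -Ln
# Every ClusterIP:port should appear with its pod endpoints beneath it,
# roughly like (placeholder addresses):
#   TCP  10.96.0.10:53 rr
#     -> 10.244.0.5:53    Masq  1  0  0

# Kernel-side sanity checks:
lsmod | grep ip_vs    # are the IPVS modules loaded?
ip route              # are the pod-network routes present?
```

A ClusterIP that is missing from `ipvsadm -Ln`, or one listed with no endpoints under it, points at service programming rather than DNS itself.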
Found it! The issue was my firewall rules. I apologize for the wasted cycles -- I just need to update my rules to allow traffic between the locally created networks!
@t3hmrman great you could figure it out.
For posterity, the fix was: connecting my service/cluster network to the regular pods so they can interact.
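The thread doesn't show the exact rules that were added. For anyone hitting the same wall, the shape of such a fix on a ufw-managed host looks roughly like this -- the use of ufw itself and both CIDRs are assumptions (10.244.0.0/16 and 10.96.0.0/12 are common Kubernetes defaults, not values confirmed here):

```shell
# Sketch only: permit traffic between the locally created cluster networks.
# Substitute your actual pod and service CIDRs, and your firewall tool.
sudo ufw allow from 10.244.0.0/16   # pod network
sudo ufw allow from 10.96.0.0/12    # service ClusterIP range
sudo ufw reload
```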
Hey all, thanks for the work on `kube-router`!

I apologize if this is a simple mistake, but am I right in assuming that `kube-router` does not replace `kube-dns`? I'm trying to get `kube-dns` to work on top of `kube-router` (ClusterIP <-> ClusterIP connections work just fine), but I'm having a problem getting names to actually resolve.

As far as why this might be happening, I'm thinking that this log line from `sidecar` inside the `kube-dns` pod might be helpful?

I also thought maybe it might be related to the alpine bug regarding resolv.conf, but people are at least getting IPs returned there; I get:
If I force the ServiceIP of the DNS service:
The direct IP of the pod doesn't fare any better. I know the service has endpoints, and I know the service is reachable (I'm also running the kubernetes dashboard and I can curl it by service IP).
What I thought I might be missing was `--hairpin-mode=true`, since the error message from the `sidecar` container says that it's trying to access `127.0.0.1:53`, but that hasn't worked either... Does anyone know what I might be missing?

If I try to `dig` the ClusterIP of the dns pod from inside an `alpine` container:

NOTE: After a machine restart I no longer get the probe error from `sidecar`, so that is no longer an issue I think, but DNS still doesn't work, so maybe this question is more general now.