Support internal DNS names when exporting kubecfg #800
Interesting! Can you explain a little bit more about your setup?
A bit more information about my setup: I have a few VPCs, all of which existed before I spun up my k8s cluster. One is a management network (172.16.0.0/16) which has an OpenVPN server and a few other monitoring servers in it. The other VPCs are just for different environments (dev/staging/production). Most of our servers are in private subnets without public IPs, with most internet traffic coming through ELBs in public subnets. The management network has VPC peering connections with the other VPCs, and the OpenVPN server sets up routes to the other VPCs for clients. So when we have a VPN connection established, we can connect to the machines from our workstations via their local IPs. It works pretty well, and has been so much easier to manage than our previous solution of using jump boxes. If an engineer leaves, or their account is compromised or something, we can cut network access off at the VPN server immediately, before disabling all their other accounts/credentials/other loose ends. None of that has anything to do with k8s, but I figured it may be useful background info.

When I went to create my staging k8s cluster, my first thought was that it would be great to put all of the masters and nodes into private subnets without public IPs, so that all of us engineers could just access the cluster using local IPs over the VPN connection. That would keep all of our k8s instances from being routable over the internet, with the only incoming internet traffic going through ELBs in public subnets created by k8s services. That isn't quite possible yet, but I know people are putting awesome work into it (#428)! I figured the default approach of putting the k8s instances in public subnets with public IPs and hostnames was fine for the time being, especially if we could limit access to the k8s API to just local traffic from our management network. I don't really want the k8s API directly accessible over the internet, if possible. It turned out that kops has the `--admin-access` flag, which does exactly that.

This is roughly the command that I used to create my cluster (identifiers changed):
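The exact command isn't reproduced in this thread; it was essentially a `kops create cluster` call along these lines, where every name, zone, and size below is a placeholder:

```sh
# Illustrative sketch only -- placeholder identifiers, not the exact command from this comment.
kops create cluster \
  --name k8s.staging.example.com \
  --state s3://example-kops-state \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --node-count 3 \
  --node-size t2.medium \
  --master-size t2.medium \
  --admin-access 172.16.0.0/16 \
  --yes
```

The part that matters for this issue is `--admin-access`, which restricts access to the admin endpoints to the management CIDR.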
Everything came up flawlessly. I actually couldn't believe it. I had just spent the past two weeks working through "kubernetes the hard way", setting everything up manually and tweaking things, then dealing with the fallout of my tweaks, so the fact that a tool did all of that in a matter of minutes was amazing.

Anyway, the only hiccup I had was that when I exported my kubectl config and tried running a command, I couldn't connect to the API. Upon closer inspection, I noticed that my `~/.kube/config` had the `server` set to the public DNS name for the cluster, which isn't reachable from my workstation. In Route53 I saw that there is a corresponding "internal" DNS entry set up for the API, and if I point the `server` in my `~/.kube/config` at that name instead, everything works. So all of that works, but my `~/.kube/config` gets reset every time I update the cluster.

I'm not sure how kops can get the internal name when generating the config, but it must be available somewhere, since it's creating it in the first place! I'll poke around a bit and see if I can't figure it out. I do know that if I run `kops export kubecfg` again, the `server` goes right back to the public DNS name.
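For reference, the manual workaround described here amounts to re-pointing the `server` entry after exporting; something like the following, with a placeholder cluster name:

```sh
# Regenerate the kubeconfig entry, then point it at the internal API record instead.
kops export kubecfg k8s.staging.example.com

kubectl config set-cluster k8s.staging.example.com \
  --server=https://api.internal.k8s.staging.example.com
```

As noted above, the next `kops export kubecfg` or cluster update puts the public name back.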
@jschneiderhan, Hey there. AWS uses the split-horizon DNS model--if you resolve one of their generated host names from within a VPC's address space you'll get VPC IP addresses; outside of their networks you are served the EIP of the instance.
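A quick way to see the split-horizon behaviour is to resolve an AWS-generated hostname from both sides; the hostname and resolver address below are placeholders:

```sh
# From a workstation outside AWS: resolves to the instance's public/elastic IP.
dig +short ec2-203-0-113-10.compute-1.amazonaws.com

# From a host inside the VPC, using the VPC resolver: resolves to the private IP.
dig +short ec2-203-0-113-10.compute-1.amazonaws.com @172.16.0.2
```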
@gladiatr72 thanks - I've noticed that before but didn't know the formal name. I'm not using AWS-generated host names here, though; I'm using the kops-created DNS names (the `api.` and `api.internal.` records in Route53). Either way, I'm running my kubectl from my workstation outside the VPC, so my DNS resolution is going to happen from outside the VPC.
When you connect to your VPN, does it assign the AWS internal DNS server (in your case 172.16.0.2) to your host's resolvers?
@shrabok-surge no - I don't have my VPN server changing any DNS settings client-side. I'm not familiar with private hosted zones so I'll read up on them. Maybe this is a non-issue and is just me not being experienced enough with the networking side of things. If there is a way to have DNS requests for the cluster's names resolve to the internal addresses while I'm on the VPN, that would probably solve my problem.
Ah, I missed that you have a different namespace for your internal DNS:

api.k8s.example.com (public zone)

When you connect to your VPN you inject the AWS DNS server (172.16.0.2 in your case), and you will automatically resolve the internal entries for api.k8s.example.com rather than the external DNS zone entries. In your case, have you tried specifying the internal DNS zone (api.internal.example.com) as the DNS zone parameter on cluster creation?
Not sure if it will work, and it might require a name change. Hopefully that provides some help.
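For what it's worth, on a plain OpenVPN server that kind of DNS injection is usually done with a `push` directive in the server config; the resolver address below is just this thread's example VPC resolver, and Linux clients typically also need an update-resolv-conf or systemd-resolved hook to apply it:

```
# /etc/openvpn/server.conf (excerpt)
# Hand connecting clients the VPC resolver so private-zone names resolve over the VPN.
push "dhcp-option DNS 172.16.0.2"
```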
Ok, so I think I understand what is being recommended: have a private hosted zone for the domain and point an API record in it at the masters' internal IPs, so that the name resolves correctly when I'm on the VPN. That would work, but it would then require me to manually manage the DNS entry in the private hosted zone, since I don't think kops will currently do that for me; kops does manage the corresponding records, but only in the public zone.

With the private topology recently being merged, perhaps I'm just trying to put a square peg in a round hole. The main thing is that I don't want my k8s API accessible over the internet - I want the master security group to only allow traffic from my management subnet, and have kubectl use the internal IP addresses. I assume that when using the private topology, this is the case. I'll have to test that out. If you think this is the case, and maybe I'm trying to do something that the public topology just isn't intended to do, I'm happy to close this issue. Thank you everyone who has chimed in so far!
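If anyone does try the manual route in the meantime, the private-zone record can be maintained with the AWS CLI, roughly like this (zone ID, record name, and IP are placeholders):

```sh
# UPSERT an A record for the API inside a private hosted zone (placeholders throughout).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z_PRIVATE_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.k8s.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "10.0.1.10"}]
      }
    }]
  }'
```

The obvious downside is that this record has to be re-pointed whenever the master's private IP changes, which is exactly the manual upkeep mentioned above.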
I think we are in the same boat, @jschneiderhan. We are currently looking to deploy using kops, and we require an internal-only deployment, which relies on internal DNS, private subnets, and using existing routes with NAT gateways. From what I can tell, some of the private networking stuff is getting added to the master branch, but I think only part of the solution is there so far. And the DNS management built into the kubernetes/kops deployment gets confused if you have two zones with the same domain. Not sure if there is an option to specify the hosted zone ID instead of the domain name.
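On the duplicate-domain confusion: kops's `--dns-zone` flag should accept a Route53 hosted zone ID as well as a name, which disambiguates two zones with the same domain. Treat the exact behaviour as something to verify; the ID below is a placeholder:

```sh
# Pin the cluster to a specific hosted zone by ID rather than by (ambiguous) name.
kops create cluster \
  --name k8s.example.com \
  --dns-zone Z1234567890ABC \
  --zones us-east-1a
```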
We have a similar need - we want the kops-created subnets to be public, that is, to have an internet gateway, but with API and SSH access restricted to an internal IP range. Everything works great when access is locked down that way. Is there any reason not to do what @jschneiderhan suggested and have an option in the kops cluster yaml that says which endpoint should be used to populate the ~/.kube/config? I'm thinking something like this:
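A hypothetical sketch of the idea; the field name and values below are made up purely for illustration:

```yaml
# Hypothetical -- not an existing kops field; just illustrating the idea.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  # Which endpoint kops should write into ~/.kube/config:
  # "public"   -> https://api.k8s.example.com
  # "internal" -> https://api.internal.k8s.example.com
  kubeconfigEndpoint: internal
```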
This would populate your ~/.kube/config with the private DNS entry. I noticed that there is already an `api` section that allows you to choose between DNS and a load balancer when selecting how your masters are exposed, but this does not seem to address what actually gets populated in the config. We may want kops to create both public and private DNS entries or load balancers, but only choose one of them as the default way to access the cluster.
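For reference, the existing `api` section in the cluster spec looks roughly like this; it controls how the masters are exposed (one of `dns` or `loadBalancer` is set), not which name lands in the kubeconfig. Names below are placeholders:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  api:
    dns: {}               # expose the API via Route53 DNS records
    # or, instead of dns:
    # loadBalancer:
    #   type: Internal    # put the API behind an internal ELB
```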
Just for your info - I'm using Pritunl as a VPN server, and I changed the Pritunl server-side setting so that the VPN client resolves DNS through the VPN (I specified the VPC's internal DNS server there). I believe OpenVPN is more flexible than Pritunl, so there may be some way to resolve the issue by tweaking the DNS server setting. That being said, I also think the improvement mentioned above sounds nice-to-have :)
Same issue here. It looks like the exported config always gets the public DNS name. I've resolved it by updating the `server` in `~/.kube/config` to the internal name.
FWIW we've recently recreated our clusters using a private topology and this is no longer an issue for us. The ~/.kube/config is set up using internal DNS entries (by making use of an AWS private hosted zone) and we updated our OpenVPN clients to route DNS through the VPN connection. Works perfectly.
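For anyone heading the same way, a private-topology cluster with a private hosted zone is created along these lines. Names, zones, and the CNI choice are placeholders; private topology requires a CNI networking option rather than kubenet:

```sh
kops create cluster \
  --name k8s.example.com \
  --state s3://example-kops-state \
  --zones us-east-1a,us-east-1b \
  --topology private \
  --networking weave \
  --dns private \
  --admin-access 172.16.0.0/16
```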
We've played around with both topologies. It's unfortunate how sparse the documentation of them is - we seem to want some features of both types of topologies, and it's quite hard to tell what each provides by default, and what can be overridden. Specifically, we want a mix of the two.
We achieved this by building a new VPC on EC2, with subnets configured manually to accept the traffic we want, routing through a self-managed NAT. If you point kops at the existing VPC, it still wants to manage the subnets and routing itself. The problem with a private topology is that kops wants to create and manage NAT gateways for us.
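A sketch of pointing kops at an existing VPC; IDs and CIDRs are placeholders, and kops still creates its own subnets and routes inside the VPC by default unless the generated cluster spec is edited afterwards, which is part of the friction described above:

```sh
# Reuse an existing VPC rather than letting kops create one (IDs/CIDRs are placeholders).
kops create cluster \
  --name k8s.example.com \
  --state s3://example-kops-state \
  --zones us-east-1a \
  --vpc vpc-0123456789abcdef0 \
  --network-cidr 10.10.0.0/16
```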
@jkinkead if you leave out the flag defined here:

kops/cmd/kops/create_cluster.go, line 109 in 32f0b39
The answer above is spot on. Can we close this issue now, or is there still a feature request here?
@chrislovecnm for me personally this is still a useful feature. I don't mind (and actually prefer) having my nodes public, but I would prefer API access to be as private as possible, which includes the DNS records.
@chrislovecnm I, personally, no longer need this feature as we've moved to using a private topology. Looks like some others still think it would be useful, though.
@jkinkead I know the NAT Gateways are placed in the utility subnets.
Our existing subnets already have NAT gateways set up, so we don't want that. :)
@justinsb is this supported now?
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle rotten

As mentioned above, I would still appreciate having this feature.
/lifecycle frozen
When I created my cluster, I specified the `--admin-access` flag to the CIDR of a management subnet in my VPC. I connect to the management subnet using a VPN server, which gives me network access to the machines in my cluster.

When I run `kops export kubecfg`, it populates my `~/.kube/config` using the public DNS entry for my cluster instead, which isn't reachable from my workstation. If I change the `server` for the cluster in my `~/.kube/config` file to the "internal" DNS name that was created for the cluster, everything works. Unfortunately, every time I update the cluster, the `~/.kube/config` file resets the `server` attribute to the public DNS name.

Would it make sense to have a flag to support using the internal hostname in `~/.kube/config`? If so, I'd be happy to give implementing it a shot, although I may need a few hints on implementation ideas. If not, is there a recommended way to set up kubectl when accessing the cluster over a private network? I'd be happy to submit a PR with some documentation updates if someone wants to help me explain the approach.
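A hypothetical sketch of what the requested flag could look like; the flag name is purely illustrative, not an existing option being documented here:

```sh
# Hypothetical flag -- illustrating the request.
kops export kubecfg k8s.example.com --internal
# would write  server: https://api.internal.k8s.example.com
# instead of   server: https://api.k8s.example.com
```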