UX regarding DNS pointing to kube api node #257
Comments
I add both the IP address and the FQDN to my kubeconfig, commenting out the FQDN until dig reports the record is updated. Then I go back into kubeconfig and swap the lines.
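That workaround might look like this in kubeconfig terms (cluster name, CA path, and addresses below are hypothetical): keep both `server:` lines under the cluster entry and flip the comments once `dig` shows the record has propagated.

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: credentials/ca.pem
    # Use the raw EIP until the Route53 record propagates...
    server: https://203.0.113.10
    # ...then swap the comment over to the FQDN:
    # server: https://kube.example.com
  name: kube-aws-cluster
```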
@mgoodness That doesn't sound much like automation?
Regardless, that's my workaround. With everything kube-aws does, commenting and uncommenting a couple of lines when necessary doesn't strike me as particularly onerous.
The kubeconfig can use the IP, but you will likely need to add it to the certificate's SANs. The issue is that we don't have a public IP for the master node at launch time (when we generate the TLS assets). We can, however, sign the certs with an expected DNS name. During testing I usually just go the /etc/hosts route and don't bother setting up an actual DNS record.
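The /etc/hosts route can be scripted so repeated runs don't stack up duplicate entries. A minimal sketch (the IP, hostname, and the helper name `add_host_entry` are assumptions for illustration; the file argument is overridable so you can try it without root):

```shell
#!/bin/sh
# add_host_entry IP NAME [FILE]
# Appends "IP NAME" to FILE (default /etc/hosts) unless NAME is already mapped,
# so repeated deploy loops don't accumulate duplicate lines.
add_host_entry() {
  ip=$1; name=$2; file=${3:-/etc/hosts}
  grep -q "[[:space:]]$name\$" "$file" 2>/dev/null || echo "$ip $name" >> "$file"
}

# Hypothetical usage (requires root when writing the real /etc/hosts):
# add_host_entry 203.0.113.10 kube.example.com
```

Note this only helps inside a single container or host; as mentioned below, it is not safe when multiple runs target the same hosts file.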
@aaronlevy Thanks for the explanation. We use kube-aws as part of our tooling for production deployment (a multi-cluster, multi-tenancy, on-demand setup), so anything manual needs to be removed. The workaround of adding a line to /etc/hosts does work, and it's what I'm doing: I run kube-aws inside a container, and each run of the container deploys one cluster, so it's fine to modify that container's hosts file. It wouldn't be wise, though, if multiple runs targeted the same hosts file. Since the kube API IP is an AWS EIP, users might already own an EIP or want to designate a fixed value for business reasons. Maybe we could allow configuring this in the yaml, so the IP can be included in the cert signing? Or separate EIP creation from the rest? My comments are more about the flow, which is why I used the word UX. :)
Would it be fair to describe this as a feature request for providing custom TLS assets? We've talked about this; we make a lot of assumptions currently and can't cover all use cases. The workflow would essentially allow you to provide custom certificates (with any additional IPs, for example), which would then be used in the deployment process. Initially this would likely point to docs similar to these: https://coreos.com/kubernetes/docs/latest/openssl.html#kubernetes-api-server-keypair You would then just need to place those assets in a known location during deployment.
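Following the approach in the linked doc, a hedged sketch of generating an API server keypair whose SANs include extra IPs (all names, IPs, and file paths here are placeholders; the final signing step assumes you have the cluster CA's ca.pem/ca-key.pem at hand, so it is left commented out):

```shell
#!/bin/sh
# Hypothetical SAN config: substitute your externalDNSName, service IP, and EIP.
cat > openssl-san.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kube.example.com
IP.1 = 10.3.0.1
IP.2 = 203.0.113.10
EOF

# Generate the key and a CSR that carries the SANs above.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl-san.cnf

# Sign with your cluster CA (ca.pem / ca-key.pem from kube-aws or your own):
# openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
#   -CAcreateserial -out apiserver.pem -days 365 \
#   -extensions v3_req -extfile openssl-san.cnf
```

The resulting apiserver.pem would then be dropped into the known asset location during deployment in place of the generated one.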
This is part of the list for #340 |
The Route53 step should be optional. These are the modes we should support:
Currently it's on the user to create a record, via Route53 or otherwise, in order to make the controller IP accessible via externalDNSName. This commit adds an option to automatically create a Route53 record in a given hosted zone. Related to: coreos#340, coreos#257
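A sketch of what that automatic record creation could look like with the AWS CLI's `change-resource-record-sets` call (the zone ID, name, and IP are placeholders; `UPSERT` makes the operation idempotent across rapid test loops). The actual call is commented out so the snippet is safe to dry-run without AWS credentials:

```shell
#!/bin/sh
# Hypothetical values; HOSTED_ZONE_ID and the name must match your Route53 setup.
HOSTED_ZONE_ID="Z1EXAMPLE"
EXTERNAL_DNS_NAME="kube.example.com"
CONTROLLER_IP="203.0.113.10"

# UPSERT creates the A record, or updates it in place on repeated runs.
cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "$EXTERNAL_DNS_NAME",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "$CONTROLLER_IP"}]
    }
  }]
}
EOF

# Requires AWS credentials and route53 permissions:
# aws route53 change-resource-record-sets \
#   --hosted-zone-id "$HOSTED_ZONE_ID" --change-batch file://change-batch.json
```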
I'm going to close this as it seems this work is captured by #340 -- please re-open if necessary |
Dear All,
Currently, after I finish kube-aws up, I set an A record in AWS Route53 for the domain name so that I can then use kubectl. The problem is that Route53 seems to need some time to settle on the new IP value: the name either can't be resolved or points to previous values (I'm doing rapid testing loops and not always cleaning up the Route53 record sets).
That means I can't immediately start firing my batch of kubectl commands, since they would just error out if DNS can't resolve or resolves to an old value (which is more my fault).
So I wonder: should the kubeconfig just use the new IP address rather than going through DNS?
My temporary fix is to add a line to /etc/hosts, bypassing DNS altogether. Comments?
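One way to avoid firing kubectl too early without touching /etc/hosts is to poll until the record settles on the expected value. A sketch (the `wait_for_dns` helper and the `RESOLVE_CMD` override are inventions for illustration; by default it shells out to `dig +short`):

```shell
#!/bin/sh
# wait_for_dns NAME EXPECTED_IP [TIMEOUT_SECONDS]
# Polls DNS until NAME resolves to exactly EXPECTED_IP, so kubectl isn't run
# against an unresolvable name or a stale record from an earlier test loop.
# Returns 0 once the record matches, 1 if the timeout elapses first.
wait_for_dns() {
  name=$1; expected=$2; timeout=${3:-120}; waited=0
  resolve=${RESOLVE_CMD:-"dig +short"}   # overridable for testing
  while [ "$waited" -lt "$timeout" ]; do
    $resolve "$name" | grep -qx "$expected" && return 0
    sleep 2; waited=$((waited + 2))
  done
  return 1
}

# Hypothetical usage after kube-aws up:
# wait_for_dns kube.example.com 203.0.113.10 300 && kubectl get nodes
```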