
Generate a ~/.kube/config file for users? #257

Closed

jnoller opened this issue May 20, 2016 · 28 comments

@jnoller

jnoller commented May 20, 2016

This is an enhancement request: since the playbooks have all the info needed to generate a kubectl configuration file, it would be a nice-to-have to drop one into the ~/.kube/ dir, either on the nodes or in the directory where the playbooks were run. Remote kubectl access would be the norm in my mind.

@jnoller
Author

jnoller commented May 20, 2016

It's pretty 'simple': once kargo is done, you want to grab these from the master:

  • /etc/kubernetes/ssl/admin-key.pem
  • /etc/kubernetes/ssl/ca.pem
  • /etc/kubernetes/ssl/admin.pem

The user can then run:

kubectl config set-cluster default-cluster --server=https://${MASTER} \
    --certificate-authority=/path/to/ca.pem 

kubectl config set-credentials default-admin \
    --certificate-authority=/path/to/ca.pem \
    --client-key=/path/to/admin-key.pem \
    --client-certificate=/path/to/admin.pem      

kubectl config set-credentials default-admin \
    --certificate-authority=/path/to/ca.pem \
    --client-key=/path/to/admin-key.pem \
    --client-certificate=/path/to/admin.pem 

kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system

@Smana
Contributor

Smana commented May 21, 2016

Yes, good idea. Maybe a role named postinstall, along with the insecure_registry needed by @rsmitty in #256.

@rsmitty
Contributor

rsmitty commented May 21, 2016

Would you be interested in me updating my merge request to set up the postinstall role to come at the very end? It would at least create the structure for this to be added.
@galthaus
Contributor

I have some thoughts on this. My OpenContrail playbook request had this because OpenContrail needs extra stuff at the end.

Though for Docker I think we have a different problem to think about.

I'll try to elaborate later today.

Greg


@joshuacox

+1. At the very least, @jnoller's comment should be added to the docs.

@jcsirot
Contributor

jcsirot commented Jun 3, 2016

I had a similar/related request for kargo-cli: kubespray/kubespray-cli#19

@rsmitty
Contributor

rsmitty commented Jun 6, 2016

I've added pull #288 to address grabbing the certs, but we'll also want to update the docs to show @jnoller's commands with the output/certs/ directory.
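For example, the docs could show something like this (a sketch assuming the fetched certs land in an output/certs/ directory next to the playbooks; adjust paths and the master address to your setup):

kubectl config set-cluster default-cluster --server=https://${MASTER} \
    --certificate-authority=output/certs/ca.pem

kubectl config set-credentials default-admin \
    --certificate-authority=output/certs/ca.pem \
    --client-key=output/certs/admin-key.pem \
    --client-certificate=output/certs/admin.pem

kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system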

@sc68cal
Contributor

sc68cal commented Oct 27, 2016

Or, at the very least, doing what #534 proposes when the ansible playbook run is complete would be nice.

@joshuacox

I have automated the process with a makefile. It's for a provisioner I made for CloudAtCost; I've been meaning to turn all this into a module like GCE or AWS that could be invoked instead, but I haven't gotten around to it, and have merely bolted on some extra steps using bash.

It looks like this:

kargoConfig:
    $(eval TMP := $(shell mktemp -d --suffix=DOCKERTMP))
    $(eval SSH_PORT := $(shell cat SSH_PORT))
    $(eval PWD := $(shell pwd))
    head -n1 workingList > $(TMP)/masterList
    echo  '#!/bin/bash' > $(TMP)/mkargo.sh
    echo 'export ANSIBLE_SCP_IF_SSH=y'>> $(TMP)/mkargo.sh
    while read SID HOSTNAME NAME IP ROOTPASSWORD ID; \
        do \
        mkdir -p certs/$$NAME ; \
        echo "scp -P $(SSH_PORT) root@$$IP:/etc/kubernetes/ssl/admin-key.pem certs/$$NAME/" >> $(TMP)/mkargo.sh ; \
        echo "scp -P $(SSH_PORT) root@$$IP:/etc/kubernetes/ssl/ca.pem certs/$$NAME/" >> $(TMP)/mkargo.sh ; \
        echo "scp -P $(SSH_PORT) root@$$IP:/etc/kubernetes/ssl/admin.pem certs/$$NAME/" >> $(TMP)/mkargo.sh ; \
        echo -n "kubectl config set-cluster default-cluster " >> $(TMP)/mkargo.sh ; \
        echo -n " --kubeconfig=/root/.kube/config " >> $(TMP)/mkargo.sh ; \
        echo -n " --embed-certs=true  " >> $(TMP)/mkargo.sh ; \
        echo -n " --server=https://$$IP " >> $(TMP)/mkargo.sh ; \
        echo " --certificate-authority=$(PWD)/certs/$$NAME/ca.pem " >> $(TMP)/mkargo.sh ; \
        echo -n "kubectl config set-credentials default-admin " >> $(TMP)/mkargo.sh ; \
        echo -n " --kubeconfig=/root/.kube/config " >> $(TMP)/mkargo.sh ; \
        echo -n " --embed-certs=true " >> $(TMP)/mkargo.sh ; \
        echo -n " --certificate-authority=$(PWD)/certs/$$NAME/ca.pem " >> $(TMP)/mkargo.sh ; \
        echo -n " --client-key=$(PWD)/certs/$$NAME/admin-key.pem " >> $(TMP)/mkargo.sh ; \
        echo " --client-certificate=$(PWD)/certs/$$NAME/admin.pem " >> $(TMP)/mkargo.sh ; \
        echo -n "kubectl config set-context default-system " >> $(TMP)/mkargo.sh ; \
        echo -n " --kubeconfig=/root/.kube/config " >> $(TMP)/mkargo.sh ; \
        echo " --cluster=default-cluster --user=default-admin " >> $(TMP)/mkargo.sh ; \
        echo "kubectl config use-context default-system " >> $(TMP)/mkargo.sh ; \
        done < $(TMP)/masterList
    @bash $(TMP)/mkargo.sh
    @rm -Rf $(TMP)

https://github.com/joshuacox/mkcloudatcost
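For reference, the target is invoked as make kargoConfig and expects that provisioner's workingList and SSH_PORT files to be present in the working directory.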

@bogdando bogdando added this to the v2.2.0 milestone Jan 10, 2017
@adamcstephens

adamcstephens commented Feb 25, 2017

Here is an ansible way of fetching and setting up a kubectl config for kargo. It is probably missing a lot of flexibility, and probably should get its own cert....

---
- hosts: kube-master[0]
  gather_facts: no
  become: yes
  tasks:
    - fetch:
        src: "/etc/kubernetes/ssl/{{ item }}.pem"
        dest: "{{ playbook_dir }}/kubectl/{{ item }}.pem"
        flat: True
      with_items:
        - admin-{{ inventory_hostname }}-key
        - admin-{{ inventory_hostname }}
        - ca
    - name: export hostname
      set_fact:
        kubectl_name: "{{ inventory_hostname }}"

- hosts: localhost
  connection: local
  vars:
    kubectl_name: "{{ hostvars[groups['kube-master'][0]].kubectl_name }}"
  tasks:
    - name: check if context exists
      command: kubectl config get-contexts kargo
      register: kctl
      failed_when: kctl.rc == 0
    - block:
      - name: create cluster kargo
        # the cluster name here must match --cluster in the set-context task below;
        # a --server=https://<master>:<port> flag would likely also be needed
        command: kubectl config set-cluster kargo-cluster --certificate-authority={{ playbook_dir }}/kubectl/ca.pem
      - name: create credentials kargo-admin
        command: kubectl config set-credentials kargo-admin --certificate-authority={{ playbook_dir }}/kubectl/ca.pem --client-key={{ playbook_dir }}/kubectl/admin-{{ kubectl_name }}-key.pem --client-certificate={{ playbook_dir }}/kubectl/admin-{{ kubectl_name }}.pem
      - name: create context kargo
        command: kubectl config set-context kargo --cluster=kargo-cluster --user=kargo-admin
      when: kctl.rc != 0
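Assuming this is saved as something like fetch-kubeconfig.yml (the file name and inventory path here are just examples), it would be run against the same inventory as kargo itself:

ansible-playbook -i inventory/inventory.cfg fetch-kubeconfig.yml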

@gsaslis
Contributor

gsaslis commented Jun 12, 2017

It took me 20 mins to get the vagrant setup going so I could try out kargo.
It took me over 2 hours to work out how to configure kubectl to connect to the vagrant cluster from the host.

Imho, a fix to this would really improve the out-of-the-box experience for new users and would help attract even more users to the project.

Btw, thanks @jnoller ! 👍 (p.s. you might want to remove one of the two set-credentials commands)

@delfer
Contributor

delfer commented Jul 9, 2017

The API moved to port 6443, according to #1083.

@shadycuz

I second @gsaslis. At work we run kops and it generates the config, but I wanted something for personal use. Everything went fine until it was time to use kubectl. I have it working now thanks to this post and a guy in the Slack channel, but I still have questions. I have 3 masters, so I have a key set per master; is that normal?

@ArgonQQ

ArgonQQ commented Jul 18, 2017

@shadycuz I think it is. Personally, this is really a very time-consuming and frustrating step. Hope this gets some attention soon :)

@ArgonQQ

ArgonQQ commented Jul 18, 2017

@jnoller So. I still have a lot of trouble with the kubectl config.

My small script looks like the following:

export MASTER_HOST='kube-admin-node1'
export CA_CERT='/path/to/admin-node1/ca.pem'
export ADMIN_KEY='/path/to/admin-node1-key.pem'
export ADMIN_CERT='/path/to/admin-node1.pem'

kubectl config set-cluster default-cluster --server=https://${MASTER_HOST}:6443 \
    --certificate-authority=${CA_CERT}

kubectl config set-credentials default-admin \
    --certificate-authority=${CA_CERT} \
    --client-key=${ADMIN_KEY} \
    --client-certificate=${ADMIN_CERT}      

kubectl config set-context default-system --cluster=default-cluster --user=default-admin
kubectl config use-context default-system

The first problem was that I ran into a certificate conflict (I tried to connect to the server by IP):

# kubectl get node 
Unable to connect to the server: x509: cannot validate certificate for XX.XX.XX.XX because it doesn't contain any IP SANs

"Fixed" this with editing the /etc/hosts to
XX.XX.XX.XX kube-admin-node1

Now I get the following error:

# kubectl get node 
Unable to connect to the server: x509: certificate signed by unknown authority
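One thing that might help narrow this down is checking whether the API server's certificate actually verifies against the ca.pem that was copied (a sketch; the host and CA path are the ones from the script above):

openssl s_client -connect kube-admin-node1:6443 -CAfile /path/to/admin-node1/ca.pem </dev/null | grep 'Verify return code'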

Maybe someone has an idea for this.

Greetings
~ ArgonQQ

@Paxa

Paxa commented Jul 25, 2017

This took some of my time as well; here is an easy solution (tested with Kubernetes v1.6.7):

  1. Make sure your server is reachable on port 6443 (CentOS may have a firewall):

$ curl -k https://YOUR_MASTER_PUBLIC_IP:6443
Unauthorized

  2. Create ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    # needed if you get error "Unable to connect to the server: x509: certificate signed by unknown authority"
    insecure-skip-tls-verify: true
    # port 6443 is for secure connection
    server: https://YOUR_MASTER_PUBLIC_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTi... (base64 /etc/kubernetes/ssl/node-node1.pem)
    client-key-data: LS0tLS1CRUdJTiBS.. (base64 /etc/kubernetes/ssl/node-node1-key.pem)
  3. Inline certificates:

Set client-certificate-data with the output of:

cat /etc/kubernetes/ssl/node-node1.pem | base64 -w 0

Set client-key-data with the output of:

cat /etc/kubernetes/ssl/node-node1-key.pem | base64 -w 0

Make sure it's running:

$ kubectl get nodes
NAME      STATUS    AGE       VERSION
my-kube   Ready     2h        v1.6.7+coreos.0

If you save the config under a different name, you can point kubectl at it via an environment variable:

export KUBECONFIG=/path/to/kubectl_config

@ArgonQQ

ArgonQQ commented Jul 25, 2017

@Paxa That's more of a dirty hack than a real solution 😄 . I am still trying to figure out how to get kubectl up and running without using insecure-skip-tls-verify: true.

Why are you trying to base64 the certificates?

Simply replace client-certificate-data with client-certificate and client-key-data with client-key, followed by the full path to each file. That's a cleaner solution; at least that's what I am doing.
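For example, the users section would then look roughly like this (a sketch; the file paths are illustrative):

users:
- name: kubernetes-admin
  user:
    client-certificate: /path/to/node-node1.pem
    client-key: /path/to/node-node1-key.pem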

Greetings
~ ArgonQQ

@Paxa

Paxa commented Jul 25, 2017

@ArgonQQ I prefer to have it all in one file; it's easier to share with my team.

We can use the real certificate instead of insecure-skip-tls-verify:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem # copy this file from server, can be stored anywhere
    server: https://YOUR_MASTER_PUBLIC_IP:6443

In my case, the cloud service gives a public IP different from the IP shown in ifconfig, so I need to specify the public IP at setup time, and I always forget :(

@ArgonQQ

ArgonQQ commented Jul 26, 2017

@Paxa That's true. All in one has its benefits too 👍

If I copy the ca file from the master and set up the master IP, the only thing I get in response is:
Unable to connect to the server: x509: certificate signed by unknown authority

Maybe you have any idea what I am doing wrong.

Greetings
~ ArgonQQ

@xech3l0nx

Same here

@mattymo mattymo self-assigned this Sep 9, 2017
@mattymo
Contributor

mattymo commented Sep 9, 2017

I see that this issue is quite popular and nobody has contributed a working solution yet. I will make a PR with the following approach:

  1. Create /etc/kubernetes/admin-kubeconfig.yaml, which points to the first master (or the loadbalancer, if defined), with all the cert content baked in as base64.
  2. Copy the above file to /root/.kube/config.
  (The above two will be enabled by default.)
  3. Create an optional play in the playbook to download kubectl and copy the kubeconfig to the ansible host which is running the playbook.
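A minimal sketch of what that optional play might look like (the destination path and play layout are assumptions, not the final PR):

- hosts: kube-master[0]
  become: yes
  tasks:
    - name: fetch the generated admin kubeconfig to the ansible host
      fetch:
        src: /etc/kubernetes/admin-kubeconfig.yaml
        dest: "{{ playbook_dir }}/admin-kubeconfig.yaml"
        flat: yes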

@Starefossen
Contributor

@mattymo #1247

@mattymo mattymo closed this as completed Oct 18, 2017
@shadycuz

For all those who might be wondering if this is implemented: #1647

@DanielRamosAcosta

@jnoller I don't have anything at /etc/kubernetes/ssl, but there are some certs in /etc/kubernetes/pki:

apiserver.crt
apiserver.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
ca.crt
ca.key
front-proxy-ca.crt
front-proxy-ca.key
front-proxy-client.crt
front-proxy-client.key
sa.key
sa.pub

Do these certificates suit the snippet you posted?

@chrisevett

chrisevett commented Mar 28, 2018

I found this issue while googling. I noticed some certs on one of my masters at /srv/kubernetes/:

apiserver-aggregator-ca.cert  
apiserver-aggregator.cert  
apiserver-aggregator.key  
assets  
basic_auth.csv  
ca.crt  
ca.key  
known_tokens.csv  
proxy-client.cert  
proxy-client.key  
server.cert  
server.key

@perklet

perklet commented Oct 4, 2018

Just copying /etc/kubernetes/admin.conf from one of the master nodes to ~/.kube/config works for me.
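That copy would look something like this (a sketch assuming root SSH access to a master node; the hostname is illustrative):

scp root@master-1:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes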

@atrakic

atrakic commented Mar 5, 2019

I have a working kubectl setup with Vagrant.
On the Vagrant host I can use kubectl (after copying the config with: vagrant ssh k8s-1 -c "cat ~/.kube/config" >kubespray && export KUBECONFIG=kubespray).

How do I access the cluster from outside the Vagrant host, but without using the "insecure-skip-tls-verify" hack?

@anencore94

@atrakic I guess it's too late, or you already know this, but here is the information about what you asked.

See "Accessing Kubernetes API" in this document:
kubespray guide

You can copy the config straight from your inventory directory rather than copying it from the VM, or you can just use the kubectl binary in your inventory directory.
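Something along these lines (a sketch assuming kubeconfig_localhost: true was set, so the admin config lands under the inventory's artifacts/ directory; the exact variable and path may differ by version):

export KUBECONFIG=$(pwd)/inventory/mycluster/artifacts/admin.conf
kubectl get nodes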
