Generate a ~/.kube/config file for users? #257
Comments
It's pretty 'simple': once kargo is done you want to grab the certs from the master, and the user can then run:
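A rough sketch of those two steps, assuming the cert layout used elsewhere in this thread (/etc/kubernetes/ssl/admin-<master>.pem, the matching -key.pem, and ca.pem) and the secure API port 6443; MASTER and node1 are placeholders for your master's address and hostname:

# grab the admin certs from the first master
scp root@MASTER:/etc/kubernetes/ssl/ca.pem ./kubectl/
scp root@MASTER:/etc/kubernetes/ssl/admin-node1.pem ./kubectl/
scp root@MASTER:/etc/kubernetes/ssl/admin-node1-key.pem ./kubectl/

# point a local kubectl at the cluster using those certs
kubectl config set-cluster kargo-cluster --server=https://MASTER:6443 \
  --certificate-authority=./kubectl/ca.pem
kubectl config set-credentials kargo-admin \
  --client-certificate=./kubectl/admin-node1.pem \
  --client-key=./kubectl/admin-node1-key.pem
kubectl config set-context kargo --cluster=kargo-cluster --user=kargo-admin
kubectl config use-context kargo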
Would you be interested in me updating my merge request to set up the
I have some thoughts on this. My open contrail playbook request had this, though for Docker I think we have a different problem to think about. I'll try and elaborate later today. Greg
+1. At the very least, @jnoller's comment should be added to the docs.
I had a similar/related request for kargo-cli: kubespray/kubespray-cli#19
Or, at the very least, doing what #534 does once the ansible playbook run is complete would be nice.
I have automated the process with a makefile; it's for a provisioner I made for cloudatcost. I've been meaning to make all this a module, like GCE or AWS, that could be invoked instead, but I haven't gotten around to it and have merely bolted on some extra steps using bash.
Here is an ansible way of fetching and setting up a kubectl config for kargo. It is probably missing a lot of flexibility, and probably should get its own cert....

---
- hosts: kube-master[0]
  gather_facts: no
  become: yes
  tasks:
  - fetch:
      src: "/etc/kubernetes/ssl/{{ item }}.pem"
      dest: "{{ playbook_dir }}/kubectl/{{ item }}.pem"
      flat: True
    with_items:
    - admin-{{ inventory_hostname }}-key
    - admin-{{ inventory_hostname }}
    - ca
  - name: export hostname
    set_fact:
      kubectl_name: "{{ inventory_hostname }}"

- hosts: localhost
  connection: local
  vars:
    kubectl_name: "{{ hostvars[groups['kube-master'][0]].kubectl_name }}"
  tasks:
  - name: check if context exists
    command: kubectl config get-contexts kargo
    register: kctl
    # never abort here; the result only decides whether the context gets created below
    failed_when: false
    changed_when: false
  - block:
    - name: create cluster kargo-cluster
      # you will probably also want --server=https://<master-address>:6443 here
      command: kubectl config set-cluster kargo-cluster --certificate-authority={{ playbook_dir }}/kubectl/ca.pem
    - name: create credentials kargo-admin
      command: kubectl config set-credentials kargo-admin --client-key={{ playbook_dir }}/kubectl/admin-{{ kubectl_name }}-key.pem --client-certificate={{ playbook_dir }}/kubectl/admin-{{ kubectl_name }}.pem
    - name: create context kargo
      command: kubectl config set-context kargo --cluster=kargo-cluster --user=kargo-admin
    when: kctl.rc != 0
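For what it's worth, running it and switching to the new context would look roughly like this, assuming the playbook above is saved as kubectl-config.yml next to your inventory (both names are just placeholders):

ansible-playbook -i inventory/inventory.cfg kubectl-config.yml
kubectl config use-context kargo   # switch to the context the playbook created
kubectl get nodes                  # quick sanity check against the API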
It took me 20 mins to get the vagrant setup going, so I could try out kargo. Imho, a fix for this would really improve the out-of-the-box experience for new users and would help attract even more users to the project. Btw, thanks @jnoller! 👍 (p.s. you might want to remove one of the two
The API moved to port 6443, according to #1083.
I second @gsaslis. At work we run kops and it generates the config, but I wanted something for personal use. Everything went fine until it was time to use kubectl. I have it working now thanks to this post and a guy in the Slack channel, but I still have questions: I have 3 masters, so I have a key set per master; is that normal?
@shadycuz I think it is. Personally, I find this a very time-consuming and frustrating step. Hope this gets some attention soon :)
@jnoller So, I still have a lot of trouble with this step. My small script looks like the following:
The first problem was that I ran into a certificate conflict (I tried to connect to the server by its IP).
"Fixed" this by editing /etc/hosts. Now I get the following error:
Maybe someone has an idea about this. Greetings
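For anyone hitting the same certificate conflict: one way to see which names and IPs the API server certificate is actually valid for is to inspect its SANs, e.g. with openssl (MASTER is a placeholder for the address you connect to):

# print the subject alternative names of the certificate served on the secure port
echo | openssl s_client -connect MASTER:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If the address you connect with is not listed there, kubectl refuses the TLS connection, which is exactly what the /etc/hosts entry works around.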
This took some of my time as well; here is an easy solution (tested with Kubernetes v1.6.7):
apiVersion: v1
clusters:
- cluster:
    # needed if you get error "Unable to connect to the server: x509: certificate signed by unknown authority"
    insecure-skip-tls-verify: true
    # port 6443 is for the secure connection
    server: https://YOUR_MASTER_PUBLIC_IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTi... # base64 /etc/kubernetes/ssl/node-node1.pem
    client-key-data: LS0tLS1CRUdJTiBS.. # base64 /etc/kubernetes/ssl/node-node1-key.pem
Make sure it's running.
If you save the config under another name, you can tell kubectl via an env variable: export KUBECONFIG=/path/to/kubectl_config
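In case it saves someone a step, this is roughly how the two ...-data values above can be produced and the resulting config tested. node-node1 and the config path are just the names used in the snippet, and kubectl get nodes is only one possible check:

# produce the base64 blobs for client-certificate-data / client-key-data (run on the master)
base64 -w0 /etc/kubernetes/ssl/node-node1.pem
base64 -w0 /etc/kubernetes/ssl/node-node1-key.pem

# point kubectl at the file and verify the cluster answers
export KUBECONFIG=/path/to/kubectl_config
kubectl get nodes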
@Paxa That's more of a dirty hack than a real solution 😄. I am still trying to figure out how to get kubectl up and running without using insecure-skip-tls-verify: true. Why are you base64-encoding the certificates? Simply replace client-certificate-data with client-certificate and client-key-data with client-key, and put the full path behind them. That's a cleaner solution; at least that's what I am doing. Greetings
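That path-based variant can also be written by kubectl itself rather than by editing the file; a small sketch, with the local cert locations as placeholders:

# reference the files by path instead of embedding base64 data
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/path/to/admin.pem \
  --client-key=/path/to/admin-key.pem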
@ArgonQQ I prefer to have it all in one file, easier to share with my team. We can use the real certificate instead of insecure-skip-tls-verify: true:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem # copy this file from the server, can be stored anywhere
    server: https://YOUR_MASTER_PUBLIC_IP:6443

In my case, the cloud service gives a public IP different than the IP in
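If the goal is still a single self-contained file, kubectl can also embed the CA data for you instead of running base64 by hand; a sketch, with the path and address as placeholders:

# --embed-certs copies the file contents into the kubeconfig as base64 data
kubectl config set-cluster kubernetes \
  --server=https://YOUR_MASTER_PUBLIC_IP:6443 \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true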
@Paxa That's true, all-in-one has its benefits too 👍 If I copy the ca file from the master and set up the master IP, the only thing I get in response is: Maybe you have an idea what I am doing wrong. Greetings
Same here: if I copy the ca file from the master and set up the master IP, that error is all I get in response as well.
I see that this issue is quite popular and nobody has contributed a working solution yet. I will make a PR with the following approach:
For all those who might be wondering whether this is implemented: #1647
@jnoller I don't have anything at
Do these certificates suit the snippet you posted?
I found this issue while googling. I noticed some certs on one of my masters at /srv/kubernetes/
Just copy
I have a working kubectl setup with vagrant. How do I access the cluster from outside the vagrant host without using the "insecure-skip-tls-verify" hack?
@atrakic I guess it's too late, or you already know this, but here is the information about what you asked. See "Accessing Kubernetes API" in this document: you can copy it just from your inventory directory rather than copying it from the VM. Or you can just use the binary in your inventory directory.
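For anyone reading this after #1647 was merged, that roughly translates to the following, assuming kubeconfig_localhost (and optionally kubectl_localhost) is enabled so the playbook drops the files into your inventory's artifacts directory; exact paths may differ by version:

# use the admin kubeconfig the playbook copied next to your inventory
export KUBECONFIG=<your-inventory-dir>/artifacts/admin.conf
kubectl get nodes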
This is an enhancement request: since the playbooks have all the info needed to generate a kubectl configuration file, it would be a nice-to-have to drop one into the ~/.kube/ dir, either on the nodes or in the directory where the playbooks were run. Remote kubectl access would be the norm in my mind.