
Etcd with tls #389

Closed
devent opened this issue Mar 17, 2017 · 2 comments

Comments

@devent

devent commented Mar 17, 2017

Hello,
How do I use bootkube with an etcd cluster that is secured by TLS and client authentication?
In kube-apiserver.yaml I need to add

        - --etcd-cafile=ca_cert.pem
        - --etcd-certfile=etcd_cert.pem
        - --etcd-keyfile=etcd_key.pem

But kube-apiserver.yaml is generated by bootkube render. How do I add those extra arguments, and how can the DaemonSet find the certificates?

I tried to patch kube-apiserver.yaml as follows, but without success.

apiVersion: "extensions/v1beta1"
kind: DaemonSet
metadata:
  name: kube-apiserver
  namespace: kube-system
  labels:
    k8s-app: kube-apiserver
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-apiserver
      annotations:
        checkpointer.alpha.coreos.com/checkpoint: "true"
    spec:
      nodeSelector:
        master: "true"
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: quay.io/coreos/hyperkube:v1.5.4_coreos.0
        command:
        - /usr/bin/flock
        - --exclusive
        - --timeout=30
        - /var/lock/api-server.lock
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --secure-port=443
        - --insecure-port=8080
        - --advertise-address=$(POD_IP)
        - --etcd-servers=https://xxx:2379
        - --etcd-cafile=/etc/etcd-ssl/ca_cert.pem
        - --etcd-certfile=/etc/etcd-ssl/client_cert.pem
        - --etcd-keyfile=/etc/etcd-ssl/client_key_insecure.pem
        - --storage-backend=etcd3
        - --allow-privileged=true
        - --service-cluster-ip-range=10.3.0.0/24
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
        - --runtime-config=api/all=true
        - --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
        - --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
        - --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
        - --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
        - --client-ca-file=/etc/kubernetes/secrets/ca.crt
        - --authorization-mode=RBAC
        - --cloud-provider=
        - --anonymous-auth=false
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - mountPath: /etc/etcd-ssl
          name: ssl-certs-etcd
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - mountPath: /etc/kubernetes/secrets
          name: secrets
          readOnly: true
        - mountPath: /var/lock
          name: var-lock
          readOnly: false
      volumes:
      - name: ssl-certs-etcd
        hostPath:
          path: /etc/ssl/certs/etcd
      - name: ssl-certs-host
        hostPath:
          path: /usr/share/ca-certificates
      - name: secrets
        secret:
          secretName: kube-apiserver
      - name: var-lock
        hostPath:
          path: /var/lock
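
(For anyone reproducing this: a quick way to confirm that the etcd TLS endpoint and these certificate files are usable at all is to query etcd's health endpoint from the master with the same files the pod mounts. This is only a sketch; "xxx" is the etcd host placeholder from the manifest above, and the file names are assumed to match the hostPath /etc/ssl/certs/etcd.)

# Sketch: test the etcd TLS endpoint from the master node with curl,
# using the same CA and client certificate files the DaemonSet mounts.
curl --cacert /etc/ssl/certs/etcd/ca_cert.pem \
     --cert /etc/ssl/certs/etcd/client_cert.pem \
     --key /etc/ssl/certs/etcd/client_key_insecure.pem \
     https://xxx:2379/health
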
@devent
Author

devent commented Mar 20, 2017

Edit: I had a mistake in etcd-client.pem and have since fixed it, but the error still persists.

Hello again,
I have now tried adding the etcd certificates to kube-apiserver-secret.yaml, but the result is the same; please see the bootkube logs below. The certificate data was created with the following command.

cat etcd/ca_cert.pem | base64 | tr -d '\n'
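
(Side note: on GNU coreutils, base64 -w0 produces the same unwrapped output in one step; this is only an equivalent form, not the command used above.)

# Sketch: equivalent single-line base64 encoding on GNU coreutils.
base64 -w0 etcd/ca_cert.pem
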
[ 3964.131983] bootkube[5]: E0320 10:20:02.447691       5 create.go:35] Error creating assets: [error when creating "/core/assets/manifests/kube-apiserver-secret.yaml": secrets "kube-apiserver" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-apiserver.yaml": daemonsets.extensions "kube-apiserver" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager-disruption.yaml": poddisruptionbudgets.policy "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager-secret.yaml": secrets "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager.yaml": deployments.extensions "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-dns-deployment.yaml": deployments.extensions "kube-dns" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-dns-svc.yaml": services "kube-dns" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-flannel-cfg.yaml": configmaps "kube-flannel-cfg" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-flannel.yaml": daemonsets.extensions "kube-flannel" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-proxy.yaml": daemonsets.extensions "kube-proxy" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-scheduler-disruption.yaml": poddisruptionbudgets.policy "kube-scheduler" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-scheduler.yaml": deployments.extensions "kube-scheduler" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-system-rbac-role-binding.yaml": Timeout: request did not complete withi
[ 3964.147177] bootkube[5]: n allowed duration, error when creating "/core/assets/manifests/pod-checkpoint-installer.yaml": daemonsets.extensions "checkpoint-installer" is forbidden: not yet ready to handle request]
[ 3964.152845] bootkube[5]: Error creating assets: [error when creating "/core/assets/manifests/kube-apiserver-secret.yaml": secrets "kube-apiserver" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-apiserver.yaml": daemonsets.extensions "kube-apiserver" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager-disruption.yaml": poddisruptionbudgets.policy "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager-secret.yaml": secrets "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-controller-manager.yaml": deployments.extensions "kube-controller-manager" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-dns-deployment.yaml": deployments.extensions "kube-dns" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-dns-svc.yaml": services "kube-dns" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-flannel-cfg.yaml": configmaps "kube-flannel-cfg" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-flannel.yaml": daemonsets.extensions "kube-flannel" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-proxy.yaml": daemonsets.extensions "kube-proxy" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-scheduler-disruption.yaml": poddisruptionbudgets.policy "kube-scheduler" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-scheduler.yaml": deployments.extensions "kube-scheduler" is forbidden: not yet ready to handle request, error when creating "/core/assets/manifests/kube-system-rbac-role-binding.yaml": Timeout: request did not complete within allowed duration, error when creating "/co
[ 3964.162104] bootkube[5]: re/assets/manifests/pod-checkpoint-installer.yaml": daemonsets.extensions "checkpoint-installer" is forbidden: not yet ready to handle request]
[ 3964.165903] bootkube[5]: NOTE: Bootkube failed to create some cluster assets. It is important that manifest errors are resolved and resubmitted to the apiserver.
[ 3964.172094] bootkube[5]: For example, after resolving issues: kubectl create -f <failed-manifest>
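
(The repeated "forbidden: not yet ready to handle request" errors suggest that the apiserver bootkube submits these manifests to never finished initializing, which would be consistent with it failing to reach etcd over TLS. A quick check while bootkube is still running, assuming the apiserver also serves the insecure port 8080 on localhost as in the manifest below, would be:)

# Sketch: check whether the apiserver answers at all on its insecure port.
curl http://127.0.0.1:8080/healthz
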

I will post kube-apiserver-secret.yaml and kube-apiserver.yaml below.

kube-apiserver-secret.yaml

core@conf_vm03 ~ $ cat assets/manifests/kube-apiserver-secret.yaml 
apiVersion: v1
data:
  etcd-ca.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUdNekNDQkJ1Z0F3S...9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  apiserver.crt: LS0tLS1CRUd...tRU5EIENFUlRJRklDQVRFLS0tLS0K
  apiserver.key: LS0tLS1CRUdJTi...U5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
  ca.crt: LS0tLS1CRUdJTi...0VSVElGSUNBVEUtLS0tLQo=
  etcd-client.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUd...9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  etcd-key.pem: LS0tLS1CR...QRlhtc09iU1NmZmJpcWxGN1l4bFB1ay8rVU1GYXBLSHZnMXpYZz09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==  
  service-account.pub: LS0tLS1CRUdJTiBQVU...TElDIEtFWS0tLS0tCg==
kind: Secret
metadata:
  name: kube-apiserver
  namespace: kube-system
type: Opaque

kube-apiserver.yaml

core@conf_vm03 ~ $ cat assets/manifests/kube-apiserver.yaml        
apiVersion: "extensions/v1beta1"
kind: DaemonSet
metadata:
  name: kube-apiserver
  namespace: kube-system
  labels:
    k8s-app: kube-apiserver
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-apiserver
      annotations:
        checkpointer.alpha.coreos.com/checkpoint: "true"
    spec:
      nodeSelector:
        master: "true"
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: quay.io/coreos/hyperkube:v1.5.4_coreos.0
        command:
        - /usr/bin/flock
        - --exclusive
        - --timeout=30
        - /var/lock/api-server.lock
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --secure-port=443
        - --insecure-port=8080
        - --advertise-address=$(POD_IP)
        - --etcd-servers=https://192.168.56.110:2379,https://192.168.56.111:2379
        - --etcd-cafile=/etc/kubernetes/secrets/etcd-ca.pem
        - --etcd-certfile=/etc/kubernetes/secrets/etcd-client.pem
        - --etcd-keyfile=/etc/kubernetes/secrets/etcd-key.pem
        - --storage-backend=etcd3
        - --allow-privileged=true
        - --service-cluster-ip-range=10.3.0.0/24
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
        - --runtime-config=api/all=true
        - --tls-cert-file=/etc/kubernetes/secrets/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/secrets/apiserver.key
        - --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
        - --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
        - --service-account-key-file=/etc/kubernetes/secrets/service-account.pub
        - --client-ca-file=/etc/kubernetes/secrets/ca.crt
        - --authorization-mode=RBAC
        - --cloud-provider=
        - --anonymous-auth=false
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
        - mountPath: /etc/kubernetes/secrets
          name: secrets
          readOnly: true
        - mountPath: /var/lock
          name: var-lock
          readOnly: false
      volumes:
      - name: ssl-certs-host
        hostPath:
          path: /usr/share/ca-certificates
      - name: secrets
        secret:
          secretName: kube-apiserver
      - name: var-lock
        hostPath:
          path: /var/lock
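
(Since a bad etcd-client.pem was already an issue once, one way to double-check that the base64 data in kube-apiserver-secret.yaml still decodes to a valid certificate is to decode an entry and feed it to openssl. This is only a sketch; adjust the key name for each entry you want to inspect.)

# Sketch: decode one entry of the secret and print the certificate subject and validity dates.
grep 'etcd-client.pem:' assets/manifests/kube-apiserver-secret.yaml \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -subject -dates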

@devent
Author

devent commented Mar 27, 2017

I was hoping to get some feedback here. I tested this setup with both bootkube v0.3.10 and v0.3.11, and both failed with the same error. The SSL certificates themselves work fine if I first bootstrap the cluster with etcd at http://127.0.0.1:2379 and then update the apiserver's DaemonSet and Secret to switch to https://192.168.56.110:2379,https://192.168.56.111:2379.
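
(For reference, that switch on the running cluster amounts to updating the apiserver's Secret and DaemonSet; roughly, and only as a sketch assuming the asset paths above, since the exact commands were not recorded here:)

# Sketch: push the TLS-enabled secret and manifest to the already-running cluster.
kubectl --namespace kube-system apply -f assets/manifests/kube-apiserver-secret.yaml
kubectl --namespace kube-system apply -f assets/manifests/kube-apiserver.yaml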
