
Could I use kube-vip on baremetal k8s to advertise kube api server? #76

Closed
immanuelfodor opened this issue Sep 11, 2020 · 8 comments

@immanuelfodor

For example, if we use MetalLB to create and advertise the LB IP for api server, kubelet cannot talk to the control plane until MetalLB has started and configured the LB IP. But MetalLB cannot start until kubelet can talk to the control plane and discover that it should be running the pod.

This issue is not specifically about MetalLB; I'm just wondering whether kube-vip has the same problem. Could somebody please enlighten me as to whether I could use kube-vip on bare-metal k8s to advertise the kube API server? Then I could use https://$LB_IP:6443 instead of https://$A_NODE_IP:6443 in my kubeconfig file.
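For reference, the only kubeconfig change this would require is the cluster's `server` field; a sketch with a hypothetical VIP of 192.168.0.200 (everything else stays as generated by the cluster):

```yaml
# Hypothetical kubeconfig cluster entry: only the server field changes,
# pointing at the VIP instead of a single node's address.
apiVersion: v1
kind: Config
clusters:
- name: my-rke-cluster
  cluster:
    server: https://192.168.0.200:6443     # $LB_IP / VIP, not $A_NODE_IP
    certificate-authority-data: <base64-CA>  # unchanged from the original file
```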

Related open issues:

@thebsdbox
Collaborator

Hi! kube-vip.io has plenty of documentation about using kube-vip for your control plane.

@immanuelfodor
Author

I read all the pages there before opening this issue, but I didn't find the answer to my question. My particular use case is load balancing an existing cluster's API server. The cluster was built with RKE, and I can't just rebuild it with kubeadm.

I need a solution that can load balance the API server in an existing cluster; that's why I checked the other projects as well, but it seems they haven't solved this use case yet. I don't need the LoadBalancer services for deployments, just the virtual IP management, and I have no SSH access to the nodes, which is why I'm looking for an in-cluster service. I thought kube-vip might solve the floating IP problem without deploying the other two parts, but I'm still looking for an answer as to whether it can load balance the kube API server without circular dependencies.

@thebsdbox
Collaborator

Ah OK, apologies. I wasn't aware this was for an existing cluster. The load balancing itself should be possible, but the SAN for the VIP won't exist in the kube-apiserver certs.
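One way to verify this is to inspect the serving certificate's SANs. A self-contained sketch, where the first command generates a stand-in certificate (on a live cluster you would instead fetch the real one with `openssl s_client -connect <node-ip>:6443 </dev/null | openssl x509`):

```shell
# Stand-in for the kube-apiserver serving cert, with a hypothetical
# VIP (192.168.0.200) baked into the SAN list. Requires OpenSSL >= 1.1.1.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:192.168.0.200,DNS:kubernetes" 2>/dev/null

# List the SANs; the VIP must appear here, or clients connecting to
# https://$VIP:6443 will get TLS verification errors.
openssl x509 -in demo.crt -noout -ext subjectAltName
```

If the VIP is missing from the output, the cert has to be reissued with the extra SAN before the VIP is usable.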

@immanuelfodor
Author

immanuelfodor commented Sep 15, 2020

Well, I could add the IP manually to the SANs list, but I still don't understand how it works for the API server. Do I need all three services, or is kube-vip alone enough?

According to https://kube-vip.io/kubernetes/#deploy-%60kube-vip%60, I could deploy kube-vip and also add this ConfigMap, since kube-vip reads it: https://kube-vip.io/kubernetes/#the-%60plndr-cloud-provider%60-%60configmap%60. But which IP ($VIP) from the two ranges will be assigned to the host interface (ens192 in the examples; eth0 on my VMs), so that I can reach https://$VIP:6443?

data:
  cidr-default: 192.168.0.200/29
  cidr-plunder: 192.168.0.210/29
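For context, my reading of that page is that each `cidr-<namespace>` key is an address pool the plndr-cloud-provider allocates to LoadBalancer Services in the matching namespace, and the control-plane VIP is configured separately rather than drawn from these ranges. A sketch of the full ConfigMap as I understand it (the `plndr` name and `kube-system` namespace should be verified against the docs):

```yaml
# Hypothetical full form of the snippet above. cidr-default serves the
# "default" namespace, cidr-plunder the "plunder" namespace; these are
# Service LB pools, not the control-plane VIP, if my reading is correct.
apiVersion: v1
kind: ConfigMap
metadata:
  name: plndr
  namespace: kube-system
data:
  cidr-default: 192.168.0.200/29
  cidr-plunder: 192.168.0.210/29
```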

@thebsdbox
Collaborator

For the control plane, you only need kube-vip itself, much in the same way as https://kube-vip.io/control-plane/

However, RKE appears not to support static pods, so we'd need to create a DaemonSet with the required tolerations to run on control-plane nodes only.
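The scheduling side of such a DaemonSet is plain Kubernetes. A sketch restricting it to control-plane nodes; the image tag, label name, and container details are illustrative placeholders, not kube-vip's actual manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kube-vip
  template:
    metadata:
      labels:
        app: kube-vip
    spec:
      hostNetwork: true                 # the VIP must live on the node's interface
      nodeSelector:
        # label name varies by distro; RKE labels control-plane nodes
        # node-role.kubernetes.io/controlplane=true
        node-role.kubernetes.io/controlplane: "true"
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule              # only needed if the nodes are tainted
      containers:
      - name: kube-vip
        image: plndr/kube-vip:<version>  # placeholder tag
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]  # needed to assign the VIP and send ARP
```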

@immanuelfodor
Author

I see now where the VIP address comes from:

What I don't get now is how the DaemonSet would work. From the TL;DR example (https://kube-vip.io/control-plane/#load-balancing-a-kubernetes-cluster-(control-plane)) it seems the kubeadm init and kubeadm join commands could be skipped for RKE, but we'd still need the docker run commands to generate the config files. Is that what the DaemonSet would do? Or would I run the containers once beforehand, maybe on my laptop, get the same YAML file, fill in the parameters, put it on the nodes, and then deploy the DaemonSet? All the docker run commands are --rm-ed, so nothing keeps running after the example commands finish, am I right? What would the DaemonSet be running then?

And should the config file have the same contents on all nodes (https://kube-vip.io/control-plane/#modify-the-configuration)? The vip_localpeer value looks suspicious to me: shouldn't it be different on each node? Or is it only for the first bootstrap? And what if the first peer is not available?

Sorry for all the questions, I just really want that VIP on the nodes 😀
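For what it's worth, my reading of the raft-based configuration is that each node's config names itself as the local peer and lists the other control-plane nodes as remote peers, so the file would indeed differ per node. A hypothetical three-node sketch; the field names follow my understanding of the docs of that era and should be treated as illustrative:

```yaml
# Config for node 1 only; on node 2, localPeer would be server2 and
# server1 would move into remotePeers. All addresses are hypothetical.
localPeer:
  id: server1
  address: 192.168.0.11
  port: 10000
remotePeers:
- id: server2
  address: 192.168.0.12
  port: 10000
- id: server3
  address: 192.168.0.13
  port: 10000
vip: 192.168.0.200
interface: eth0
startAsLeader: true   # true on exactly one node, for the initial bootstrap only
```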

@thebsdbox
Collaborator

The daemonset will be a separate operation from the kubeadm steps (I'm implementing it separately at the moment).

@thebsdbox
Collaborator

https://kube-vip.io/control-plane/#k3s <- the same steps should be fine for creating a daemonset!
