`k8s-cloud-provider-bmc` is the Kubernetes CCM implementation for PhoenixNAP. Read more about the CCM in the official Kubernetes documentation.
This repository is Maintained!
At the current state of Kubernetes, running the CCM requires a few things. Please read through the requirements carefully as they are critical to running the CCM on a Kubernetes cluster.
Recommended versions of PhoenixNAP CCM based on your Kubernetes version:
- PhoenixNAP CCM version v1.0.0+ supports Kubernetes version >=1.20.0
If you plan on using services of `type=LoadBalancer`, then you have several prerequisites:
- PhoenixNAP public network on which all nodes are connected.
- VLAN-specific interface on each node.
- Software that can announce the IP address of the load balancer to the upstream switch via ARP.
Every PhoenixNAP server deployed includes one private network and, optionally, one public network. The private network links all of your servers, while the public network is used to connect your servers to the Internet. The public network sits on a VLAN which is connected only to your server and the upstream switch.
To route a newly assigned IP to any one of your servers, you need a new public network. Ensure that all of the servers to which you want to route traffic are connected to a dedicated public network, separate from the one that came with your server by default.
Read this KnowledgeBase article on public networks.
In order for each server to be able to handle traffic from the dedicated public network, it needs a virtual interface, on top of its default physical interface, with the correct VLAN.
Read this KnowledgeBase article on configuring a VLAN-specific interface for your public network.
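As an illustration, on hosts managed with netplan the VLAN-specific interface can be declared declaratively. This is a sketch only: the parent interface name `bond0` and VLAN ID `100` are placeholder assumptions; substitute the values for your server and public network.

```yaml
# Illustrative netplan fragment: a VLAN sub-interface on top of the default
# physical interface. "bond0" and VLAN ID 100 are assumptions; use the values
# for your own server and public network.
network:
  version: 2
  vlans:
    bond0.100:
      id: 100
      link: bond0
      dhcp4: false
```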
As all of the networking is in a VLAN, i.e. layer 2, load-balancing software must support announcing IP addresses via Layer 2 ARP.
As of this writing, the supported load-balancer software is kube-vip.
In the future, this CCM may support other ARP-based load-balancer software, such as metallb. It may also support BGP, if and when PhoenixNAP BGP support is in place.
TL;DR
- Set Kubernetes binary arguments correctly
- Get your PhoenixNAP client ID and client secret
- Deploy your PhoenixNAP client ID and client secret to your cluster in a secret
- Deploy the CCM
- Deploy the load balancer (optional)
Control plane binaries in your cluster must start with the correct flags:

- `kubelet`: All kubelets in your cluster MUST set the flag `--cloud-provider=external`. This must be done for every kubelet. Note that k3s sets its own CCM by default. If you want to use the CCM with k3s, you must disable the k3s CCM and enable this one, as `--disable-cloud-controller --kubelet-arg cloud-provider=external`.
- `kube-apiserver` and `kube-controller-manager` must NOT set the flag `--cloud-provider`. They then will use no cloud provider natively, leaving room for the PhoenixNAP CCM.

WARNING: setting the kubelet flag `--cloud-provider=external` will taint all nodes in a cluster with `node.cloudprovider.kubernetes.io/uninitialized`. The CCM itself will untaint those nodes when it initializes them. Any pod that does not tolerate that taint will be unscheduled until the CCM is running.

You must set the kubelet flag the first time you run the kubelet. Stopping the kubelet, adding the flag, and then restarting it will not work.
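For k3s specifically, the same flags can be expressed in the k3s configuration file rather than on the command line. This is a sketch, assuming the standard `/etc/rancher/k3s/config.yaml` location on your server nodes:

```yaml
# Sketch of /etc/rancher/k3s/config.yaml for k3s servers: disable the built-in
# CCM and tell the kubelet to expect an external cloud provider.
disable-cloud-controller: true
kubelet-arg:
  - "cloud-provider=external"
```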
By default, the kubelet will name nodes based on the node's hostname. PhoenixNAP device hostnames are set based on the name of the device. It is important that the Kubernetes node name matches the device name.
To run `k8s-cloud-provider-bmc`, you need the PhoenixNAP client ID and client secret for the account in which your cluster is running. You can generate them from the PhoenixNAP portal. Ensure they have at least the scopes `bmc`, `bmc.read`, `tags` and `tags.read`.
Once you have this information you will be able to fill in the config needed for the CCM.
Copy `deploy/template/secret.yaml` to someplace useful:

```sh
cp deploy/template/secret.yaml /tmp/secret.yaml
```
Replace the placeholders in the copy with your client ID and client secret. When you're done, the yaml should look something like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pnap-cloud-config
  namespace: kube-system
stringData:
  cloud-sa.json: |
    {
      "clientID": "abc123abc123abc123",
      "clientSecret": "def456def456def456"
    }
```
Then apply the secret, e.g.:

```sh
kubectl apply -f /tmp/secret.yaml
```
You can confirm that the secret was created with the following:

```sh
$ kubectl -n kube-system get secrets pnap-cloud-config
NAME                TYPE     DATA   AGE
pnap-cloud-config   Opaque   1      2m
```
To apply the CCM itself, select your release and apply the manifest:

```sh
RELEASE=v2.0.0
kubectl apply -f https://github.com/phoenixnap/k8s-cloud-provider-bmc/releases/download/${RELEASE}/deployment.yaml
```
The CCM uses multiple configuration options. See the configuration section for all of the options.
If you want load balancing to work as well, deploy a supported load-balancer. The CCM provides the correct logic, if necessary, to manage load balancer configuration for supported load-balancers. See the load balancing section later in this document for details.
By default, the CCM does minimal logging, relying on the supporting infrastructure from Kubernetes. However, it does support optional additional logging levels via the `--v=<level>` flag. In general:

- `--v=2`: log most function calls for devices and facilities, when relevant logging the returned values
- `--v=3`: log additional data when logging returned values, usually entire go structs
- `--v=5`: log every function call, including those called very frequently
The PhoenixNAP CCM has multiple configuration options. These include several different ways to set most of them, for your convenience. In order of precedence:

- Command-line flags, e.g. `--option value` or `--option=value`; if not set, then
- Environment variables, e.g. `CCM_OPTION=value`; if not set, then
- Field in the configuration secret; if not set, then
- Default, if available; if not available, then an error
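The lookup order above can be sketched in shell. This is an illustration of the precedence chain only, not the CCM's actual code:

```shell
# Illustration of the option lookup order: CLI flag, then environment
# variable, then secret field, then default; error if none is set.
resolve_option() {
  flag="$1"; env_var="$2"; secret_field="$3"; default="$4"
  for candidate in "$flag" "$env_var" "$secret_field" "$default"; do
    if [ -n "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  echo "error: option not set" >&2
  return 1
}

# The flag wins even when the environment variable is also set:
resolve_option "--from-flag" "from-env" "" "from-default"
```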
This section lists each configuration option, and whether it can be set by each method.
| Purpose | CLI Flag | Env Var | Secret Field | Default |
|---|---|---|---|---|
| Path to config secret | `cloud-config` | | | error |
| Client ID | | `PNAP_CLIENT_ID` | `clientID` | error |
| Client Secret | | `PNAP_CLIENT_SECRET` | `clientSecret` | error |
| Location in which to create LoadBalancer IP Blocks | | `PNAP_LOCATION` | `location` | Service-specific annotation, else error |
| Base URL to PhoenixNAP API | `base-url` | | | Official PhoenixNAP API |
| Load balancer setting | | `PNAP_LOAD_BALANCER` | `loadbalancer` | none |
| Kubernetes Service annotation to set IP block location | | `PNAP_ANNOTATION_IP_LOCATION` | `annotationIPLocation` | `phoenixnap.com/ip-location` |
| Kubernetes API server port for IP | | `PNAP_API_SERVER_PORT` | `apiServerPort` | Same as `kube-apiserver` on control plane nodes, same as `0` |
Location Note: In all cases where a "location" is required, use the 3-letter short code of the location, for example `"SEA"` or `"ASH"`.
The Kubernetes CCM for PhoenixNAP deploys as a `Deployment` into your cluster with a replica count of `1`. It provides the following services:
- lists and retrieves instances by ID, returning PhoenixNAP instances
- manages load balancers
PhoenixNAP does not offer managed load balancers like AWS ELB or GCP Load Balancers. Instead, if configured to do so, the PhoenixNAP CCM will configure load balancing using PhoenixNAP IP blocks and tags.
For a Service of `type=LoadBalancer`, the CCM will create an IP block using the PhoenixNAP API and assign it to the network, so load balancers can consume its addresses. PhoenixNAP's API does not support adding tags to individual IP addresses, but it has full support for tags on blocks. Each block created is a `/29`. The first IP is for the network, the second is for the gateway, and the third is for the Service.
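As a worked example of that layout, take a hypothetical block `203.0.113.8/29` (an address from the documentation range, not one PhoenixNAP will actually assign):

```shell
# Address layout of a hypothetical /29 block, 203.0.113.8/29:
block_base="203.0.113.8"              # first address of the block
IFS=. read -r o1 o2 o3 o4 <<EOF
$block_base
EOF
network="$o1.$o2.$o3.$o4"             # first IP: network address
gateway="$o1.$o2.$o3.$((o4 + 1))"     # second IP: gateway
service_ip="$o1.$o2.$o3.$((o4 + 2))"  # third IP: assigned to the Service
echo "network=$network gateway=$gateway service=$service_ip"
```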
The PhoenixNAP CCM uses tags to mark IP blocks as assigned to specific services. Each block is given 3 tags:

- `usage=cloud-provider-phoenixnap-auto` - identifies that the IP block was reserved automatically by the PhoenixNAP CCM
- `cluster=<clusterID>` - identifies the cluster to which the IP block belongs
- `service=<serviceID>` - identifies the service to which this IP block is assigned

Note that the `<serviceID>` includes both the namespace and the name, e.g. `namespace5/nginx`. While all valid characters for a namespace and a service name are valid for a tag value, the `/` character is not. Therefore, the CCM replaces `/` with `.` in the service ID.
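The substitution is simple enough to reproduce with standard tools:

```shell
# The CCM's tag-value transformation for a service ID: "/" becomes "."
service_id="namespace5/nginx"
tag_value="$(printf '%s' "$service_id" | tr '/' '.')"
echo "$tag_value"   # namespace5.nginx
```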
When the CCM encounters a `Service` of `type=LoadBalancer`, it will use the PhoenixNAP API to:

- Look for a block of public IP addresses with the cluster and constant PhoenixNAP tags, as well as the tag `service=<serviceID>`. Else:
- Request a new, location-specific `/29` IP block and tag it appropriately.
- Use the first available IP in the block, i.e. the third, for the Service.
- Set the IP to `Service.Spec.LoadBalancerIP`.
- Pass control to the specific load-balancer implementation.
The CCM needs to determine where to request the IP block or find a block with available IPs. It does not attempt to figure out where the nodes are: that can change over time, the nodes might not exist yet when the CCM is running or the `Service` is created, and you could run a Kubernetes cluster across multiple locations, or even across cloud providers.
The CCM uses the following rules to determine where to create the IP:

- if the location is set globally using the environment variable `PNAP_LOCATION`, use it; else
- if the `Service` for which the IP is being created has the annotation indicating the location, use it; else
- return an error; the CCM cannot use an IP from a block or create a block.
The overrides of environment variable and config file are provided so that you can control explicitly where the IPs are created at a system-wide level, ignoring the annotations.
Using these flags and annotations, you can run the CCM on a node in a different location, or even outside of PhoenixNAP entirely.
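For example, a `Service` that pins its IP block to a specific location might look like the sketch below. The annotation key is the default from the configuration table, `ASH` is an example short code, and the name and selector are placeholders:

```yaml
# Example Service using the default location annotation to request that its
# IP block be created in the ASH location. Names and selector are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  annotations:
    phoenixnap.com/ip-location: "ASH"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```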
Load balancing is enabled as follows:

- If the environment variable `PNAP_LOAD_BALANCER` is set, read that. Else...
- If the config file has a key named `loadbalancer`, read that. Else...
- Load balancing is disabled.
The value of the load balancing configuration is `<type>://<detail>` where:

- `<type>` is the named supported type, one of those listed below
- `<detail>` is any additional detail needed to configure the implementation, described below
For load balancing of a Kubernetes `Service` of `type=LoadBalancer`, the following implementations are supported:

- kube-vip

The CCM itself does not deploy the load-balancer or any part of it, including maintenance ConfigMaps. It only works with existing resources to configure them.
When the `kube-vip` option is enabled, the PhoenixNAP CCM assigns a block, and the third IP from that block, to each user-deployed Kubernetes `Service` of `type=LoadBalancer`. If necessary, it first creates the block.

To enable it, set the configuration `PNAP_LOAD_BALANCER` or the config key `loadbalancer` to:

```
kube-vip://<public-network-ID>
```

Directions on configuring kube-vip in ARP mode are available at the kube-vip site.
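As a sketch of what that configuration involves, kube-vip's ARP mode for Services is driven by environment variables on its DaemonSet container. The interface name below is an assumption (it should be your VLAN-specific interface); the kube-vip documentation is authoritative for the full manifest:

```yaml
# Illustrative kube-vip container environment for ARP mode with Service
# load balancing enabled. "bond0.100" is an assumed interface name.
env:
  - name: vip_arp
    value: "true"
  - name: svc_enable
    value: "true"
  - name: vip_interface
    value: "bond0.100"
```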
If `kube-vip` management is enabled, then the CCM does the following:

- For each node currently in the cluster or added:
  - retrieve the node's PhoenixNAP ID via the node provider ID
  - add the information to appropriate annotations on the node
- For each service of `type=LoadBalancer` currently in the cluster or added, ensure that:
  - an IP block with the appropriate tags exists, or create it
  - the IP block is associated with the public network
  - the `Service` has that IP address affiliated with it in the service spec, or affiliate it
- For each service of `type=LoadBalancer` deleted from the cluster, ensure that:
  - the IP address is removed from the service spec
  - the IP block is disassociated from the public network
  - the IP block is deleted
On startup, the CCM sets up the following control loop structures:

- Implement the cloud-provider interface, providing primarily the following API calls:
  - `Initialize()`
  - `InstancesV2()`
  - `LoadBalancer()`

If a load balancer is enabled, the CCM creates a PhoenixNAP IP block and reserves an IP in the block for each `Service` of `type=LoadBalancer`. It tags the reservation with the following tags:

- `usage="k8s-cloud-provider-bmc-auto"`
- `service="<serviceID>"` where `<serviceID>` is `<namespace>.<service-name>`
- `cluster=<clusterID>` where `<clusterID>` is the UID of the immutable `kube-system` namespace. We do this so that if someone runs two clusters in the same account, and there is one `Service` in each cluster with the same namespace and name, the two IPs will not conflict.
You can run the CCM locally on your laptop or VM, i.e. not in the cluster. This dramatically speeds up development. To do so:

- Deploy everything except for the `Deployment` and, optionally, the `Secret`.
- Build it for your local platform: `make build`
- Set the environment variable `CCM_SECRET` to a file with the secret contents as JSON, i.e. the content of the secret's `stringData`, e.g. `CCM_SECRET=ccm-secret.yaml`
- Set the environment variable `KUBECONFIG` to a kubeconfig file with sufficient access to the cluster, e.g. `KUBECONFIG=mykubeconfig`
- Set the environment variable `PNAP_LOCATION` to the correct location where the cluster is running, e.g. `PNAP_LOCATION="SEA"`
- If you want to run a load balancer, and it is not yet deployed, deploy it appropriately.
- Enable the load balancer by setting the environment variable `PNAP_LOAD_BALANCER=kube-vip://<network-id>`
- Run the command.

There are multiple ways to run the command. In all cases, for lots of extra debugging, add `--v=2` or even higher levels, e.g. `--v=5`.
In a container with docker:

```sh
docker run --rm -e PNAP_LOCATION=${PNAP_LOCATION} -e PNAP_LOAD_BALANCER=${PNAP_LOAD_BALANCER} phoenixnap/k8s-cloud-provider-bmc:latest --cloud-provider=phoenixnap --leader-elect=false --authentication-skip-lookup=true --cloud-config=$CCM_SECRET --kubeconfig=$KUBECONFIG
```

Directly with `go run`:

```sh
PNAP_LOCATION=${PNAP_LOCATION} PNAP_LOAD_BALANCER=${PNAP_LOAD_BALANCER} go run . --cloud-provider=phoenixnap --leader-elect=false --authentication-skip-lookup=true --cloud-config=$CCM_SECRET --kubeconfig=$KUBECONFIG
```

As a locally built binary:

```sh
PNAP_LOCATION=${PNAP_LOCATION} PNAP_LOAD_BALANCER=${PNAP_LOAD_BALANCER} dist/bin/k8s-cloud-provider-bmc-darwin-amd64 --cloud-provider=phoenixnap --leader-elect=false --authentication-skip-lookup=true --cloud-config=$CCM_SECRET --kubeconfig=$KUBECONFIG
```