operator: Add Keepalived Controller (v2) #3864

Merged · 34 commits · Sep 15, 2022

Commits
e0d3f39  operator: Move reconcile result output in an utils module (TeddyAndrieux, Aug 19, 2022)
d0829a9  operator: Add Go variable with the MetalK8s Version (TeddyAndrieux, Aug 19, 2022)
1014487  operator: Move ClusterConfig controller logic in `pkg` (TeddyAndrieux, Sep 6, 2022)
647539f  operator: Add ObjectHandler struct to manage Object manipulation (TeddyAndrieux, Aug 19, 2022)
a3e5573  operator: Create the VirtualIPPool API and resource (TeddyAndrieux, Sep 6, 2022)
6b8c601  operator: Move VirtualIPPool controller in `pkg` (TeddyAndrieux, Sep 6, 2022)
2b17b8e  operator: Add VirtualIPPool specification (TeddyAndrieux, Sep 6, 2022)
fdcdcb6  operator: Add `ConfigMap` as owns by VirtualIPPool controller (TeddyAndrieux, Sep 6, 2022)
3e63311  operator: Add RBAC on VirtualIPPool controller to Get `Node` objects (TeddyAndrieux, Sep 6, 2022)
98bc24b  operator: Add Struct to manage the High Level Config content (TeddyAndrieux, Aug 31, 2022)
a9717ca  operator: Handle MetalK8s VIP ConfigMaps creation and update (TeddyAndrieux, Sep 7, 2022)
4ee1d71  operator: Add simple function to GetImageName (TeddyAndrieux, Sep 1, 2022)
c9cd5e0  operator: Add `DaemonSet` as owns by VirtualIPPool controller (TeddyAndrieux, Sep 7, 2022)
78f6439  operator: Add Standard mutate for DaemonSet in ObjectHandler (TeddyAndrieux, Sep 1, 2022)
bec5bcf  operator: Handle MetalK8s VIP DaemonSet creation and update (TeddyAndrieux, Sep 7, 2022)
dc5a04d  operator: Add VirtualIPPool status conditions (TeddyAndrieux, Sep 7, 2022)
80cae9d  operator: Add "Configured" condition status for VirtualIPPool (TeddyAndrieux, Sep 7, 2022)
3a1af67  operator: Add RBAC on VirtualIPPool controller to send `Events` (TeddyAndrieux, Sep 7, 2022)
440bd10  operator: Add some events in VirtualIPPool reconciler (TeddyAndrieux, Sep 7, 2022)
5231af9  operator: Add "Ready" and "Available" condition for VirtualIPPool (TeddyAndrieux, Sep 7, 2022)
13d3770  operator: Add VirtualIPPool spec in ClusterConfig CRD (TeddyAndrieux, Sep 7, 2022)
c324e86  operator: Add some logic to manage sub reconciler (TeddyAndrieux, Sep 7, 2022)
028b739  operator: Add ClusterConfig status conditions and `Events` RBAC (TeddyAndrieux, Sep 7, 2022)
e5e39cb  operator: Add `Namespace` as owns by ClusterConfig controller (TeddyAndrieux, Sep 9, 2022)
2196587  operator: Handle `metalk8s-vips` namespace from ClusterConfig (TeddyAndrieux, Sep 7, 2022)
0d66dfe  operator: Add `VirtualIPPool` as owns by ClusterConfig controller (TeddyAndrieux, Sep 7, 2022)
559745d  operator: Handle VirtualIPPool creation, update and deletion (TeddyAndrieux, Sep 7, 2022)
3958d1d  operator: Check for VirtualIPPool readiness in ClusterConfig controller (TeddyAndrieux, Sep 7, 2022)
7a441a0  salt: Add Virtual IPs from ClusterConfig CR as portmap IPs (TeddyAndrieux, Sep 1, 2022)
3176ddf  operator: Add `vipp` short name for VirtualIPPool (TeddyAndrieux, Sep 8, 2022)
cd8d8e0  operator: Add `cc` short name for ClusterConfig (TeddyAndrieux, Sep 8, 2022)
439a9b2  docs: Add documentation about Workload Plane VIPs setup (TeddyAndrieux, Sep 13, 2022)
b95f09e  tests: Add end-to-end tests for Workload Plane VIPs (TeddyAndrieux, Sep 12, 2022)
2ea50ae  changelog: Update changelog to mention Workload Plane Ingress VIPs (TeddyAndrieux, Sep 13, 2022)
CHANGELOG.md (5 changes: 2 additions & 3 deletions)

@@ -3,9 +3,8 @@

### Additions

-- Add `metalk8s-operator` to manage NOTHING (TODO: To be updated
-  once the operator will do something)
-  (PR[#3822](https://github.com/scality/metalk8s/pull/3822))
+- Add `metalk8s-operator` to manage Workload Plane Ingress virtual IPs
+  (PR[#3864](https://github.com/scality/metalk8s/pull/3864))

### Removals

docs/operation/index.rst (1 change: 1 addition & 0 deletions)

@@ -18,6 +18,7 @@ do not have a working MetalK8s_ setup.
   downgrade
   disaster_recovery/index
   solutions
+  workload_plane_ingress_vips
   changing_node_hostname
   changing_control_plane_ingress_ip
   metalk8s-utils
docs/operation/workload_plane_ingress_vips.rst (115 additions, new file)

@@ -0,0 +1,115 @@
The Workload Plane Ingress Virtual IPs
======================================


By default, the Workload Plane Ingress is exposed on every node's Workload
Plane IP. To make it highly available, you therefore need to set up a load
balancer in front of it, one that checks the health of every Workload Plane
node and redirects traffic to a Workload Plane node IP that is ready.

The MetalK8s Operator can instead manage a set of Virtual IPs used to expose
the Workload Plane Ingress, so that you get a highly available Workload
Plane Ingress without a load balancer in front.

.. note::

   A load balancer setup remains better in terms of performance, because
   traffic keeps being spread appropriately across the nodes even when a
   node goes down.

You can add, remove, or change the Virtual IPs at any time by editing the
``main`` ``ClusterConfig`` object and running two Salt commands.

.. warning::

   Any extra ``ClusterConfig`` object is automatically deleted: you must
   edit the ``main`` one.

The ``ClusterConfig`` layout
----------------------------

.. code-block:: yaml

   apiVersion: metalk8s.scality.com/v1alpha1
   kind: ClusterConfig
   metadata:
     name: main
   spec:
     workloadPlane:
       virtualIPPools:
         # An arbitrary name for the pool
         # that is used as the Kubernetes object name
         default:
           # Classic nodeSelector to select on which nodes the
           # Virtual IPs should be deployed
           nodeSelector: {}
           # Tolerations that are needed to run the Pod on the nodes
           tolerations: {}
           # A list of Virtual IPs that will be managed by the product
           # There is no constraint on the number of Virtual IPs
           addresses:
             - 192.168.1.200
             - 192.168.1.201
             - 192.168.1.202

.. note::

   The Virtual IPs are automatically spread across all the nodes.
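
For example, once the layout above is applied, the resulting pool objects can
be listed with the ``vipp`` short name (the short name and the
``metalk8s-vips`` namespace both come from this PR's commits; adjust if your
setup differs):

.. code-block:: console

   kubectl --kubeconfig=/etc/kubernetes/admin.conf \
       get vipp -n metalk8s-vips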

Updating a pool
---------------

#. Update the ``main`` ``ClusterConfig`` object as you wish.

   .. code-block:: console

      kubectl --kubeconfig=/etc/kubernetes/admin.conf \
          edit clusterconfig main

#. Wait for the ``ClusterConfig`` to be ready.

   .. code-block:: console

      kubectl --kubeconfig=/etc/kubernetes/admin.conf \
          wait --for=condition=Ready \
          clusterconfig main

#. Reconfigure the CNI to expose the Ingress on the new Virtual IPs.

   .. parsed-literal::

      kubectl exec -n kube-system -c salt-master \\
          --kubeconfig /etc/kubernetes/admin.conf \\
          $(kubectl --kubeconfig /etc/kubernetes/admin.conf \\
          get pods -n kube-system -l app=salt-master -o name) \\
          -- salt-run state.sls metalk8s.kubernetes.cni.calico.deployed \\
          saltenv=metalk8s-|version|

#. Regenerate the Workload Plane Ingress server certificate.

   .. parsed-literal::

      kubectl exec -n kube-system -c salt-master \\
          --kubeconfig /etc/kubernetes/admin.conf \\
          $(kubectl --kubeconfig /etc/kubernetes/admin.conf \\
          get pods -n kube-system -l app=salt-master -o name) \\
          -- salt '*' state.sls metalk8s.addons.nginx-ingress.certs \\
          saltenv=metalk8s-|version|

#. Restart the Workload Plane Ingress controller.

   .. code-block:: console

      kubectl --kubeconfig=/etc/kubernetes/admin.conf \
          rollout restart -n metalk8s-ingress \
          daemonset ingress-nginx-controller

#. Wait for the Workload Plane Ingress controller restart to complete.

   .. code-block:: console

      kubectl --kubeconfig=/etc/kubernetes/admin.conf \
          rollout status -n metalk8s-ingress \
          daemonset ingress-nginx-controller \
          --timeout 5m
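
Optionally, check that every pool reports itself ready once the procedure is
done. This is a sanity-check sketch, assuming the ``Available`` condition
introduced for ``VirtualIPPool`` in this PR is what ``kubectl wait`` should
target:

.. code-block:: console

   kubectl --kubeconfig=/etc/kubernetes/admin.conf \
       wait --for=condition=Available \
       virtualippools --all -n metalk8s-vips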
eve/main.yml (2 changes: 2 additions & 0 deletions)

@@ -461,6 +461,7 @@ models:
      PYTEST_FILTERS: "post and ci"
      BOOTSTRAP_BACKUP_ARCHIVE: ""
      CONTROL_PLANE_INGRESS_VIP: "192.168.1.253"
+     WORKLOAD_PLANE_INGRESS_VIPS: "192.168.2.200,192.168.2.201,192.168.2.202,192.168.2.203,192.168.2.204"
    command: >
      ssh -F ssh_config bastion --
      "cd metalk8s &&

@@ -469,6 +470,7 @@

      export TEST_HOSTS_LIST=\"${TEST_HOSTS_LIST}\" &&
      export BOOTSTRAP_BACKUP_ARCHIVE=\"${BOOTSTRAP_BACKUP_ARCHIVE}\" &&
      export CONTROL_PLANE_INGRESS_VIP=\"${CONTROL_PLANE_INGRESS_VIP}\" &&
+     export WORKLOAD_PLANE_INGRESS_VIPS=\"${WORKLOAD_PLANE_INGRESS_VIPS}\" &&
      tox -e tests -- ${PYTEST_ARGS:-""} -m \"${PYTEST_FILTERS}\""
    workdir: *terraform_workdir
    haltOnFailure: true
operator/Dockerfile (7 changes: 6 additions & 1 deletion)

@@ -13,9 +13,14 @@ RUN go mod download
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/
+COPY pkg/ pkg/
+COPY version/ version/

+# Version of the project, e.g. `git describe --always --long --dirty --broken`
+ARG METALK8S_VERSION

# Build
-RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
+RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager -ldflags "-X 'github.com/scality/metalk8s/operator/version.Version=${METALK8S_VERSION}'" main.go

# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
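
The `-X` linker flag above overwrites a package-level string at link time.
The `version` package itself is outside this excerpt; a minimal sketch of
what it plausibly contains (only the import path and variable name are
confirmed by the `-ldflags` value, the default value is an assumption):

	// version/version.go (hypothetical sketch)
	package version

	// Version holds the MetalK8s version string; it is overridden at
	// build time via:
	//   -ldflags "-X 'github.com/scality/metalk8s/operator/version.Version=<version>'"
	var Version = "unknown"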
operator/PROJECT (8 changes: 8 additions & 0 deletions)

@@ -14,4 +14,12 @@
  kind: ClusterConfig
  path: github.com/scality/metalk8s/operator/api/v1alpha1
  version: v1alpha1
+- api:
+    crdVersion: v1
+    namespaced: true
+  controller: true
+  domain: metalk8s.scality.com
+  kind: VirtualIPPool
+  path: github.com/scality/metalk8s/operator/api/v1alpha1
+  version: v1alpha1
version: "3"
operator/api/v1alpha1/clusterconfig_types.go (42 changes: 41 additions & 1 deletion)

@@ -20,10 +20,18 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
vIPConfiguredConditionName = "VirtualIPPool" + configuredConditionName
vIPReadyConditionName = "VirtualIPPool" + readyConditionName
)

// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.

type WorkloadPlaneSpec struct {
// Information about Virtual IP Pools
// +optional
VirtualIPPools map[string]VirtualIPPoolSpec `json:"virtualIPPools,omitempty"`
}

// ClusterConfigSpec defines the desired state of ClusterConfig
@@ -40,11 +48,18 @@ type ClusterConfigSpec struct {
type ClusterConfigStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
+
+	// List of conditions for the ClusterConfig
+	// +patchMergeKey=type
+	// +patchStrategy=merge
+	// +listType=map
+	// +listMapKey=type
+	Conditions []Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
-//+kubebuilder:resource:scope=Cluster
+//+kubebuilder:resource:scope=Cluster,shortName=cc

// ClusterConfig is the Schema for the clusterconfigs API
type ClusterConfig struct {
@@ -55,6 +70,31 @@ type ClusterConfig struct {
	Status ClusterConfigStatus `json:"status,omitempty"`
}

+// Set a condition on ClusterConfig
+func (v *ClusterConfig) SetCondition(kind string, status metav1.ConditionStatus, reason string, message string) {
+	setCondition(v.Generation, &v.Status.Conditions, kind, status, reason, message)
+}
+
+// Get a condition from ClusterConfig
+func (v *ClusterConfig) GetCondition(kind string) *Condition {
+	return getCondition(v.Status.Conditions, kind)
+}
+
+// Set Ready Condition
+func (v *ClusterConfig) SetReadyCondition(status metav1.ConditionStatus, reason string, message string) {
+	v.SetCondition(readyConditionName, status, reason, message)
+}
+
+// Set VirtualIPPool Configured Condition
+func (v *ClusterConfig) SetVIPConfiguredCondition(status metav1.ConditionStatus, reason string, message string) {
+	v.SetCondition(vIPConfiguredConditionName, status, reason, message)
+}
+
+// Set VirtualIPPool Ready Condition
+func (v *ClusterConfig) SetVIPReadyCondition(status metav1.ConditionStatus, reason string, message string) {
+	v.SetCondition(vIPReadyConditionName, status, reason, message)
+}

//+kubebuilder:object:root=true

// ClusterConfigList contains a list of ClusterConfig
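
The setters above delegate to shared `setCondition`/`getCondition` helpers
defined in a sibling file that is not part of this excerpt. A plausible
sketch of those helpers, assuming the custom `Condition` type mirrors the
fields of `metav1.Condition` (an illustration, not the PR's actual code):

	// Hypothetical sketch: the real helpers live elsewhere in the
	// operator/api/v1alpha1 package.
	func setCondition(generation int64, conditions *[]Condition, kind string, status metav1.ConditionStatus, reason string, message string) {
		now := metav1.Now()
		for i := range *conditions {
			if (*conditions)[i].Type != kind {
				continue
			}
			// Refresh the existing condition, bumping the transition
			// time only when the status actually changes.
			if (*conditions)[i].Status != status {
				(*conditions)[i].LastTransitionTime = now
			}
			(*conditions)[i].Status = status
			(*conditions)[i].Reason = reason
			(*conditions)[i].Message = message
			(*conditions)[i].ObservedGeneration = generation
			return
		}
		*conditions = append(*conditions, Condition{
			Type:               kind,
			Status:             status,
			Reason:             reason,
			Message:            message,
			ObservedGeneration: generation,
			LastTransitionTime: now,
		})
	}

	func getCondition(conditions []Condition, kind string) *Condition {
		for i := range conditions {
			if conditions[i].Type == kind {
				return &conditions[i]
			}
		}
		return nil
	}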
operator/api/v1alpha1/clusterconfig_types_test.go (85 additions, new file)

@@ -0,0 +1,85 @@
package v1alpha1

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var _ = Describe("ClusterConfig", func() {
	Describe("GetSetCondition", func() {
		It("can add and get a condition", func() {
			now := metav1.Now()
			c := ClusterConfig{
				ObjectMeta: metav1.ObjectMeta{Generation: 12},
			}

			c.SetCondition("MyCondition", metav1.ConditionTrue, "Foo", "Bar")

			cond := c.GetCondition("MyCondition")
			Expect(cond.Type).To(Equal("MyCondition"))
			Expect(cond.Status).To(Equal(metav1.ConditionTrue))
			Expect(cond.ObservedGeneration).To(BeEquivalentTo(12))
			Expect(cond.LastTransitionTime.Time).To(BeTemporally(">", now.Time))
			Expect(cond.Reason).To(Equal("Foo"))
			Expect(cond.Message).To(Equal("Bar"))
		})
	})

	Describe("ReadyCondition", func() {
		It("can set Ready condition", func() {
			now := metav1.Now()
			c := ClusterConfig{
				ObjectMeta: metav1.ObjectMeta{Generation: 12},
			}

			c.SetReadyCondition(metav1.ConditionTrue, "Foo", "Bar")

			cond := c.GetCondition(readyConditionName)
			Expect(cond.Type).To(Equal(readyConditionName))
			Expect(cond.Status).To(Equal(metav1.ConditionTrue))
			Expect(cond.ObservedGeneration).To(BeEquivalentTo(12))
			Expect(cond.LastTransitionTime.Time).To(BeTemporally(">", now.Time))
			Expect(cond.Reason).To(Equal("Foo"))
			Expect(cond.Message).To(Equal("Bar"))
		})
	})

	Describe("VIPConfiguredCondition", func() {
		It("can set VIP Configured condition", func() {
			now := metav1.Now()
			c := ClusterConfig{
				ObjectMeta: metav1.ObjectMeta{Generation: 12},
			}

			c.SetVIPConfiguredCondition(metav1.ConditionTrue, "Foo", "Bar")

			cond := c.GetCondition(vIPConfiguredConditionName)
			Expect(cond.Type).To(Equal(vIPConfiguredConditionName))
			Expect(cond.Status).To(Equal(metav1.ConditionTrue))
			Expect(cond.ObservedGeneration).To(BeEquivalentTo(12))
			Expect(cond.LastTransitionTime.Time).To(BeTemporally(">", now.Time))
			Expect(cond.Reason).To(Equal("Foo"))
			Expect(cond.Message).To(Equal("Bar"))
		})
	})

	Describe("VIPReadyCondition", func() {
		It("can set VIP Ready condition", func() {
			now := metav1.Now()
			c := ClusterConfig{
				ObjectMeta: metav1.ObjectMeta{Generation: 12},
			}

			c.SetVIPReadyCondition(metav1.ConditionTrue, "Foo", "Bar")

			cond := c.GetCondition(vIPReadyConditionName)
			Expect(cond.Type).To(Equal(vIPReadyConditionName))
			Expect(cond.Status).To(Equal(metav1.ConditionTrue))
			Expect(cond.ObservedGeneration).To(BeEquivalentTo(12))
			Expect(cond.LastTransitionTime.Time).To(BeTemporally(">", now.Time))
			Expect(cond.Reason).To(Equal("Foo"))
			Expect(cond.Message).To(Equal("Bar"))
		})
	})
})
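
These Ginkgo specs run with the rest of the operator's unit tests. With the
standard kubebuilder scaffolding (an assumption; the Makefile is not shown in
this diff) they can be run locally with:

	cd operator
	make test        # or, more narrowly: go test ./api/...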