
kubernetes: Sync Kube control plane manifests with kubeadm #3152

Merged: 8 commits, Mar 1, 2021

Conversation

TeddyAndrieux (Collaborator)

Component:

'salt', 'kubernetes'

Context:

Sync the control plane manifests with current kubeadm defaults.

Summary:

  • Add a simple tool to retrieve various control plane manifests from kubeadm
  • Sync apiserver manifest with kubeadm (liveness and readiness probes updates)
  • Sync controller manager manifest with kubeadm (https instead of http)
  • Sync scheduler manifest with kubeadm (https instead of http)
  • Sync etcd manifest with kubeadm (liveness probe update)

@TeddyAndrieux TeddyAndrieux added topic:deployment Bugs in or enhancements to deployment stages topic:etcd Anything related to etcd complexity:medium Something that requires one or few days to fix topic:salt Everything related to SaltStack in our product labels Feb 25, 2021
@TeddyAndrieux TeddyAndrieux requested a review from a team February 25, 2021 18:29
@bert-e (Contributor) commented Feb 25, 2021

Hello teddyandrieux,

My role is to assist you with the merge of this
pull request. Please type @bert-e help to get information
on this process, or consult the user documentation.

Status report is not available.

@bert-e (Contributor) commented Feb 25, 2021

Integration data created

I have created the integration data for the additional destination branches.

The following branches will NOT be impacted:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6
  • development/2.7

You can set option create_pull_requests if you need me to create
integration pull requests in addition to integration branches, with:

@bert-e create_pull_requests

@bert-e (Contributor) commented Feb 25, 2021

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • one peer

Peer approvals must include at least 1 approval from the following list:

@alexandre-allard (Contributor) left a comment:

Mostly typos, otherwise LGTM.

CHANGELOG.md (outdated, resolved)
@@ -0,0 +1,28 @@
FROM centos:7
Contributor:

Not directly related to this PR: it could be cool to align all the CentOS 7 images we're using (for now we have 2 different images, centos:7 and centos:7.6.1810)

Collaborator (Author):

Agreed, I don't know which one we want to keep.

Contributor:

I don't really know either, I think we should just choose one and stick to it.

tools/get-kubeadm-manifests/Dockerfile (outdated, resolved)
tools/get-kubeadm-manifests/README.md (outdated, resolved)
tools/get-kubeadm-manifests/README.md (outdated, resolved)
Add a simple Dockerfile and a README to have a container running kubeadm
and to get the default control plane manifests deployed by kubeadm, so
that it's easier to sync our Salt states with what kubeadm deploys in
every version.

NOTE: It's still a manual check for the moment
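The tool itself isn't reproduced in this conversation; a minimal sketch of what such a Dockerfile could look like (the kubeadm version, download URL, and use of the `--dry-run` phases are assumptions for illustration, not taken from the actual `tools/get-kubeadm-manifests/Dockerfile`):

```dockerfile
FROM centos:7
ARG KUBEADM_VERSION=v1.20.4
# Fetch a standalone kubeadm binary for the target Kubernetes version
RUN curl -Lo /usr/local/bin/kubeadm \
      "https://dl.k8s.io/release/${KUBEADM_VERSION}/bin/linux/amd64/kubeadm" \
 && chmod +x /usr/local/bin/kubeadm
# Render the default static Pod manifests without touching a real cluster:
# the control-plane phase covers kube-apiserver, kube-controller-manager
# and kube-scheduler; `etcd local` covers the etcd manifest.
CMD kubeadm init phase control-plane all --dry-run \
 && kubeadm init phase etcd local --dry-run
```

The generated manifests can then be diffed by hand against the Salt states, which matches the "still a manual check" note above.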
In the apiserver installed Salt state, split all arguments that come
from kubeadm from the ones added in the MetalK8s context, so that in
case of a Kubernetes version upgrade it's easy to update the various
apiserver arguments if needed.

Note:
- Update the liveness probe to use `/livez` instead of `/healthz`
- Add a readiness probe on `/readyz`
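For illustration, the probe stanzas kubeadm generates for kube-apiserver look roughly like this (a sketch of the kubeadm defaults; the host address and threshold values are assumptions, not copied from this PR's diff):

```yaml
livenessProbe:
  httpGet:
    host: 192.168.1.10     # the node's advertise address
    path: /livez
    port: 6443
    scheme: HTTPS
  failureThreshold: 8
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 15
readinessProbe:
  httpGet:
    host: 192.168.1.10
    path: /readyz
    port: 6443
    scheme: HTTPS
  failureThreshold: 3
  periodSeconds: 1
  timeoutSeconds: 15
```

`/livez` only checks that the process is alive, while `/readyz` also covers whether the apiserver is ready to serve traffic, which is why the two endpoints map to the two probe types.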
In the controller manager installed Salt state, split all arguments that
come from kubeadm from the ones added in the MetalK8s context, so that
in case of a Kubernetes version upgrade it's easy to update the various
controller manager arguments if needed.

Note:
- Arguments added:
```
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --cluster-name=kubernetes
- --port=0
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
```

- Use `--bind-address` instead of `--address` and update the ports and
  liveness probe accordingly
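With `--port=0` the legacy insecure HTTP port is disabled, so the liveness probe has to target the secure port instead. A sketch of the resulting probe, matching the kubeadm defaults (values assumed for illustration, not copied from this diff):

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 10257        # kube-controller-manager secure port
    scheme: HTTPS      # replaces the old plain-HTTP check on 10252
```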
In the scheduler installed Salt state, split all arguments that come
from kubeadm from the ones added in the MetalK8s context, so that in
case of a Kubernetes version upgrade it's easy to update the various
scheduler arguments if needed.

Note:
- Arguments added:
```
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --port=0
```

- Use `--bind-address` instead of `--address` and update the ports and
  liveness probe accordingly
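The scheduler change mirrors the controller manager one, only on its own secure port. A sketch matching the kubeadm defaults (values assumed for illustration, not copied from this diff):

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 10259        # kube-scheduler secure port
    scheme: HTTPS      # replaces the old plain-HTTP check on 10251
```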
In the etcd installed Salt state, split all arguments that come from
kubeadm from the ones added in the MetalK8s context, so that in case of
a Kubernetes version upgrade it's easy to update the various etcd
arguments if needed.

Note:
- Nothing changed about arguments
- Updated the liveness probe to use httpGet instead of an etcdctl
  command
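Switching from an `exec` probe shelling out to etcdctl to an httpGet probe relies on etcd's metrics listener, which kubeadm exposes on localhost. A sketch of the kubeadm-style probe (values assumed for illustration, not copied from this diff):

```yaml
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /health
    port: 2381         # etcd --listen-metrics-urls endpoint, plain HTTP
```

An httpGet probe is cheaper than spawning etcdctl on every check and doesn't need client certificates inside the probe command.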
Kubernetes controller manager and scheduler manifests were updated to
serve metrics over HTTPS, so update the kube-prometheus-stack manifest
to reflect this change.

kube-prometheus chart render command:
```
./charts/render.py prometheus-operator \
  charts/kube-prometheus-stack.yaml \
  charts/kube-prometheus-stack/ \
  --namespace metalk8s-monitoring \
  --service-config grafana \
  metalk8s-grafana-config \
  metalk8s/addons/prometheus-operator/config/grafana.yaml \
  metalk8s-monitoring \
  --service-config prometheus \
  metalk8s-prometheus-config \
  metalk8s/addons/prometheus-operator/config/prometheus.yaml \
  metalk8s-monitoring \
  --service-config alertmanager \
  metalk8s-alertmanager-config \
  metalk8s/addons/prometheus-operator/config/alertmanager.yaml \
  metalk8s-monitoring \
  --service-config dex \
  metalk8s-dex-config \
  metalk8s/addons/dex/config/dex.yaml.j2 metalk8s-auth \
  --drop-prometheus-rules charts/drop-prometheus-rules.yaml \
  > salt/metalk8s/addons/prometheus-operator/deployed/chart.sls
```
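On the monitoring side, the rendered change amounts to the ServiceMonitor endpoints for these two components scraping over HTTPS instead of HTTP. A sketch of what such an endpoint typically looks like in kube-prometheus-stack output (field values assumed from the upstream chart's conventions, not copied from this diff):

```yaml
endpoints:
  - port: http-metrics
    scheme: https
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      insecureSkipVerify: true   # the component's serving cert is self-signed
```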
Kubernetes controller manager and scheduler are now exposed over HTTPS,
so record this in the changelog.
@TeddyAndrieux TeddyAndrieux force-pushed the improvement/sync-kube-manifest-with-kubeadm branch from d131961 to 3ce1023 Compare March 1, 2021 09:58
@bert-e (Contributor) commented Mar 1, 2021

History mismatch

Merge commit #b608990e94fd4a5bd55957e150c9d3c40e2732fe on the integration branch
w/2.9/improvement/sync-kube-manifest-with-kubeadm is merging a branch which is neither the current
branch improvement/sync-kube-manifest-with-kubeadm nor the development branch
development/2.9.

It is likely due to a rebase of the branch improvement/sync-kube-manifest-with-kubeadm and the
merge is not possible until all related w/* branches are deleted or updated.

Please use the reset command to have me reinitialize these branches.

@TeddyAndrieux (Collaborator, Author)

/reset

@bert-e (Contributor) commented Mar 1, 2021

Reset complete

I have successfully deleted this pull request's integration branches.

@bert-e (Contributor) commented Mar 1, 2021

Integration data created

I have created the integration data for the additional destination branches.

The following branches will NOT be impacted:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6
  • development/2.7

You can set option create_pull_requests if you need me to create
integration pull requests in addition to integration branches, with:

@bert-e create_pull_requests

@bert-e (Contributor) commented Mar 1, 2021

Waiting for approval

The following approvals are needed before I can proceed with the merge:

  • the author

  • one peer

Peer approvals must include at least 1 approval from the following list:

@TeddyAndrieux (Collaborator, Author)

/approve

@bert-e (Contributor) commented Mar 1, 2021

In the queue

The changeset has received all authorizations and has been added to the
relevant queue(s). The queue(s) will be merged in the target development
branch(es) as soon as builds have passed.

The changeset will be merged in:

  • ✔️ development/2.8

  • ✔️ development/2.9

The following branches will NOT be impacted:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6
  • development/2.7

There is no action required on your side. You will be notified here once
the changeset has been merged. In the unlikely event that the changeset
fails permanently on the queue, a member of the admin team will
contact you to help resolve the matter.

IMPORTANT

Please do not attempt to modify this pull request.

  • Any commit you add on the source branch will trigger a new cycle after the
    current queue is merged.
  • Any commit you add on one of the integration branches will be lost.

If you need this pull request to be removed from the queue, please contact a
member of the admin team now.

The following options are set: approve

@bert-e (Contributor) commented Mar 1, 2021

I have successfully merged the changeset of this pull request
into the targeted development branches:

  • ✔️ development/2.8

  • ✔️ development/2.9

The following branches have NOT changed:

  • development/1.0
  • development/1.1
  • development/1.2
  • development/1.3
  • development/2.0
  • development/2.1
  • development/2.2
  • development/2.3
  • development/2.4
  • development/2.5
  • development/2.6
  • development/2.7

Please check the status of the associated issue None.

Goodbye teddyandrieux.

@bert-e bert-e merged commit 3ce1023 into development/2.8 Mar 1, 2021
@bert-e bert-e deleted the improvement/sync-kube-manifest-with-kubeadm branch March 1, 2021 12:16