Expose anti-affinity for coredns deployment #3574
Labels
complexity:medium (Something that requires one or few days to fix)
kind:enhancement (New feature or request)
severity:major (Major impact on live deployments, e.g. some non-critical feature is not working at all)
topic:deployment (Bugs in or enhancements to deployment stages)
topic:networking (Networking-related issues)
Comments
TeddyAndrieux added the kind:enhancement, topic:networking, topic:deployment, complexity:medium, and severity:major labels on Oct 21, 2021
We need to do the same for the Control Plane Ingress Controller deployment (when we use MetalLB) and for Dex.
TeddyAndrieux added a commit that referenced this issue on Oct 26, 2021:
Add the ability for the user to change the podAntiAffinity for the CoreDNS deployment, and also set a default soft podAntiAffinity on hostname so that, when possible, each CoreDNS replica sits on a different node by default. Trigger a rollout restart of the CoreDNS deployment after deploying a new master node in order to "apply" the soft anti-affinity if possible.
Fixes: #3574
TeddyAndrieux added a commit that referenced this issue on Oct 26, 2021:
Add the ability for the user to change the podAntiAffinity for the CoreDNS deployment, and also set a default soft podAntiAffinity on hostname so that, when possible, each CoreDNS replica sits on a different node by default. Trigger a rollout restart of the CoreDNS deployment after deploying a new infra node in order to "apply" the soft anti-affinity if possible.
Fixes: #3574
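For reference, a soft podAntiAffinity on hostname in a Deployment's Pod template looks roughly like the sketch below; the label selector and weight are assumptions for illustration, not necessarily what the commit uses.
```
# Sketch only: label selector and weight are assumed, not taken from the commit
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    k8s-app: kube-dns   # assumed CoreDNS Pod label
                topologyKey: kubernetes.io/hostname
```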
TeddyAndrieux added a commit that referenced this issue on Nov 24, 2021:
Since we want to expose a really simple way to set up Pod affinity, we do not use the exact same syntax as what needs to be provided in the Kubernetes objects; this execution module is added to convert from the "simple syntax" to the "Kubernetes syntax".
See: #3574
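As an illustration of the intent only (the "simple syntax" key names below are assumptions, not the module's actual schema), such a conversion could turn a short user-facing entry into the full Kubernetes affinity stanza:
```
# Hypothetical "simple syntax" exposed to the user:
affinity:
  podAntiAffinity:
    soft:
      - topologyKey: kubernetes.io/hostname

# Possible "Kubernetes syntax" rendered from it (label selector assumed):
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: coredns
          topologyKey: kubernetes.io/hostname
```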
TeddyAndrieux added a commit that referenced this issue on Nov 25, 2021:
This commit adds the ability to configure `podAntiAffinity` for Dex from the CSC. It also patches the Dex helm chart to add support for `strategy` on the Dex deployment, as the default one does not make sense for our Dex deployment (see dexidp/helm-charts#66). Render the chart to a Salt state using:
```
./charts/render.py dex charts/dex.yaml charts/dex \
  --namespace metalk8s-auth \
  --service-config dex metalk8s-dex-config \
  metalk8s/addons/dex/config/dex.yaml.j2 metalk8s-auth \
  > salt/metalk8s/addons/dex/deployed/chart.sls
```
See: #3574
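A CSC snippet enabling such an anti-affinity for Dex might look like the sketch below; the apiVersion and kind follow the usual MetalK8s ClusterAndServiceConfiguration format, but the field layout under `spec` is an assumption, not the exact schema introduced by the commit.
```
# Sketch only: field layout under spec is hypothetical
apiVersion: addons.metalk8s.scality.com/v1alpha2
kind: DexConfig
spec:
  deployment:
    affinity:
      podAntiAffinity:
        soft:
          - topologyKey: kubernetes.io/hostname
```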
TeddyAndrieux added a commit that referenced this issue on Nov 29, 2021:
This commit adds the ability to configure `podAntiAffinity` for the control plane ingress controller from the bootstrap config. NOTE: This is only used when MetalLB is enabled, as otherwise the control plane ingress controller is deployed as a DaemonSet. Render the chart to a Salt state using:
```
./charts/render.py ingress-nginx-control-plane --namespace metalk8s-ingress \
  charts/ingress-nginx-control-plane-deployment.yaml charts/ingress-nginx/ \
  > salt/metalk8s/addons/nginx-ingress-control-plane/deployed/chart-deployment.sls
```
See: #3574
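For illustration, such a bootstrap config entry could look like the sketch below; the overall BootstrapConfiguration layout follows the documented format, but the `controller.affinity` field exposing the anti-affinity is an assumption, not the exact key added by the commit.
```
apiVersion: metalk8s.scality.com/v1alpha3
kind: BootstrapConfiguration
networks:
  controlPlane:
    ingress:
      ip: 192.168.1.100        # example virtual IP
      metalLB:
        enabled: true
      # Hypothetical field exposing the controller's anti-affinity
      controller:
        affinity:
          podAntiAffinity:
            soft:
              - topologyKey: kubernetes.io/hostname
```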
Component: 'kubernetes', 'dns'
Why this is needed:
Today we deploy CoreDNS with 2 replicas and no affinity, which means that all CoreDNS Pods may end up on the same node. In that case there is no HA for the Kubernetes DNS, which likely has an impact on the workloads running in the cluster.
What should be done:
In order to have HA for CoreDNS we need to make sure the CoreDNS replicas sit in different "failure zones". This cannot be determined automatically and needs input from the user, so we need to expose a way to change it.
We likely also want to expose the number of CoreDNS replicas.
Implementation proposal (strongly recommended):
Add an entry in the bootstrap config for CoreDNS so that the user can input the desired affinity.
Suggested default (TBD): soft anti-affinity on hostname, sketched below.
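A minimal sketch of such a bootstrap config entry, assuming hypothetical key names (the actual schema is TBD):
```
# Sketch only: key names are assumptions, not a final schema
coreDNS:
  replicas: 2
  affinity:
    podAntiAffinity:
      soft:
        - topologyKey: kubernetes.io/hostname
```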
Since CoreDNS Pods will not be re-scheduled during the life of the platform, in order to be sure our "soft" anti-affinity gets applied when possible, we will need to trigger a "rollout restart" of CoreDNS when the MetalK8s cluster is expanded with a new master node.
Check the Kubernetes docs for more information about affinity and anti-affinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity