This example illustrates how the placement of Pods can be influenced during scheduling. For this example we assume that you have Minikube installed and running as described here.
The simplest way to influence the scheduling process is to use a nodeSelector.
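A minimal sketch of what node-selector.yml could contain (the exact image and metadata are assumptions, only the nodeSelector part matters here):
apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0
  # Schedule this Pod only on a node carrying the label disktype=ssd
  nodeSelector:
    disktype: ssd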
Apply our simple example random-generator application with a node selector:
kubectl create -f node-selector.yml
You will notice that this Pod does not get scheduled, because the nodeSelector cannot find any node with the label disktype=ssd:
kubectl describe pod node-selector
....
Events:
Type     Reason            Age              From               Message
----     ------            ----             ----               -------
Warning  FailedScheduling  8s (x2 over 8s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.
Let’s change this:
kubectl label node minikube disktype=ssd
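You can verify the label with
kubectl get node minikube --show-labels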
kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
random-generator   1/1     Running   0          65s
Let’s now use Node affinity rules for scheduling our Pod:
kubectl create -f node-affinity.yml
Again, our Pod won't be scheduled as there is no node which fulfills the affinity rules. We can change this with
kubectl label node minikube numberCores=4
Does the Pod start up now? What if you choose 2 instead of 4 for the number of cores?
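For reference, the affinity rule in node-affinity.yml could look roughly like this sketch. It assumes the rule requires more than three cores via a Gt operator, so check the actual file before answering the questions above:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Only nodes whose numberCores label value is greater than 3 qualify
          - key: numberCores
            operator: Gt
            values: [ "3" ]
With such a Gt rule, labeling the node with numberCores=2 would not satisfy the requirement and the Pod would stay pending.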
To test Pod affinity we first need a Pod next to which our Pod should be placed. We create both Pods with
kubectl create -f pod-affinity.yml
pod/pod-affinity created
pod/confidential-high created
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          22s
pod-affinity        0/1     Pending   0          22s
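The pod-affinity Pod stays pending because of its podAffinity rule, which could look roughly like this sketch (label and topology key are taken from the commands used in this example; the actual pod-affinity.yml may differ):
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            confidential: high
        # Co-locate with Pods labeled confidential=high on a node that
        # carries a security-zone label
        topologyKey: security-zone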
"confidential-high" is a placeholder pod which has a label which is matched by our "pod-affinity" Pod. However our node doesn’t have the proper topology key yet. That can be changed with
kubectl label --overwrite node minikube security-zone=high
node/minikube labeled
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          9m39s
pod-affinity        1/1     Running   0          9m39s
For testing taints and tolerations, we first have to taint our Minikube node, so that by default no Pods are scheduled on it:
kubectl taint nodes minikube node-role.kubernetes.io/master="":NoSchedule
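You can verify that the taint has been set with
kubectl describe node minikube | grep Taints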
You can check that this taint is working by reapplying the previous pod-affinity.yml example and seeing that the confidential-high Pod is not scheduled.
kubectl delete -f pod-affinity.yml
kubectl create -f pod-affinity.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2s
pod-affinity        0/1     Pending   0          2s
But our Pod in tolerations.yml can be scheduled as it tolerates this new taint on Minikube:
kubectl create -f tolerations.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2m51s
pod-affinity        0/1     Pending   0          2m51s
tolerations         1/1     Running   0          4s
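For reference, the tolerations entry in tolerations.yml could look roughly like this sketch (key and effect match the taint we applied above; the actual file may differ):
spec:
  tolerations:
  # Tolerate the NoSchedule taint added to the Minikube node above
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
With operator: Exists the toleration matches the taint regardless of its value, which is why the tolerations Pod can be scheduled on the tainted node.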