Unable to run v0.2.0 in minikube. "pod has unbound PersistentVolumeClaims" #13
Labels: bug
I had to remove the affinity rules on the StatefulSet, here.
After removing that anti-affinity rule, I was able to successfully launch the Consul server & consul-ui. I'm very new to k8s, but it seems to me that the anti-affinity rules prevent scheduling when multiple Consul server agents have to land on the same node?
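For reference, a minimal sketch of the kind of rule being removed (based on the chart's server StatefulSet; the labels here are approximate, not the chart's exact template):

```yaml
# Sketch of a hard pod anti-affinity rule like the one in the server
# StatefulSet (labels approximate). "required..." forbids two server
# pods on the same node, so on a single-node minikube cluster only one
# of the default three replicas can schedule; the rest stay Pending.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: consul
            component: server
        topologyKey: kubernetes.io/hostname
```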
Yes, I also hit the same issue; it is happening because of the affinity rules. For redundancy they want to start a single Consul server on each node.
This seems like two things:
We resolved this issue by making the affinity a variable in the server StatefulSet template:

```yaml
{{- if .Values.affinity }}
affinity:
{{ tpl .Values.affinity . | indent 8 }}
{{- end }}
```

And then in the values:

```yaml
### Consul Settings
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: {{ template "consul.name" . }}
            release: "{{ .Release.Name }}"
            component: server
        topologyKey: kubernetes.io/hostname
```

This then means that in minikube you can override the hard rule with a soft (preferred) one:
```yaml
## Affinity settings; this allows us to configure and run in Minikube
affinity: |
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchExpressions:
              - key: component
                operator: In
                values:
                  - "{{ .Release.Name }}-{{ .Values.Component }}"
```
@mitchellh Is there anything I can provide to get this item resolved? Making affinity a variable allows testing on Minikube. Thanks!
Running

```
helm install --name consul --namespace=pkr -f dev-consul.yaml ./consul-helm
```

with these custom values completes successfully, but when running `kubectl get pods -n pkr` I see that `consul-server-1` & `consul-server-2` are Pending. Closer inspection of pod `consul-server-0` shows the "pod has unbound PersistentVolumeClaims" error, and it is the same for all Consul servers. Yet when running `kubectl get pvc` & `kubectl get pv`, the persistent volumes look fine:
So I don't understand the error, since the pv & pvc outputs seem fine to me.
How should I debug this further? I've tried deleting the minikube cluster and starting over from scratch, but I get the same result every time.
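A debugging sketch (using the pkr namespace from above; pod names will vary): `kubectl describe` on a Pending pod usually names the real blocker in its Events section, which, per the comments above, turned out to be the anti-affinity rule rather than the PVCs.

```sh
# The Events at the bottom of the describe output show why the pod is
# Pending (e.g. an anti-affinity mismatch or an unbound PVC).
kubectl describe pod consul-server-1 -n pkr

# Confirm the claims are Bound and the volumes exist.
kubectl get pvc -n pkr
kubectl get pv

# Recent scheduling events for the namespace, oldest first.
kubectl get events -n pkr --sort-by=.metadata.creationTimestamp
```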