The Prometheus operator does not patch the corresponding PVC object, which results in the behaviour you observe, i.e. no change in capacity (in fact, neither the operator nor Helm actually creates the PVC). The idea is that this should eventually be done by the StatefulSet controller (see kubernetes/enhancements#661).
For now, the PVC itself has to be patched manually, as described in the Prometheus operator's resizing volumes documentation.
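The manual procedure can be sketched roughly as follows. This is only a sketch: the StorageClass name (`standard`), namespace (`monitoring`), PVC naming pattern, and replica count are assumptions based on the default kube-prometheus object names visible in this issue (`prometheus-k8s-2`), and must be adjusted to the actual release. The commands require a live cluster, so treat this as illustrative, not a tested script.

```shell
# 1. The underlying StorageClass must permit expansion
#    (assumed name "standard" -- check with `kubectl get storageclass`).
kubectl patch storageclass standard \
  -p '{"allowVolumeExpansion": true}'

# 2. Patch each PVC of the Prometheus StatefulSet to the new size.
#    PVC names follow <claimTemplate>-<pod>, assumed here to be
#    prometheus-k8s-db-prometheus-k8s-<ordinal> with 3 replicas.
for i in 0 1 2; do
  kubectl -n monitoring patch pvc "prometheus-k8s-db-prometheus-k8s-$i" \
    -p '{"spec": {"resources": {"requests": {"storage": "140Gi"}}}}'
done

# 3. Delete the StatefulSet while orphaning its pods, so the operator
#    recreates it with the updated volumeClaimTemplate size.
kubectl -n monitoring delete statefulset prometheus-k8s --cascade=orphan
```

Note that expansion happens on the existing volumes, so Prometheus data is preserved; the orphan delete only replaces the StatefulSet object, not the pods or PVCs.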
Describe the bug
I have a GKE cluster (v1.26.5). I wanted to increase the Prometheus storage from 120Gi to 140Gi, so I updated the `values.yaml`. Here is the output of the `helm diff` after this change:

After the `helm upgrade`, here is the `kubectl get pvc`:

The PVC is still 120Gi, and if I `exec` into one pod:

```
❯ k exec -it prometheus-k8s-2 -- sh
/prometheus $ df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  54.9G      4.7G     50.2G   9% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     3.9G         0      3.9G   0% /sys/fs/cgroup
/dev/sdb                117.9G     81.3G     36.7G  69% /prometheus
[...]
```
The size of the disk is still 120Gi, but the pod did restart after the upgrade.
Did I miss something, or is this behaviour unexpected?
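For reference, the storage size in kube-prometheus-stack is set under `prometheus.prometheusSpec.storageSpec` in `values.yaml`. The reporter's actual values were not shared, but the change described would typically look like this minimal sketch (field path per the chart's values layout; the commented `storageClassName` is a placeholder):

```yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          # storageClassName: standard  # placeholder -- use the cluster's class
          resources:
            requests:
              storage: 140Gi  # previously 120Gi
```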
What's your helm version?
v3.12.1
What's your kubectl version?
v1.27.3
Which chart?
kube-prometheus-stack
What's the chart version?
48.3.1
What happened?
No response
What you expected to happen?
No response
How to reproduce it?
No response
Enter the changed values of values.yaml?
No response
Enter the command that you execute and failing/misfunctioning.
helm upgrade --install monitoring . -n monitoring -f values-ng-monitoring.yaml
It is not increasing the Prometheus disk size.
Anything else we need to know?
No response