Expose kube-scheduler, kube-proxy and kube-controller metrics endpoints #3619
Comments
That was necessary because we don't have a facility to pass CLI flags through to the embedded etcd. For the Kubernetes components, you can already just pass flags through with the per-component arg options, like the configs shown later in this thread.
@brandond I am probably mis-reading the code here, but it looks like it is hardcoded (k3s/pkg/daemons/control/server.go, lines 134 to 135 at 238dc20). Will setting the options you described override this?
Yes. If you look a few lines down, you can see where the user-provided args are used to update the args map before it is flattened into the args slice. Since the user args come last, they are preferred over the defaults we provide (k3s/pkg/daemons/control/server.go, line 142 at 238dc20).
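To illustrate the override, a minimal k3s config.yaml sketch (the flag name is the upstream kube-controller-manager flag; wanting it reachable beyond localhost is an assumption for the example):

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
kube-controller-manager-arg:
  # user-provided value; takes precedence over the 127.0.0.1 default that k3s sets
  - "bind-address=0.0.0.0"
```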
Thanks @brandond 🙏🏼

kube-prometheus-stack helm values:

```yaml
kubeApiServer:
  enabled: true
kubeControllerManager:
  enabled: true
  endpoints:
    - 192.168.42.10
    - 192.168.42.11
    - 192.168.42.12
kubeScheduler:
  enabled: true
  endpoints:
    - 192.168.42.10
    - 192.168.42.11
    - 192.168.42.12
kubeProxy:
  enabled: true
  endpoints:
    - 192.168.42.10
    - 192.168.42.11
    - 192.168.42.12
kubeEtcd:
  enabled: true
  endpoints:
    - 192.168.42.10
    - 192.168.42.11
    - 192.168.42.12
  service:
    enabled: true
    port: 2381
    targetPort: 2381
```

k3s controllers settings:

```yaml
kube-controller-manager-arg:
  - "address=0.0.0.0"
  - "bind-address=0.0.0.0"
kube-proxy-arg:
  - "metrics-bind-address=0.0.0.0"
kube-scheduler-arg:
  - "address=0.0.0.0"
  - "bind-address=0.0.0.0"
etcd-expose-metrics: true
```

I can also verify the Grafana dashboards are populated :D
For anyone stumbling upon the same issue (because it shows up on the first page of a Google search): on newer versions you also need to switch the kubeControllerManager and kubeScheduler scrapes to HTTPS, because those components now force HTTPS. The ports have changed as well, so my config looks roughly like the sketch below. Additionally, "address=0.0.0.0" can be dropped because it is deprecated now, see https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/. Verified on kube-prometheus-stack 20.0.1 and k3s 1.22.3.
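A sketch of the chart values under those newer defaults. The serviceMonitor https/insecureSkipVerify fields and the secure ports (10257 for kube-controller-manager, 10259 for kube-scheduler) come from the upstream chart and components, but treat the exact layout as an assumption and verify it against your chart version:

```yaml
kubeControllerManager:
  enabled: true
  endpoints:
    - 192.168.42.10   # example control-plane node IP
  service:
    port: 10257
    targetPort: 10257
  serviceMonitor:
    https: true
    insecureSkipVerify: true
kubeScheduler:
  enabled: true
  endpoints:
    - 192.168.42.10
  service:
    port: 10259
    targetPort: 10259
  serviceMonitor:
    https: true
    insecureSkipVerify: true
```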
@onedr0p: how exactly do I set the k3s controller settings on my master nodes? Not during installation, but in a running environment. From your and @rlex's comments I understand that the k3s configuration needs to look like this, correct?

```yaml
kube-controller-manager-arg:
  - "bind-address=0.0.0.0"
kube-proxy-arg:
  - "metrics-bind-address=0.0.0.0"
kube-scheduler-arg:
  - "bind-address=0.0.0.0"
etcd-expose-metrics: true
```
Yes, but on k3s 1.22 some defaults in kube-prometheus-stack need to be changed, as described above.
The changes in the kube-prometheus-stack seem clear to me. I am/was struggling with the k3s config.
It depends on how you installed k3s; you need to tell k3s to look for the config file.
You don't need to tell k3s to look for config.yaml if you place it at the default location, /etc/rancher/k3s/config.yaml.
Worked like a charm. Thank you guys!
@onedr0p does this solution hit the problem explained in the issue below? Thank you
@Jojoooo1 did you set up a single-node or multi-node cluster?
Thanks! I actually had a single node!
Strictly speaking, single-node k3s can have etcd, but only if you added the cluster-init parameter to the k3s args / config / env.
Where can I edit this file for my k3s controller manager?
https://rancher.com/docs/k3s/latest/en/installation/install-options/#configuration-file
Thanks man. Anyway, I don't have that config file; I only have /etc/rancher/k3s/k3s.yaml, /etc/systemd/system/k3s.service, and some files in /var/lib/rancher/k3s. Is it possible to create it manually, or do I need to upgrade my k3s?
@rthamrin the config file is not installed by default; you need to create it manually.
Enabled kubeControllerManager, kubeScheduler and kubeEtcd; see onedr0p/home-ops#2378 and k3s-io/k3s#3619.
I had the same issue with a k3s cluster and tried following the above solution, but after adding endpoints to the Prometheus operator values file, the Helm deployment would fail with an error. I found a working solution at this page: https://picluster.ricsanfre.com/docs/prometheus/#k3s-components-monitoring. Leaving it here in case anyone runs into the same issue with the Helm chart.
@macrokernel this error means Helm does not have ownership of those resources; try re-installing KPS, or add the Helm annotations to the existing resources it is complaining about.
@onedr0p I tried completely uninstalling KPS by removing the monitoring namespace where the helm chart was installed and installing it from scratch, but this did not help. I wish I knew which annotations it is missing and how to add them ;)
The error specifically says endpoints in the kube-system namespace.
@onedr0p thanks for guiding me through it :) Adding the Helm ownership annotations, and similar ones for the other components, did the trick; the Helm chart update was successful after the modifications. Yet to check if Prometheus is getting the data. UPDATE: still figuring out where the annotations must be placed inside the values file.
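For reference, a sketch of the standard Helm ownership metadata that lets Helm adopt an existing object (the release name and namespace below are assumptions; use the ones from your install):

```yaml
# added to an existing object, e.g. an Endpoints resource in kube-system
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: prometheus-stack    # assumed release name
    meta.helm.sh/release-namespace: monitoring     # assumed release namespace
```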
@onedr0p, could you please give another hint? I tried adding the annotations all over the values file to no avail.
Don't add Helm annotations in those values; delete the specific endpoint(s) in the kube-system namespace and redeploy the chart.
@onedr0p, I removed the operator and everything Prometheus-related in the kube-system namespace, then reinstalled the operator Helm chart. The Helm install went through without errors; however, the prometheus-stack-kube-prom-operator pod is in a CrashLoopBackOff state due to a port-in-use error, and the port is indeed in use on the node. I also tried the removal procedure from https://github.com/prometheus-operator/prometheus-operator#removal to make sure there was nothing left over from the previous attempts, but it did not help. UPDATE: after uninstalling the chart and reinstalling it with a modified command, it seems the issue was with the webhooks - I had to disable them.
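If it helps anyone, disabling the operator's admission webhooks in the chart values looks roughly like this (a sketch; key names are from the kube-prometheus-stack values, so verify them against your chart version):

```yaml
prometheusOperator:
  admissionWebhooks:
    enabled: false
  tls:
    enabled: false
```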
I spent a couple of days figuring out how to make the default kube-prometheus-stack dashboards work with k3s, so I am leaving my findings here.

Firstly, because k3s runs everything in one process, each metrics endpoint exposes the metrics of all components, so scraping every component endpoint separately collects the same series several times.

Now, fixing this properly is a bit difficult. All default Grafana charts filter data by job name, i.e. the kube-proxy dashboard only shows series whose job label is kube-proxy, while with k3s those metrics are scraped under a different job (e.g. kubelet). The following works around that:

```yaml
kubelet:
  serviceMonitor:
    metricRelabelings:
      # k3s exposes all metrics on all endpoints, relabel jobs that belong to other components
      - sourceLabels: [__name__]
        regex: "scheduler_(.+)"
        targetLabel: "job"
        replacement: "kube-scheduler"
      - sourceLabels: [__name__]
        regex: "kubeproxy_(.+)"
        targetLabel: "job"
        replacement: "kube-proxy"
```

This simply sets the job label to kube-scheduler or kube-proxy for metrics whose names identify the component. But there is another problem: the instance variable in the Grafana charts is derived from labels that this relabeling does not touch, so instance filtering still points at the kubelet targets. There are also other metrics which are shared between components and cannot be attributed to a single component by name. Also keep in mind that the default scrape configuration still collects the duplicated series unless you disable the other component scrapes. It is really unfortunate that k3s makes it so complicated to use kube-prometheus-stack.
@chemicstry thanks for the details. I suppose we could re-open this issue, but I am not sure it is something the k3s maintainers are willing to "fix". Ideally this should all work out of the box with kube-prometheus-stack. @brandond any comment on this?
There isn't really anything we can fix on our side. The Prometheus Go libraries use a global metrics configuration, so any metrics registered by any component in a process are exposed by all metrics listeners. There's no way to bind specific metrics to a specific metrics endpoint when they're all running in the same process. A core efficiency of K3s is that we run all the components in the same process, and we're not planning on changing that.
Has anyone here found a good solution? What do you think about this? portefaix/portefaix-kubernetes#4682 portefaix/portefaix-kubernetes@dc767bd#diff-725c569b96f4a66ed07e1a4d1a5d8d24b3a500f1a1dae5b80444a2109ce94c17
If others come here by googling, I have found what seems to be a good solution.
@mrclrchtr if I understand this correctly, you are still going to have duplicate metrics, which can lead to insanely high memory usage with Prometheus. I recently switched my cluster from k3s to Talos and saw 2-3GB less memory usage per Prometheus instance, since Talos exports these metrics the "standard" way. The best method I found was to analyze what needs to be kept for each component and write relabelings based upon that research, for example https://github.com/onedr0p/home-ops/blob/e6716b476ff1432ddbbb7d4efa0e10d0ac4e9a66/kubernetes/storage/apps/observability/kube-prometheus-stack/app/helmrelease.yaml. However, this isn't perfect either: it will not dedupe all metric labels across the components, it is prone to error, and it won't capture any new metrics emitted by future Kubernetes updates. FWIW, even with these relabelings applied I was still seeing 3-4GB RAM usage per Prometheus instance. I would love for k3s to support a native way to handle this with kube-prometheus-stack; it's my major pain point, not obvious until one discovers this issue, and one of the major reasons I am exploring other options like Talos. 😢
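For illustration, a minimal sketch of a dedupe relabeling in that spirit (not the exact rules from the linked helmrelease; the regex and the choice of which scrape to prune are assumptions):

```yaml
kubeApiServer:
  serviceMonitor:
    metricRelabelings:
      # drop scheduler/proxy series that k3s also exposes on the apiserver endpoint
      - sourceLabels: [__name__]
        regex: "(scheduler|kubeproxy)_(.+)"
        action: drop
```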
Damn... I was hoping this would be the solution... I'll probably have to look for alternatives too... I've invested way too much time in this already. But thanks for the info! I'll have a look at Talos too.
Is your feature request related to a problem? Please describe.
Unable to monitor the following components using kube-prometheus-stack: kube-controller-manager, kube-scheduler, and kube-proxy.
Describe the solution you'd like
Add configuration options, like in PR #2750, for each component so they are not only bound to 127.0.0.1. E.g.:
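As a sketch (mirroring the configuration posted in the comments above; the flag names are the upstream component flags):

```yaml
# /etc/rancher/k3s/config.yaml
kube-controller-manager-arg:
  - "bind-address=0.0.0.0"
kube-scheduler-arg:
  - "bind-address=0.0.0.0"
kube-proxy-arg:
  - "metrics-bind-address=0.0.0.0"
```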
In the kube-prometheus-stack configuration, all you have to do is configure the following:
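As a sketch (mirroring the chart values posted in the comments above; the IP is an example server-node address):

```yaml
kubeControllerManager:
  enabled: true
  endpoints:
    - 192.168.42.10
kubeScheduler:
  enabled: true
  endpoints:
    - 192.168.42.10
kubeProxy:
  enabled: true
  endpoints:
    - 192.168.42.10
```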
Describe alternatives you've considered
Deploying rancher-pushprox to get these metrics exposed, but it's not very easy to do or user-friendly.
Additional context
I am willing to take a shot at opening a PR, as it should be pretty close to #2750.
Related to #425