This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Configure kube-dns to expose metrics #2566

Merged (1 commit, Jun 14, 2018)
@@ -47,6 +47,8 @@ spec:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
prometheus.io/scrape: "true"
Member

Should these annotations be on the service rather than the deployment?

Contributor Author

Each pod has its own set of stats, so Prometheus should scrape each pod, not the service.

Contributor

I believe the recommended approach is to annotate the service (though I cannot find a black-and-white statement from the team about it right now). The pods behind the service will still get scraped if you use `role: endpoints` in the `kubernetes_sd_configs`.
The exposed metrics will be the same either way, but a bunch of extra labels become available this way.

Also, note that this annotation is not really standardized, even though it is common in examples. We do not use it at all; we use ServiceMonitors from the prometheus-operator instead. But perhaps it is a reasonable default to ship anyway? Is it set on other kube* components with metrics in acs-engine?
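For context, an annotation-driven scrape job with `role: endpoints` might look like the sketch below. The job name and the exact relabeling rules are illustrative, not part of this PR; only the `prometheus.io/scrape` and `prometheus.io/port` annotation names come from the change itself.

```yaml
# Illustrative Prometheus scrape job: discover service endpoints and keep
# only targets whose backing service carries prometheus.io/scrape: "true",
# rewriting the scrape address to use the prometheus.io/port annotation.
scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```

With `role: endpoints`, each pod backing the service becomes its own target, so per-pod stats are still scraped individually even though the annotation lives on the service.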

prometheus.io/port: "10055"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
@@ -78,6 +80,9 @@ spec:
- "--dns-port=10053"
- "--v=2"
- "--config-dir=/kube-dns-config"
env:
- name: PROMETHEUS_PORT
value: "10055"
Member

Is this port fixed? Can you please point me at the docs for this?

Contributor Author

It's not fixed; we could pick any port that is not already in use, but this is what the Kubernetes example uses:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/kube-dns.yaml.base#L132-L134

image: <kubernetesKubeDNSSpec>
livenessProbe:
failureThreshold: 5
@@ -96,6 +101,9 @@ spec:
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
readinessProbe:
httpGet:
path: "/readiness"
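What the new `metrics` port on 10055 serves is the standard Prometheus text exposition format. As a minimal sketch of what a consumer sees, here is a small parser over an illustrative payload; the metric names in the sample are made up for the example and are not actual kube-dns output.

```python
# Minimal sketch: parse Prometheus text-format metrics, such as those a
# kube-dns pod would serve on :10055/metrics. Sample payload is illustrative.

def parse_metrics(text):
    """Return {metric_name: [(labels_str, value), ...]} from exposition text.

    Simplification: assumes label values contain no spaces, which is fine
    for a sketch but not for a production parser.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks, HELP and TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        if "{" in name_part:
            name, labels = name_part.split("{", 1)
            labels = "{" + labels
        else:
            name, labels = name_part, ""
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

sample = """\
# HELP dns_requests_total Illustrative counter.
# TYPE dns_requests_total counter
dns_requests_total{proto="udp"} 1024
dns_requests_total{proto="tcp"} 12
process_open_fds 8
"""

parsed = parse_metrics(sample)
print(parsed["dns_requests_total"])
# [('{proto="udp"}', 1024.0), ('{proto="tcp"}', 12.0)]
```

Because each pod serves its own counters, scraping the pods individually (rather than the load-balanced service address) is what keeps these per-pod series distinct.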