
Enable apiserver metrics #504

Closed
derekperkins opened this issue Jul 6, 2018 · 10 comments

@derekperkins
Contributor

When trying to scrape metrics using the Prometheus Operator, the apiserver isn't reporting any metrics. See prometheus-operator/prometheus-operator#1522

There's a related issue for kube-dns metrics here: #345
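
A quick way to confirm the symptom is to ask the apiserver for its metrics endpoint directly. A minimal check, assuming kubectl is configured against the AKS cluster and your user has RBAC access to the /metrics non-resource URL:

# Query the apiserver's /metrics endpoint through kubectl's raw API access
kubectl get --raw /metrics | head -n 20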

@weinong
Contributor

weinong commented Aug 7, 2018

please provide your feature request at https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks

@weinong weinong closed this as completed Aug 7, 2018
@andig

andig commented Nov 20, 2018

For the sake of other readers, this patch file seems to be working for me:

# workaround for https://github.com/Azure/AKS/issues/504
spec:
  endpoints:
  - honorLabels: true
    interval: 30s
    port: http-metrics
    scheme: http
  jobLabel: component
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      component: apiserver
      provider: kubernetes

@rnkhouse

@andig How do I apply this patch?

@andig

andig commented Nov 26, 2018

@rnkhouse you can patch the servicemonitor like this:

kubectl patch servicemonitor prometheus-prometheus-oper-apiserver --patch "$(cat prometheus/kubelet-apiserver-patch.yaml)" --type=merge
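
To confirm the patch took effect, you can read the ServiceMonitor back; a quick check, assuming the same release name as in the patch command:

kubectl get servicemonitor prometheus-prometheus-oper-apiserver -o yaml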

@iahmad94

iahmad94 commented Dec 5, 2018

@andig this did not work as expected for me. The patch just removed the apiserver from the targets list in Prometheus, and the alerts for the API server being down are still firing.

Also something to note: when you install prometheus-operator and kube-prometheus directly from CoreOS using Helm, the ServiceMonitor for the API server is called kube-prometheus-exporter-kubernetes. That is the resource I applied the patch to.

Also, here is the link for tracking this issue as an AKS feature request; currently it just points back to this page, but in the future the answer may be there as well:
https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks/suggestions/35875957-enable-apiserver-metrics

@andig

andig commented Dec 5, 2018

It's working for me, but I installed from helm/stable, not from CoreOS.

@ams0

ams0 commented Dec 5, 2018

For the operator, I tried a relabeling config like this:

    relabelings:
    - action: replace
      regex: (.*)
      replacement: kubernetes.default.svc:443
      separator: ;
      source_labels: __address__
      target_label: instance

but I can't get it to work; Prometheus doesn't restart because:

level=error ts=2018-11-28T09:20:18.531213221Z caller=main.go:617 err="error loading config from \"/etc/prometheus/config_out/prometheus.env.yaml\": couldn't load configuration (--config.file=\"/etc/prometheus/config_out/prometheus.env.yaml\"): parsing YAML file /etc/prometheus/config_out/prometheus.env.yaml: relabel configuration for replace action requires 'target_label' value"

Anyone managed to get it working? It's driving me nuts!

@reddare

reddare commented Jan 24, 2019

@ams0 use sourceLabels and targetLabel. The ServiceMonitor CRD expects camelCase field names; the operator ignores snake_case keys like source_labels and target_label, so the generated Prometheus config ends up without a target_label, which is exactly what the error says.
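
For reference, here is ams0's snippet rewritten with the camelCase keys the ServiceMonitor CRD expects; a sketch, not tested on AKS:

    relabelings:
    - action: replace
      regex: (.*)
      replacement: kubernetes.default.svc:443
      separator: ;
      sourceLabels:
      - __address__
      targetLabel: instance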

@bdschaap

This seemed to work for my Prometheus scrape config.

- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  scheme: https
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: kubernetes;https
  - target_label: __address__
    replacement: kubernetes.default.svc:443
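
To sanity-check a hand-written job like this before loading it, promtool (shipped with Prometheus) can validate the config; a sketch, assuming the job above is embedded in a complete prometheus.yml:

# Validate the whole Prometheus configuration file, including scrape jobs
promtool check config prometheus.yml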

@ghost ghost locked as resolved and limited conversation to collaborators Aug 5, 2020
@JohnRusk
Member

JohnRusk commented Jan 17, 2023

In addition to bdschaap's comment: for my installation (Prometheus running as a pod inside my cluster), I also needed to include this in my scrape config to pick up the service-account files that are automatically provisioned in the pod, in order to authenticate the TLS connection and to provide authorization credentials.

  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  authorization:
    type: Bearer
    credentials_file:  /var/run/secrets/kubernetes.io/serviceaccount/token

I.e., I added those lines to bdschaap's config.
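
Putting the two comments together, the complete job would look roughly like this (a sketch merging bdschaap's job with the additions above; it assumes Prometheus runs in-cluster with a service account allowed to reach the apiserver):

- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  scheme: https
  # Trust the cluster CA and present the pod's service-account token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  authorization:
    type: Bearer
    credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: kubernetes;https
  - target_label: __address__
    replacement: kubernetes.default.svc:443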

Also, if you happen to be using Prometheus Operator, here's how to get bdschaap's config into your Prometheus: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md. Note that Prometheus will restart when you do that, so if you are using kubectl's port forwarding to access Prometheus's UI, you'll need to restart your port forwarding.
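
In short, that doc has you put the extra scrape job into a Secret and reference it from the Prometheus custom resource. A sketch, assuming the job above is saved as prometheus-additional.yaml:

# Create a Secret holding the extra scrape config in Prometheus's namespace
kubectl create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml

# Then reference it from the Prometheus custom resource:
# spec:
#   additionalScrapeConfigs:
#     name: additional-scrape-configs
#     key: prometheus-additional.yaml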
