
Add prometheus scrape 'false' annotation to headless collector service #348

Merged
objectiser merged 1 commit into jaegertracing:master from disablescrapeheadless on Apr 3, 2019

Conversation

@objectiser (Contributor)

Resolves #347

@jkandasa I haven't been able to reproduce your issue, although it may depend on the Prometheus config being used. I've added an annotation to the headless service, which should prevent it from being scraped. Could you verify by testing against the image at https://hub.docker.com/r/objectiser/jaeger-operator to see whether it fixes the problem?

Signed-off-by: Gary Brown [email protected]
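
For illustration, a minimal sketch of what the annotated headless service could look like once rendered (the service name comes from the linked issue #347; the selector and port are placeholders, not the operator's exact output):

apiVersion: v1
kind: Service
metadata:
  name: jaeger-collector-headless
  annotations:
    # Opt this service out of annotation-driven Prometheus scraping
    prometheus.io/scrape: "false"
spec:
  clusterIP: None        # headless: no virtual IP, DNS resolves directly to pod IPs
  selector:
    app: jaeger          # placeholder selector
  ports:
  - name: collector
    port: 14267          # placeholder port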

@jpkrohling (Contributor)

This change is Reviewable

@codecov (bot) commented Mar 26, 2019

Codecov Report

Merging #348 into master will increase coverage by <.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #348      +/-   ##
==========================================
+ Coverage   89.88%   89.89%   +<.01%     
==========================================
  Files          64       64              
  Lines        3035     3037       +2     
==========================================
+ Hits         2728     2730       +2     
  Misses        207      207              
  Partials      100      100
Impacted Files             Coverage Δ
pkg/service/collector.go   100% <100%> (ø) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3b6e569...d6aca3f.

@jpkrohling (Contributor) left a comment

:lgtm:

Reviewed 1 of 1 files at r1.
Reviewable status: :shipit: complete! all files reviewed, all discussions resolved

@jkandasa (Member) commented Mar 28, 2019

@objectiser I tried https://hub.docker.com/r/objectiser/jaeger-operator, but I still see the headless service in my Prometheus server's discovery list.

With this change, I do see the annotation in the service YAML:

prometheus.io/scrape: false

Maybe I have misconfigured something in my Prometheus? Here is my relabel configuration:

 relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    separator: ;
    regex: "true"
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    separator: ;
    regex: (https?)
    target_label: __scheme__
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: $1
    action: replace
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    separator: ;
    regex: (.+)(?::\d+);(\d+)
    target_label: __address__
    replacement: $1:$2
    action: replace
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_username]
    separator: ;
    regex: (.+)
    target_label: __basic_auth_username__
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_password]
    separator: ;
    regex: (.+)
    target_label: __basic_auth_password__
    replacement: $1
    action: replace
  - separator: ;
    regex: __meta_kubernetes_service_label_(.+)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service_name
    replacement: $1
    action: replace
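
As an aside, the address rewrite rule above can be traced with illustrative values:

# __address__                   = 10.1.2.3:8080          (illustrative)
# prometheus.io/port annotation = 14268                  (illustrative)
# source labels joined with ';' -> 10.1.2.3:8080;14268
# regex (.+)(?::\d+);(\d+)      -> $1 = 10.1.2.3, $2 = 14268
# replacement $1:$2             -> __address__ becomes 10.1.2.3:14268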

@objectiser (Contributor, Author)

@jkandasa Could you include your complete Prometheus configuration? The snippet you have provided appears to relate to scraping pods, not services.
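
For reference, a hedged sketch of the kind of rule that would honor the service-level annotation: with role: endpoints discovery, the backing Service's annotations are exposed as __meta_kubernetes_service_annotation_* labels, so an explicit drop rule could exclude the headless service:

relabel_configs:
# Drop endpoints whose backing Service opts out of scraping
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
  action: drop
  regex: "false"

Without a rule keyed on the service annotation, the new prometheus.io/scrape: "false" annotation would have no effect on target selection.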

@jkandasa (Member)

@objectiser
The command used to deploy:

oc new-app -f prometheus.yaml --param NAMESPACE=jaeger-pipeline --param IMAGE_PROMETHEUS=quay.io/openshift/origin-prometheus:v3.11

The complete Prometheus template file:

apiVersion: template.openshift.io/v1
kind: Template

metadata:
  name: prometheus
  annotations:
    "openshift.io/display-name": Prometheus
    description: |
      A monitoring solution to get metrics and alerts for applications running in a namespace.
    iconClass: icon-cogs
    tags: "monitoring,prometheus,time-series"

parameters:
- description: The namespace where to deploy the Prometheus service.
  name: NAMESPACE
  required: true
- description: The Prometheus service account (must be created beforehand).
  name: PROMETHEUS_SA
  value: prometheus
- description: The Prometheus configuration (either app-monitoring or full-monitoring).
  name: PROMETHEUS_CONFIG
  value: app-monitoring
- description: The location of the prometheus image.
  name: IMAGE_PROMETHEUS
  value: openshift/prometheus:v2.0.0

objects:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: prometheus
    name: prometheus
    namespace: "${NAMESPACE}"
  spec:
    ports:
    - name: prometheus
      port: 80
      protocol: TCP
      targetPort: 9090
    selector:
      app: prometheus

- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    labels:
      app: prometheus
    name: prometheus
    namespace: "${NAMESPACE}"
  spec:
    to:
      name: prometheus

- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      app: prometheus
    name: prometheus
    namespace: "${NAMESPACE}"
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: prometheus
    template:
      metadata:
        labels:
          app: prometheus
        name: prometheus
      spec:
        serviceAccountName: "${PROMETHEUS_SA}"
        containers:
        - name: prometheus
          args:
          - --storage.tsdb.retention=6h
          - --config.file=/etc/prometheus/prometheus.yml
          - --web.enable-admin-api
          image: ${IMAGE_PROMETHEUS}
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - mountPath: /etc/prometheus
            name: config-volume
          - mountPath: /prometheus
            name: data-volume
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 10
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 10

        restartPolicy: Always
        volumes:
        - configMap:
            defaultMode: 420
            name: "${PROMETHEUS_CONFIG}"
          name: config-volume
        - emptyDir: {}
          name: data-volume

- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: prometheus
    name: app-monitoring
    namespace: "${NAMESPACE}"
  data:
    prometheus.rules: |
      groups:
      - name: example-rules
        interval: 30s # defaults to global interval
        rules:
        - alert: Target Down
          expr: up == 0
          for: 1m
          annotations:
            severity: "Critical"
            message: "Instance {{ $labels.instance }} for job {{ $labels.job }} is down"

    prometheus.yml: |
      rule_files:
        - 'prometheus.rules'

      scrape_configs:
      - job_name: 'prometheus'
        static_configs:
        - targets: ['localhost:9090']

      - job_name: 'endpoints'
        tls_config:
          insecure_skip_verify: true

        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
            - "${NAMESPACE}"

        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: (.+)(?::\d+);(\d+)
          replacement: $1:$2
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_username]
          action: replace
          target_label: __basic_auth_username__
          regex: (.+)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_password]
          action: replace
          target_label: __basic_auth_password__
          regex: (.+)
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: service_name

- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: prometheus
    name: full-monitoring
    namespace: "${NAMESPACE}"
  data:
    prometheus.rules: |
      groups:
      - name: example-rules
        interval: 30s # defaults to global interval
        rules:
        - alert: Target Down
          expr: up == 0
          for: 1m
          annotations:
            severity: "Critical"
            message: "Instance {{ $labels.instance }} for job {{ $labels.job }} is down"

    prometheus.yml: |
      rule_files:
        - 'prometheus.rules'

      scrape_configs:
      - job_name: 'prometheus'
        static_configs:
        - targets: ['localhost:9090']

      # TODO: use the proxy API for Kubernetes >= 1.7, since the kubelet no
      # longer exposes the container metrics directly
      - job_name: 'container_metrics'

        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        # The node metrics endpoint requires a bearer token to be scraped
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        kubernetes_sd_configs:
        - role: node

        # Store only the container metrics applying to the current namespace
        metric_relabel_configs:
        - source_labels: [ namespace ]
          action: keep
          regex: "${NAMESPACE}"

      - job_name: 'app_metrics'
        tls_config:
          insecure_skip_verify: true

        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
            - "${NAMESPACE}"

        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: (.+)(?::\d+);(\d+)
          replacement: $1:$2
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_username]
          action: replace
          target_label: __basic_auth_username__
          regex: (.+)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_password]
          action: replace
          target_label: __basic_auth_password__
          regex: (.+)
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: service_name
        - source_labels: [__meta_kubernetes_pod_container_name]
          action: replace
          target_label: container_name
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: pod_name

@objectiser merged commit 306dfa4 into jaegertracing:master on Apr 3, 2019
@objectiser deleted the disablescrapeheadless branch on Apr 3, 2019 at 14:07
@objectiser (Contributor, Author)

Merged based on discussion here.

Development

Successfully merging this pull request may close these issues.

jaeger-collector-headless service creates duplicate entry in Prometheus