Add prometheus scrape 'false' annotation to headless collector service #348
Conversation
Signed-off-by: Gary Brown <[email protected]>
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #348      +/-   ##
==========================================
+ Coverage   89.88%   89.89%   +<.01%
==========================================
  Files          64       64
  Lines        3035     3037       +2
==========================================
+ Hits         2728     2730       +2
  Misses        207      207
  Partials      100      100
```

Continue to review the full report at Codecov.
Reviewed 1 of 1 files at r1.
Reviewable status: complete! all files reviewed, all discussions resolved
@objectiser I tried https://hub.docker.com/r/objectiser/jaeger-operator, but I still see the headless service in my Prometheus targets. With this change, I do see the annotation in the service's YAML file.
Maybe I have misconfigured something in my Prometheus?
@jkandasa Could you include your complete Prometheus configuration? The snippet you have provided appears to relate to scraping pods, not services.
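(For reference, a minimal sketch of a relabel rule that keys on the service-level annotation rather than the pod-level one; it assumes an `endpoints`-role scrape job like the ones in the config below, where `__meta_kubernetes_service_annotation_prometheus_io_scrape` is the standard meta label Prometheus exposes for that role:)

```yaml
# Sketch only: drop any endpoints target whose backing Service is
# annotated prometheus.io/scrape: "false" (the annotation this PR adds).
relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: drop
    regex: "false"
```

The pod-annotation rules in the config below never consult this service-level annotation, which would explain why the headless service is still scraped.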
@objectiser
Complete Prometheus file:

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
name: prometheus
annotations:
"openshift.io/display-name": Prometheus
description: |
A monitoring solution to get metrics and alerts for applications running in a namespace.
iconClass: icon-cogs
tags: "monitoring,prometheus,time-series"
parameters:
- description: The namespace where to deploy the Prometheus service.
name: NAMESPACE
required: true
- description: The Prometheus service account (must be created beforehand).
name: PROMETHEUS_SA
value: prometheus
- description: The Prometheus configuration (either app-monitoring or full-monitoring).
name: PROMETHEUS_CONFIG
value: app-monitoring
- description: The location of the prometheus image.
name: IMAGE_PROMETHEUS
value: openshift/prometheus:v2.0.0
objects:
- apiVersion: v1
kind: Service
metadata:
labels:
app: prometheus
name: prometheus
namespace: "${NAMESPACE}"
spec:
ports:
- name: prometheus
port: 80
protocol: TCP
targetPort: 9090
selector:
app: prometheus
- apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
app: prometheus
name: prometheus
namespace: "${NAMESPACE}"
spec:
to:
name: prometheus
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: prometheus
name: prometheus
namespace: "${NAMESPACE}"
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
name: prometheus
spec:
serviceAccountName: "${PROMETHEUS_SA}"
containers:
- name: prometheus
args:
- --storage.tsdb.retention=6h
- --config.file=/etc/prometheus/prometheus.yml
- --web.enable-admin-api
image: ${IMAGE_PROMETHEUS}
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /etc/prometheus
name: config-volume
- mountPath: /prometheus
name: data-volume
readinessProbe:
httpGet:
path: /-/ready
port: 9090
initialDelaySeconds: 10
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
initialDelaySeconds: 10
restartPolicy: Always
volumes:
- configMap:
defaultMode: 420
name: "${PROMETHEUS_CONFIG}"
name: config-volume
- emptyDir: {}
name: data-volume
- apiVersion: v1
kind: ConfigMap
metadata:
    labels:
app: prometheus
name: app-monitoring
namespace: "${NAMESPACE}"
data:
prometheus.rules: |
groups:
- name: example-rules
interval: 30s # defaults to global interval
rules:
- alert: Target Down
expr: up == 0
for: 1m
annotations:
severity: "Critical"
message: "Instance {{ $labels.instance }} for job {{ $labels.job }} is down"
prometheus.yml: |
rule_files:
- 'prometheus.rules'
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'endpoints'
tls_config:
insecure_skip_verify: true
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- "${NAMESPACE}"
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: (.+)(?::\d+);(\d+)
replacement: $1:$2
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_username]
action: replace
target_label: __basic_auth_username__
regex: (.+)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_password]
action: replace
target_label: __basic_auth_password__
regex: (.+)
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: service_name
- apiVersion: v1
kind: ConfigMap
metadata:
    labels:
app: prometheus
name: full-monitoring
namespace: "${NAMESPACE}"
data:
prometheus.rules: |
groups:
- name: example-rules
interval: 30s # defaults to global interval
rules:
- alert: Target Down
expr: up == 0
for: 1m
annotations:
severity: "Critical"
message: "Instance {{ $labels.instance }} for job {{ $labels.job }} is down"
prometheus.yml: |
rule_files:
- 'prometheus.rules'
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# TODO: use the proxy API for Kubernetes >= 1.7 since kubelet doesn't
# expose anymore the container metrics directly
- job_name: 'container_metrics'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# The node metrics endpoint requires a bearer token to be scraped
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
# Store only the container metrics applying to the current namespace
metric_relabel_configs:
- source_labels: [ namespace ]
action: keep
regex: "${NAMESPACE}"
- job_name: 'app_metrics'
tls_config:
insecure_skip_verify: true
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- "${NAMESPACE}"
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: (.+)(?::\d+);(\d+)
replacement: $1:$2
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_username]
action: replace
target_label: __basic_auth_username__
regex: (.+)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_password]
action: replace
target_label: __basic_auth_password__
regex: (.+)
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: service_name
- source_labels: [__meta_kubernetes_pod_container_name]
action: replace
target_label: container_name
- source_labels: [__meta_kubernetes_pod_name]
action: replace
        target_label: pod_name
```
Merged based on discussion here.
Resolves #347
@jkandasa Haven't been able to reproduce your issue, although it may depend upon the Prometheus config being used. I've added an annotation to the headless service, which should prevent it from being scraped. Could you verify by testing against https://hub.docker.com/r/objectiser/jaeger-operator to see whether it fixes the problem?
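(For illustration, a hedged sketch of what the annotated headless service could look like; the service name, selector, and port here are hypothetical, and only the `prometheus.io/scrape: "false"` annotation is the point:)

```yaml
# Hypothetical headless collector Service; only the annotation reflects this PR.
apiVersion: v1
kind: Service
metadata:
  name: my-jaeger-collector-headless  # hypothetical name
  annotations:
    prometheus.io/scrape: "false"     # tells annotation-driven scrape configs to skip this service
spec:
  clusterIP: None                     # headless: DNS resolves directly to pod IPs
  selector:
    app: jaeger-collector             # hypothetical selector
  ports:
    - name: collector
      port: 14267                     # hypothetical port
```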
Signed-off-by: Gary Brown <[email protected]>