SRE-96: Add pipeline to send application metrics to Honeycomb #12

Closed · wants to merge 1 commit into from
26 changes: 21 additions & 5 deletions templates/opentelemetry-collector-config.yaml
@@ -11,13 +11,20 @@ data:
             endpoint: 0.0.0.0:4317
           http:
             endpoint: 0.0.0.0:55681
-      prometheus:
+      prometheus/collector-metrics:
         config:
           scrape_configs:
             - job_name: {{ .Release.Namespace }}-collector
               scrape_interval: 15s
               static_configs:
                 - targets: ["127.0.0.1:8888"]
+      prometheus/app-metrics:
+        config:
+          scrape_configs:
+            - job_name: {{ .Release.Namespace }}
+              scrape_interval: 15s
+              static_configs:
+                - targets: [""] # all the desired pods or whatever
@shelbyspees (Contributor, Author) commented on Dec 17, 2021:
I'm hoping there's some template variable that Argo will fill in for me to give me the list of all the pods in the namespace, so we can point directly to all of them. (Side note: will scraping all that overwhelm the collector?)

If not, we can add a variable to the values.yaml file in each app's Helm chart listing the pods (nodes?) we want to scrape, and then have Argo populate the config from that.

There's gonna be some variable interpolation no matter what to get the specific urls, but I'm hoping we can do it with minimal changes required in the app charts.
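For the values.yaml option, here is a minimal sketch. The appMetricsTargets key and the example target are hypothetical, and it assumes the target list ends up in the values Argo renders this chart with:

# values.yaml (hypothetical key listing scrape targets for this namespace)
appMetricsTargets:
  - "my-app.my-namespace.svc.cluster.local:9090"

# templates/opentelemetry-collector-config.yaml (receiver fragment)
prometheus/app-metrics:
  config:
    scrape_configs:
      - job_name: {{ .Release.Namespace }}
        scrape_interval: 15s
        static_configs:
          # toJson renders the Helm list as a JSON array, which is valid YAML
          - targets: {{ .Values.appMetricsTargets | toJson }}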

@shelbyspees (Contributor, Author) commented:
So this might not be possible with our Prometheus setup until this PR gets merged: open-telemetry/opentelemetry-collector-contrib#6344
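If/when pod discovery is usable here, the app-metrics scrape config could look roughly like the sketch below. Assumptions: the collector's service account is allowed to list pods, and app pods opt in via the conventional prometheus.io/scrape annotation.

prometheus/app-metrics:
  config:
    scrape_configs:
      - job_name: {{ .Release.Namespace }}
        scrape_interval: 15s
        kubernetes_sd_configs:
          # discover pods in this release's namespace instead of hard-coding targets
          - role: pod
            namespaces:
              names:
                - {{ .Release.Namespace }}
        relabel_configs:
          # keep only pods annotated prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"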

     processors:
       batch:
         timeout: 100s
@@ -48,11 +55,16 @@ data:
         headers:
           "x-honeycomb-team": "${HONEYCOMB_TEAM}"
           "x-honeycomb-dataset": "${HONEYCOMB_DATASET}"
-      otlp/metrics:
+      otlp/collector-metrics:
         endpoint: "api.honeycomb.io:443"
         headers:
           "x-honeycomb-team": "${HONEYCOMB_TEAM}"
           "x-honeycomb-dataset": "collector-metrics"
+      otlp/app-metrics:
+        endpoint: "api.honeycomb.io:443"
+        headers:
+          "x-honeycomb-team": "${HONEYCOMB_TEAM}"
+          "x-honeycomb-dataset": "${HONEYCOMB_DATASET}"
@shelbyspees (Contributor, Author) commented:
Suggested change:
-          "x-honeycomb-dataset": "${HONEYCOMB_DATASET}"
+          "x-honeycomb-dataset": "app-metrics"

We probably want to send prom data to a different dataset at first just to see what it looks like. If we do this, I'll have to create the dataset manually first, since the Honeycomb keys being used won't have dataset creation permissions.

     extensions:
       health_check:
         endpoint: 0.0.0.0:13133
@@ -66,7 +78,11 @@ data:
           receivers: [otlp]
           processors: [memory_limiter, resource, batch]
           exporters: [otlp]
-        metrics:
-          receivers: [prometheus]
+        metrics/collector:
+          receivers: [prometheus/collector-metrics]
           processors: [resource]
+          exporters: [otlp/collector-metrics]
+        metrics/app:
+          receivers: [prometheus/app-metrics]
+          processors: [resource]
-          exporters: [otlp/metrics]
+          exporters: [otlp/app-metrics]