
Logging sampler configuration not working properly (?) #1061

Closed

je-al opened this issue Jun 1, 2020 · 3 comments
Labels
bug (Something isn't working) · help wanted (Good issue for contributors to OpenTelemetry Service to pick up) · priority:p3 (Lowest)

Comments


je-al commented Jun 1, 2020

Describe the bug
The logging exporter's sampling configuration does not seem to be applied.

What did you expect to see?
A constant logging rate

What did you see instead?
Logging sampling isn't working as intended; we see approximately 1,000 log lines per second.

What version did you use?
otel/opentelemetry-collector-contrib:0.3.0

What config did you use?

exporters:
  logging:
    loglevel: info
    sampling_initial: 5
    sampling_thereafter: 1000
...

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [sapm]
      processors: [memory_limiter, batch, attributes/newenvironment, queued_retry]
      exporters: [logging, sapm]

Additional context
I'm not sure whether this is a problem with our configuration, but dropped traces from the queued_retry processor are flooding our logs.
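
For context: sampling_initial and sampling_thereafter appear to map to zap-style log sampling (which the option names suggest), i.e. per one-second tick the first sampling_initial entries for a given message are logged, and after that only every sampling_thereafter-th matching entry. A minimal annotated sketch of the configuration above, under that assumption:

exporters:
  logging:
    loglevel: info
    # Per one-second tick, log the first 5 entries for a given message...
    sampling_initial: 5
    # ...then log only every 1000th matching entry until the next tick.
    sampling_thereafter: 1000

Note that this sampling governs only the output emitted by the logging exporter itself; it does not throttle WARN messages emitted by other components such as the queued_retry processor.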

je-al added the bug label on Jun 1, 2020

je-al (Author) commented Jun 3, 2020

I've since realized that this configuration only affects the logging exporter's own output, not logging from other parts of the pipeline.
So I guess the ask would be a way to actually sample the logs coming off the queued_retry processor. Right now the only ways to avoid the flood are to add drop rules in some downstream log collector, or to raise the log level globally to ERROR (since dropped messages are logged at WARN); neither is very convenient.
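
Regarding the "raise the log level globally" workaround: recent collector versions expose the collector's own log verbosity under service::telemetry::logs, while builds around 0.3.0 configured it via command-line flags, so the exact knob may differ. A hedged sketch, assuming a version with that config section:

service:
  telemetry:
    logs:
      # Log only ERROR and above from collector components. This silences
      # the WARN lines about dropped data, but also hides every other
      # warning, so it is a coarse workaround.
      level: error

As the comment above notes, this trades away all WARN-level visibility, which is why per-component sampling would be preferable.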


jan25 commented Jun 12, 2020

Are you seeing raw trace data logged, or just one line per trace printing the number of spans it contains?

bogdandrutu added this to the GA 1.0 milestone on Aug 4, 2020
bogdandrutu added the help wanted label on Aug 4, 2020
tigrannajaryan modified the milestones: GA 1.0 → Backlog on Oct 19, 2020
MovieStoreGuy pushed a commit to atlassian-forks/opentelemetry-collector that referenced this issue Nov 11, 2021

bogdandrutu (Member) commented Oct 25, 2022

This no longer applies; I think it was fixed.

hughesjj pushed a commit to hughesjj/opentelemetry-collector that referenced this issue Apr 27, 2023
Troels51 pushed a commit to Troels51/opentelemetry-collector that referenced this issue Jul 5, 2024