
Refactor tail-sampling processor - Experiment and validate the scheduling mechanism #31584

Open
Tracked by #31580
jpkrohling opened this issue Mar 5, 2024 · 6 comments
Assignees: jpkrohling
Labels: processor/tailsampling (Tail sampling processor)

Comments

@jpkrohling (Member) commented Mar 5, 2024

During some performance tests, I noticed that some data appeared to linger even when nothing was expected to still be in the queue. This issue is about creating enough load on the collector to reproduce that behavior and adding the necessary telemetry to either identify when it happens or show that it doesn't.
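For reference, a minimal load-generation sketch along these lines (not the setup used in the tests above): it assumes a collector with the tail sampling processor in its traces pipeline listening for OTLP/gRPC on localhost:4317, and the endpoint, duration, and span counts are illustrative values only.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Assumed: a collector with the tail sampling processor is listening
	// for OTLP/gRPC on localhost:4317.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithEndpoint("localhost:4317"),
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	tracer := tp.Tracer("tailsampling-loadgen")

	// Emit a steady stream of short traces so the processor's internal
	// state is constantly filling and draining; once the run is over,
	// nothing should remain buffered after decision_wait has elapsed.
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		rootCtx, root := tracer.Start(ctx, "load-root")
		for i := 0; i < 10; i++ {
			_, child := tracer.Start(rootCtx, "load-child")
			child.End()
		}
		root.End()
		time.Sleep(time.Millisecond) // crude rate limit, at most ~1k traces/s
	}
}
```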

@jpkrohling jpkrohling added the processor/tailsampling Tail sampling processor label Mar 5, 2024
github-actions bot (Contributor) commented Mar 5, 2024

Pinging code owners for processor/tailsampling: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot (Contributor) commented
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label May 14, 2024
@jpkrohling jpkrohling self-assigned this May 16, 2024
@jpkrohling jpkrohling removed the Stale label May 16, 2024
@jamesmoessis (Contributor) commented

@jpkrohling when you say there's lingering data, do you have any more specifics? Is it just un-GCed memory, or entries that remain in the cache when they shouldn't? Or is it unknown, and we need to do more specific validation?

In the coming weeks I'll be testing this processor at quite a high load, looking to optimise it, so I'll post any updates here if I see anything related.

@jpkrohling (Member, Author) commented

I'm not 100% sure: what I have seen in the past, and this might already be fixed by now, is that traces would still be kept in the internal map, likely due to concurrent update issues.
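To make that failure mode concrete, here is a small, self-contained sketch of the kind of interleaving being described. It is an illustration of the race, not the processor's actual code; the names (idToTrace, traceData) and the trace ID value are hypothetical. If a late span can re-create a map entry after the decision path has deleted it, nothing in this sketch ever evicts that entry again, so it lingers.

```go
package main

import (
	"fmt"
	"sync"
)

// traceData is a stand-in for whatever state is kept per trace ID.
type traceData struct {
	spanCount int
}

func main() {
	var idToTrace sync.Map // hypothetical name for the internal map
	const traceID = "4bf92f3577b34da6a3ce929d0e0e4736"

	var wg sync.WaitGroup
	wg.Add(2)

	// Arriving spans: each batch does "create the entry if it does not
	// exist yet", then appends to it.
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			v, _ := idToTrace.LoadOrStore(traceID, &traceData{})
			v.(*traceData).spanCount++
		}
	}()

	// Decision path: policy evaluation finishes and evicts the trace.
	go func() {
		defer wg.Done()
		idToTrace.Delete(traceID)
	}()

	wg.Wait()

	// If a LoadOrStore ran after the Delete, the entry is back in the map
	// with no decision scheduled for it; in this sketch nothing else will
	// ever remove it.
	if _, ok := idToTrace.Load(traceID); ok {
		fmt.Println("entry re-created after eviction: it now lingers in the map")
	}
}
```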

github-actions bot later posted two further stale notices, identical to the one above.
