OOM caused by servicegraph connector #30634
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Related: #29762
I'm having the same issue with the servicegraph connector :(
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or, if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Assign to me please, I'll have a look at this.
Without deep-diving into your detailed use case, there are a couple of things you can try:
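For instance (an illustrative sketch, not necessarily what was suggested here), the connector's in-memory edge store can be bounded explicitly. The `store.ttl` and `store.max_items` fields below come from the servicegraphconnector README; the values are placeholders to adjust for your traffic:

```yaml
connectors:
  servicegraph:
    store:
      # Drop unpaired edges after this long instead of holding them in memory.
      ttl: 2s
      # Hard cap on the number of edges kept in the in-memory store.
      max_items: 1000
```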
@wjh0914 does this continue happening if you remove
You can check these metrics:
They can help you get a sense of how many edges are in the store.
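If it helps, one way to look at the connector's internal metrics is to scrape the collector's own telemetry endpoint. A minimal sketch (the `service.telemetry.metrics` block is standard collector configuration; 8888 is the default port):

```yaml
service:
  telemetry:
    metrics:
      # Expose the collector's internal metrics (including connector metrics)
      # on the built-in Prometheus endpoint.
      level: detailed
      address: 0.0.0.0:8888
```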
I've been testing this, with some mixed results. With the same configs, and also with a pick-and-choose from them, there seems to be a slow memory creep over time, so I was able to reproduce that in a limited way. What's more interesting is that I think it's due to the GC not running as early as possible, or waiting too long to run. I was able to make the memory consumption stable using the GOGC and GOMEMLIMIT env vars, so I advise anyone hitting this to try that too. This is probably also a case where giving more memory to the collector instance is counterproductive, because the default GOGC is 100, so the GC may wait for the heap to fill up before triggering runs.
tl;dr: I didn't find a clear memory leak, but a combination of the env vars GOGC << 100 and GOMEMLIMIT as a soft limit can trigger earlier GC runs and keep memory usage stable.
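As an illustration of that suggestion (the deployment details are hypothetical, not taken from this issue), the two variables can be set on the collector container, for example in a Kubernetes manifest:

```yaml
# Hypothetical Kubernetes snippet: tune the Go runtime of the collector
# container so the GC runs earlier and respects a soft memory ceiling.
containers:
  - name: otel-collector
    image: otel/opentelemetry-collector-contrib:0.89.0
    env:
      - name: GOGC        # default is 100; a lower value triggers GC more often
        value: "50"
      - name: GOMEMLIMIT  # soft limit; keep it below the container memory limit
        value: "1500MiB"
    resources:
      limits:
        memory: 2Gi
```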
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or, if you are unsure of which component this issue relates to, please ping the code owners.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
Component(s)
connector/servicegraph
What happened?
Description
We are trying to use the servicegraph connector to generate a service topology and ran into an OOM issue.
Once the otel collector starts, the memory keeps growing:
The profile shows that pmap takes a lot of memory for the servicegraph connector:
Collector version
0.89.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response