
Memory grows until OOM with slow telemetry collector and lots of data. #26

Open
seiferteric opened this issue Mar 2, 2021 · 7 comments

seiferteric (Contributor) commented Mar 2, 2021

I am opening this issue to get opinions on how to handle a problem I have observed. We had a telemetry collector (Telegraf with a custom gNMI plugin) running with a limited CPU quota (I think ~20% CPU time). The collector was using SAMPLE mode to get a large BGP table. Over time we noticed that the memory of the telemetry process kept increasing until it hit OOM and the process was killed.

I traced the issue to the send() function in client_subscribe.go, where we call err = stream.Send(resp). In the scenario described above, this call blocks when the collector is not processing data quickly enough. The problem is that the telemetry process keeps adding data to the PriorityQueue, which causes memory to grow. To rectify this, I introduced a new "LimitedQueue" in place of the current PriorityQueue in our (Dell) sonic-telemetry. The LimitedQueue checks the size of the queue and rejects new items if the size is greater than a predefined maximum (I set the default to 100 MB).
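
To make the approach concrete, here is a minimal sketch of the idea behind such a size-bounded queue. The names (LimitedQueue, maxBytes, Put, Get) are illustrative assumptions, not the actual Dell change:

```go
package telemetry

import (
	"errors"
	"sync"
)

// ErrQueueFull is returned when the queue has reached its configured limit.
var ErrQueueFull = errors.New("queue size limit exceeded")

// LimitedQueue is a FIFO queue that refuses new items once the total
// (approximate) size of queued items would exceed maxBytes.
type LimitedQueue struct {
	mu       sync.Mutex
	items    [][]byte
	curBytes int
	maxBytes int // e.g. 100 * 1024 * 1024 for a 100 MB default
}

// NewLimitedQueue creates a queue bounded to maxBytes of queued data.
func NewLimitedQueue(maxBytes int) *LimitedQueue {
	return &LimitedQueue{maxBytes: maxBytes}
}

// Put adds an item unless doing so would exceed the configured limit.
func (q *LimitedQueue) Put(item []byte) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.curBytes+len(item) > q.maxBytes {
		return ErrQueueFull // caller decides: drop the update or abort the stream
	}
	q.items = append(q.items, item)
	q.curBytes += len(item)
	return nil
}

// Get removes and returns the oldest item, or nil if the queue is empty.
func (q *LimitedQueue) Get() []byte {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return nil
	}
	item := q.items[0]
	q.items = q.items[1:]
	q.curBytes -= len(item)
	return item
}
```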

This works, but it means the collector will start to miss telemetry updates. Broadcom recently recommended that I instead close the connection with gRPC code RESOURCE_EXHAUSTED rather than silently dropping updates.
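
Continuing the sketch above, one way to surface the condition instead of silently dropping data would be to return a RESOURCE_EXHAUSTED status from the Subscribe handler when the queue rejects an update. This is an illustration, not the actual sonic-telemetry send() code:

```go
package telemetry

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// enqueueOrAbort tries to queue one update; on overflow it returns a gRPC
// status error that, when returned from the Subscribe handler, closes the RPC
// with code RESOURCE_EXHAUSTED so the collector gets an explicit signal.
func enqueueOrAbort(q *LimitedQueue, update []byte) error {
	if err := q.Put(update); err != nil {
		return status.Errorf(codes.ResourceExhausted,
			"telemetry queue limit reached, closing subscription: %v", err)
	}
	return nil
}
```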

I would like to know the community's preferred way to handle this before opening a PR.

lguohan (Contributor) commented Mar 3, 2021

In the case of closing the connection, will all allocated memory be released?

When will the telemetry client reconnect?

seiferteric (Contributor, Author) commented

Yes, the queue will be disposed of, so the memory will be freed. It would be up to the collector to reconnect after receiving the RESOURCE_EXHAUSTED error, since this is dial-in telemetry.
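
For illustration, a rough sketch of the collector side under that scheme, assuming a hypothetical subscribeOnce helper that runs one dial-in Subscribe RPC until it fails:

```go
package collector

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runCollector re-establishes the subscription whenever the server aborts it.
// A RESOURCE_EXHAUSTED status means the server's queue hit its limit, so the
// collector backs off before re-subscribing instead of reconnecting immediately.
func runCollector(ctx context.Context, subscribeOnce func(context.Context) error) {
	backoff := time.Second
	for ctx.Err() == nil {
		err := subscribeOnce(ctx)
		if err == nil {
			backoff = time.Second // clean end, reset backoff
			continue
		}
		if status.Code(err) == codes.ResourceExhausted {
			log.Printf("subscription dropped (RESOURCE_EXHAUSTED), retrying in %v", backoff)
		} else {
			log.Printf("subscription ended: %v, retrying in %v", err, backoff)
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2
		}
	}
}
```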

macikgozwa (Contributor) commented

I think sending RESOURCE_EXHAUSTED and terminating the subscription is a good option. That would be analogous to an HTTP 429 (throttling) response from a REST API.

One potential concern would be a client that mixes high-volume and low-volume data in the same SubscriptionList request. Since the queue is per SubscriptionList request, the telemetry service would end up canceling both the high-volume and the low-volume paths. The collector side should be aware of this case and not mix them in the same SubscriptionList request.
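
As an illustration of that advice, a hedged sketch that keeps the high-volume path and the low-volume paths in separate SubscriptionList requests (i.e. separate Subscribe RPCs), so a RESOURCE_EXHAUSTED on the heavy stream does not cancel the light one. The paths and sample intervals are made-up examples:

```go
package collector

import (
	gnmipb "github.com/openconfig/gnmi/proto/gnmi"
)

// sampleRequest builds a STREAM/SAMPLE SubscribeRequest containing a single
// subscription, so each path family can live on its own RPC.
func sampleRequest(path *gnmipb.Path, intervalNs uint64) *gnmipb.SubscribeRequest {
	return &gnmipb.SubscribeRequest{
		Request: &gnmipb.SubscribeRequest_Subscribe{
			Subscribe: &gnmipb.SubscriptionList{
				Mode: gnmipb.SubscriptionList_STREAM,
				Subscription: []*gnmipb.Subscription{{
					Path:           path,
					Mode:           gnmipb.SubscriptionMode_SAMPLE,
					SampleInterval: intervalNs,
				}},
			},
		},
	}
}

// Usage (illustrative): send each request on its own Subscribe stream, e.g.
//   heavy := sampleRequest(bgpTablePath, 10_000_000_000)  // BGP table on its own stream
//   light := sampleRequest(countersPath, 30_000_000_000)  // low-volume counters separately
```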

pra-moh reopened this May 18, 2021
carl-nokia referenced this issue in carl-nokia/sonic-buildimage Aug 7, 2021
#### Why I did it
Fix https://github.com/Azure/sonic-telemetry/issues/71

#### How I did it
Added a memory limit for the telemetry docker container.
Historical docker memory usage shows the telemetry container consuming 150-200 MB of memory; the limit adds some extra headroom on top of that.
qiluo-msft reopened this Jan 31, 2022
qiluo-msft (Collaborator) commented

@seiferteric Could you help connect us to the right person at Broadcom?

Broadcom recently recommended that I instead close the connection with gRPC code RESOURCE_EXHAUSTED rather than silently dropping updates.

I think disconnecting the gRPC client is the better choice.

ganglyu transferred this issue from sonic-net/sonic-telemetry Aug 31, 2022
sneelam20 (Contributor) commented

Assigning this to @anand-kumar-subramanian based on the last comment.

anand-kumar-subramanian (Contributor) commented

This issue was already fixed by Dell in their own sonic-telemetry. Assigning this to @kwangsuk to make sure the fix is made in the community as well.

kwangsuk commented

This issue was already fixed by Dell in their own sonic-telemetry. Assigning this to @kwangsuk to make sure the fix is made in the community as well.

The fix has not yet been made in the community repository. We plan to replicate it here.
