Memory leak with Kafka and ElasticLogger? #567
apiVersion: v1
Could you please try version 0.40.2? The DNS collector certainly consumes more memory than it did at the beginning of the project. You may consider adjusting the following settings (it depends on your traffic volume):
I am seeing the same problem, but with an Elasticsearch client. When the collector runs on instances where we have a lot of traffic, it uses up all the memory within a few hours. I'm not familiar enough with the Go programming language to analyse the problem more deeply, but I've tried using pprof and this is the output. Maybe this will give you some clues.
Thanks for the pprof.
4k queries per second
@misaki-kawakami thanks for reporting this issue. Could you try the fix in the branch https://github.com/dmachard/go-dnscollector/tree/fix-memory-leak ? Please also add (or update) the following settings:

```yaml
chan-buffer-size: 2048
bulk-size: 1048576 # in bytes
flush-interval: 10
```

If the following error message appears, you need to increase the
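For illustration, these tuning keys would sit inside the Elasticsearch logger's definition in the collector config. The surrounding structure and the key names other than the three settings above are assumptions, not taken from the project's documentation:

```yaml
# Hypothetical placement sketch; "elasticsearch", "server" and
# "index" are illustrative assumptions around the tuned settings.
elasticsearch:
  server: "http://127.0.0.1:9200"
  index: "dnscollector"
  chan-buffer-size: 2048
  bulk-size: 1048576    # in bytes
  flush-interval: 10    # in seconds
```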
@secure-xxx Regarding Kafka, I need to do more testing, but I identified a wrong default value for the channel size.
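Until a fixed default lands, overriding the channel size in the Kafka producer's config is one workaround. The key names below are assumptions for illustration, not confirmed against the project's documentation:

```yaml
# Illustrative only: logger and key names are assumptions.
kafkaproducer:
  remote-address: "127.0.0.1"
  remote-port: 9092
  chan-buffer-size: 4096  # override the (too small) default channel size
```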
Thank you. I no longer notice the memory growing over time 👍
Thank you for testing, it's really appreciated.
Could you run some new tests? I made some optimizations to reduce CPU and memory usage.
@secure-xxx I did more tests with the Kafka producer and observed no memory leaks with it. In your case, it's clearly the Prometheus logger. Have you tried adjusting the settings as I suggested?
I am surprised, because I ran stress tests on my side at 4k qps without issue.
Added this to the elastic loggers:
Thanks, I will run more tests with your settings and QPS.
After investigation, the increase in memory is expected: the bulks of DNS messages are sent in an ordered way to avoid hurting the API of your Elastic instance, but on the DNS-collector side this costs memory. However, I have made improvements to limit memory usage. Could you try the following config with the branch https://github.com/dmachard/go-dnscollector/tree/fix-elastic-leak ?

```yaml
chan-buffer-size: 4096
bulk-size: 5242880 # chunks of 5MB
flush-interval: 10 # flush every 10s
compression: gzip
bulk-channel-size: 10
```

If you observe the following error, try to increase the
I have tested your tweaks and I no longer see such a large increase in memory usage. Everything is working fine.
In version 0.32 (with the Kafka logger) the memory leak is absent or minimal. In newer versions it grows. The same config was used.
#529