Performance degradation over time #2146
I have to monitor different versions of liquidsoap, so it will take some time: 1.4.4 and 2.0.0+.
Thanks for the report. A comparison would be great. As for the memory, there may not be a leak: the OCaml garbage collector may decide to raise the amount of the allocated memory pool to better accommodate the application's usage.
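One way to see this distinction is to compare live data against the total heap the GC has reserved. A minimal OCaml sketch (illustrative only, not part of liquidsoap's API; `report_heap` is a made-up name):

```ocaml
(* A growing heap with stable live data usually means the GC has
   enlarged its pool, not that the program is leaking.
   Note: Gc.stat forces a full major collection, so call it sparingly. *)
let report_heap () =
  let s = Gc.stat () in
  Printf.printf "live: %d words, heap: %d words, peak heap: %d words\n"
    s.Gc.live_words s.Gc.heap_words s.Gc.top_heap_words
```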
All of them
At the moment the average CPU time relative to real time over 10 minutes is: […]
Maybe this problem is related to […]
By mistake I deployed an instance with […]
Thanks for these reports, I will have a look very shortly.
hey toots, any update on this? been seeing a fair few people complaining about higher than normal memory / CPU usage
Hi Mitch. I haven't looked at the issue yet. Could you give me more details about how this is impacting users? Please note that the issue you are referring to in AzuraCast is about memory consumption, not CPU.
@vitoyucepi any longer term update? I'm not sure that I see a significant difference in the numbers above.
@SC2Mitch I'm not sure if these issues are related. @toots It seems this issue is related to […]
Thanks for reporting. I'm not sure what I am supposed to look at with the last log @vitoyucepi 🙂
@toots But […]
@vitoyucepi Thanks! The memleak happens on the […]
Initial research seems to point at a single issue causing both: a memory leak from buffers that, in the case of cross, would put a lot of stress on the GC, causing an increase in CPU usage as well. Will report as soon as possible.
@toots I reread your questions.
I can reproduce the memory leak but, at the same time, […]. This seems to indicate a bug in the C allocations, perhaps something similar to #2054.
Update: the memory allocation inspection found some inefficient float operations with bigarrays in […]
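The file in question is cut off above, but the general pattern is worth illustrating. A hedged OCaml sketch with made-up function names: when a bigarray's element kind is statically known, reads compile to unboxed loads; when the kind is abstract, every read returns a freshly boxed float, which is exactly the kind of allocation pressure described here.

```ocaml
open Bigarray

(* Kind known statically: a.{i} compiles to a direct unboxed load. *)
let sum_direct (a : (float, float64_elt, c_layout) Array1.t) =
  let acc = ref 0.0 in
  for i = 0 to Array1.dim a - 1 do
    acc := !acc +. a.{i}
  done;
  !acc

(* Kind abstracted away: each Array1.get must return a freshly boxed
   float, so this loop allocates on every iteration. *)
let sum_generic (type k) (a : (float, k, c_layout) Array1.t) =
  let acc = ref 0.0 in
  for i = 0 to Array1.dim a - 1 do
    acc := !acc +. Array1.get a i
  done;
  !acc
```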
Quick update on this one: still working on it. I can confirm that it's coming from:

```liquidsoap
radio = single("/path/to/song.mp3")
radio = crossfade(radio)
clock.assign_new(sync="none",[radio])
output.dummy(fallible=true,radio)
```

I'm still working on zeroing in on the root cause for it.
Update on this issue: after a lot of testing, it appears that there isn't anything going wrong besides OCaml's garbage collector taking some initiative in trading memory allocation against CPU usage. It looks like a lot of the OCaml ecosystem is generally leaning toward more memory allocation and less CPU usage.

To give a little more detail, the OCaml memory management (the GC) has to balance the extra work needed to clean up allocated memory against the extra CPU cost of doing so. This means that the GC can sometimes leave allocated memory uncleaned for a while to avoid consuming too many resources. There are parameters to control this behavior and they are now exported in the […]

Before closing this, I'd like to do a little more research to see if we can provide users with a simpler wrapper that would allow simple configurations, such as making sure that the allocated memory never crosses a certain threshold. We will see what is possible. We also want to release […]
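The liquidsoap setting names are cut off above, but the underlying knobs are OCaml's `Gc.control` record. A minimal sketch of the raw OCaml parameters (defaults as in OCaml 4.x; function names here are illustrative):

```ocaml
(* space_overhead (default 120) is how much heap waste the major GC
   tolerates: lower values mean a smaller heap but more GC work. *)
let tune_gc () =
  Gc.set { (Gc.get ()) with Gc.space_overhead = 80 }

(* A full collection plus compaction can also be forced at a quiet
   moment to return the heap to a minimal footprint. *)
let reclaim_now () = Gc.compact ()
```

Lowering `space_overhead` is the memory-for-CPU trade described above; `Gc.compact` reclaims everything at once at the price of a pause.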
There was a typo in the above, I meant to say: […]
AzuraCast has confirmed that they are not seeing any error on their end, so I'm gonna mark this one as fixed with the […]
Describe the bug
After some time running, liquidsoap begins to consume much more CPU time.
Here's my log of […]
It also looks like there's a memory leak.
Another log; the CPU consumption is higher because I additionally encode 6 streams: 2×mp3 + 2×ogg + 2×opus.
To Reproduce
Music generation script
Expected behavior
Same CPU consumption over several days of operation.
Version details
ubuntu:20.04 in Docker
Liquidsoap 2.0.2
Install method
Deb package from Liquidsoap CI artifacts on GitHub
Common issues
N/A