server memory leak #1662
Comments
Can you try turning off as many features as you can (sound forwarding, etc) to see if that helps? |
I can reproduce it with |
Watching the server memory usage: first I had to fix the memleak debugging itself, then found a leak in the protocol layer, so "xpra info" would leak yet more memory while I was trying to find where the real leak was... fixed in r17299. Both of those should be backported. |
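The exact command used to watch the memory is not shown in the migrated comment; a minimal sketch of polling a server process's resident set size on Linux (the pid and interval are placeholders) could look like this:

```python
import time

def rss_kb(pid):
    # Read the resident set size from /proc/<pid>/status (Linux only)
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])   # value is reported in kB
    return -1

SERVER_PID = 12345    # placeholder: the xpra server's actual pid
while True:
    print(time.strftime("%H:%M:%S"), rss_kb(SERVER_PID), "kB")
    time.sleep(10)
```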
2017-11-04 17:22:52: antoine uploaded file
|
There are still some small leaks, so:
What makes this particularly difficult is that the leak debugging slows things down dramatically and blocks the main thread, so it can cause things to get backed up so much that they look like leaks when they're not. Another problem is the "traceback reference cycle problem" (illustrated below). More importantly, we're still leaking somewhere, as this gets printed every time the leak detection code runs (always with exactly the same leak count):
|
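For context, the "traceback reference cycle problem" mentioned above is a standard Python pitfall: keeping a reference to a traceback from within the frame that handles the exception creates a cycle that reference counting alone cannot free, so leak debugging that records tracebacks can itself keep objects alive. A minimal illustration (not xpra code):

```python
import gc
import sys

def handler():
    big = bytearray(50 * 1024 * 1024)   # large local kept alive by the frame
    try:
        raise ValueError("boom")
    except ValueError:
        # Storing the traceback in a local creates a cycle:
        # frame -> local 'tb' -> traceback object -> frame
        tb = sys.exc_info()[2]
        return tb

tb = handler()
del tb      # reference counting cannot free the cycle (or 'big') on its own
print(gc.collect(), "objects reclaimed by the cycle collector")
```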
Turning off the ping feature reduces the leaks. It also looks like generating network traffic (i.e. moving the mouse around) causes more leaking. I suspect that this comes from the non-blocking socket timeouts (sketched below), like what shows up at debug level:
|
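The debug output referenced above was not carried over in the migration. As an illustration of the suspected mechanism only: a socket configured with a short timeout raises socket.timeout on every idle cycle, and each of those exceptions carries a traceback that debug logging can end up retaining (the address and timeout below are arbitrary):

```python
import socket

srv = socket.socket()
srv.settimeout(0.1)            # short timeout instead of blocking forever
srv.bind(("127.0.0.1", 0))     # throwaway local test socket
srv.listen(1)
try:
    srv.accept()               # nothing connects, so this times out
except socket.timeout as e:
    print("timed out:", e)     # the kind of event a debug-level log would show
finally:
    srv.close()
```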
r17328 (+r17330 fixup) fixes a leak caused by logging. Dumping all the cell objects (matched by type string, since there does not seem to be a Python type exposed for it; a sketch of this is below), the recurring entries seem to be:
Not sure where they're from yet... could even be the leak debugging code itself. |
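The dump itself is not shown in the migrated comment; a sketch of how cell objects can be enumerated by type name with the gc module (names here are illustrative, not the code used in the ticket):

```python
import gc
from collections import Counter

# Closure cells are matched by their type name because, on the Python
# versions in use at the time, no public type for them was exposed.
cells = [o for o in gc.get_objects() if type(o).__name__ == "cell"]

by_contents = Counter()
for c in cells:
    try:
        by_contents[type(c.cell_contents).__name__] += 1
    except ValueError:         # an empty cell has no contents yet
        by_contents["<empty>"] += 1

print(len(cells), "cell objects")
print(by_contents.most_common(10))
```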
Left "xpra info" running in a loop for 4 hours and those leaks are definitely gone. |
2017-11-10 16:24:55: antoine uploaded file
|
More improvements:
This is hard... |
Related improvements: r17358 + r17360 (avoid churn). I think the leaks are gone (at least the big ones); it just takes a very long time for the maxrss value to settle on its high water mark, probably because of memory fragmentation. It would be worth playing with MALLOC_MMAP_THRESHOLD_ to validate this assumption (see the sketch below), but I've already spent far too much time on this ticket. @nathan-renniewaldock: can I close this? |
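For anyone revisiting the fragmentation theory: MALLOC_MMAP_THRESHOLD_ is a glibc tunable; allocations above the threshold are served by mmap() and handed back to the OS when freed, instead of staying on the heap and keeping maxrss at its high water mark. A hedged sketch of the experiment (the threshold value and command line are assumptions, not what was actually tested):

```python
import os
import subprocess

# Lower the mmap threshold to 64 KiB (example value) so that large
# allocations bypass the brk heap and can be returned to the OS on free.
env = dict(os.environ, MALLOC_MMAP_THRESHOLD_="65536")

# Hypothetical command line; the point is only that the variable has to be
# set in the environment of the server process whose maxrss is being watched.
subprocess.Popen(["xpra", "start", ":100"], env=env)
```

If fragmentation is the cause, the server's maxrss should settle on its high water mark sooner (and lower) with the reduced threshold.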
Issue migrated from trac ticket # 1662
component: server | priority: critical | resolution: worksforme
2017-10-17 00:19:04: nathan-renniewaldock created the issue