Memory leak #3477
Comments
I think you might have a recursion issue going on here: you emit an "action" and also listen for "action". Make sure the event names are different.
I think I'm experiencing this issue too. I did not have time to test it, but it appears that if we pass an async function to the socket.on method, it keeps a reference to it and never frees the memory.
I also see a memory leak in my code. When clients are closed, the memory being used doesn't go back down.
I've noticed the same. Our WebSocket server runs in a k8s pod with a 4 GB RAM limit, and k8s kills it every 3 weeks or so because RAM consumption grows from 500 MB to 4 GB.
In my case, memory ballooned to 1.5 GB after 30K clients (at most 3K active at once) connected and transmitted a total of 900K messages in 10 minutes. The memory was not released when all clients disconnected (it remained at 1.4 GB even after triggering the garbage collector manually).
So the main question here is why the memory is not released after disconnect. Hope this helps others.
How do you disable perMessageDeflate in socket.io?
is this correct?
You are missing the first argument, which should be the port:
@masoudkazemi that's a good question, actually. You should be able to find the reasoning in the history of https://github.com/socketio/engine.io; I remember there were quite a lot of discussions about it. I think it should be disabled by default, though. Let's include it in Socket.IO v3 👍
Hi, I faced a similar problem and I want to share my findings. The root cause of this seems to be memory fragmentation (Node.js issue 8871). In other words, there's no actual memory leak; rather, memory gets allocated in such a way that RSS keeps growing while actual heap usage stays steady. This means that, while disabling perMessageDeflate mitigates the symptom, the underlying issue is in the allocator.

Linking related issue: socketio/engine.io#559
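A stdlib-only way to check for this pattern (the helper name and formatting below are illustrative, not from the thread) is to sample `process.memoryUsage()` and compare RSS against heap usage:

```javascript
// Sketch: sample RSS vs. heap to distinguish allocator fragmentation
// from a JS-level leak. If rss keeps climbing across samples while
// heapUsed stays roughly flat, fragmentation is the likely culprit.
function memorySample() {
  const { rss, heapUsed } = process.memoryUsage();
  return {
    rssMB: +(rss / 1024 / 1024).toFixed(1),
    heapMB: +(heapUsed / 1024 / 1024).toFixed(1),
  };
}

const sample = memorySample();
console.log(`rss=${sample.rssMB}MB heapUsed=${sample.heapMB}MB`);
```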
Can someone test if this is still an issue in Node 14.7.0 or newer?
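For anyone re-running a repro, a small sketch that gates the check on the running Node version (the 14.7.0 cutoff comes from the comments in this thread; the function name is made up):

```javascript
// Sketch: report whether the running Node is at or past 14.7.0,
// the version this thread associates with the zlib fragmentation fixes.
function hasZlibFragFix(version = process.versions.node) {
  const [major, minor] = version.split('.').map(Number);
  return major > 14 || (major === 14 && minor >= 7);
}

console.log(hasZlibFragFix() ? 'Node >= 14.7.0' : 'Node < 14.7.0');
```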
It would be correct like this:

```js
import http from 'http';
import express from 'express';
import socketio from 'socket.io';

const app = express();
const server = http.createServer(app);
socketio.listen(server, { perMessageDeflate: false });
```
Just wanted to say that @pmoleri's idea worked for us. I set up jemalloc and memory usage is stable now.
I can confirm that this has a huge effect on Heroku too, using this build pack: https://elements.heroku.com/buildpacks/gaffneyc/heroku-buildpack-jemalloc
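Outside Heroku, the same swap can be done with `LD_PRELOAD`; this is a sketch only, and the library path is an assumption that varies by distro (this one is Debian/Ubuntu's libjemalloc2 package):

```shell
# Sketch: run a Node server with jemalloc as the allocator.
# Adjust the .so path for your distro; server.js is a placeholder.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node server.js
```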
FYI jemalloc was proposed for nodejs at some point: |
For future readers, please note that the per-message deflate WebSocket extension is disabled by default since version 3.0.0. Documentation: https://socket.io/docs/v4/migrating-from-2-x-to-3-0/#Saner-default-values

I think we can now close this.
Given that 14.7.x addresses the memory fragmentation issues in zlib, is it safe to say that the documentation is now outdated (in terms of reasoning)? I can confirm that, in our production system, moving back to the original perMessageDeflate settings worked well.
@nyxtom Would you mind expanding on how enabling per message deflate improved your system? Can you share any stats about how much your throughput changed? I would love to learn more about your results. |
@GaryWilber It improved our system because by passing along For stats sake, just on one of our collectors we were processing roughly half of the throughput we normally are able (from 2600 flows/second to 1066 flows/second). Dropping our flow rate by that much caused significant backlogs on our system and collectors simply could not keep up. During our upgrade, setting perMessageDeflate back to what it used to be (threshold: 1024) improved our peak throughput rate at nearly 10,000 flows/second on our of collectors - with all other cases showing steady flow rates and moving through backlogs without a hitch. We used to run on an old version of socket.io 2.7.x with permessagedeflate enabled, this was steady for quite some time for us. When we upgraded to 4.x we had to revert a few of the default options that were changed (including max buffer size - basically the list of breaking migration changes listed on socket.io's website). NOTE this works for us only because we are leveraging websockets for piping traffic from our collectors, but may not work well for other use cases such as from browser clients for instance. That being said we haven't seen problems there either. |
@nyxtom Thanks for the detailed info! Glad to hear permessage-deflate is working well now.
I found my memory leak on the server: it was caused by incompatible client vs. server socket.io versions. Each request from a client would create a new websocket connection due to the version mismatch. I had a mixture of v2, v3, and v4 stuff. I cleaned up the client and server versions and everything seems much happier now. I'm still not sure why the version incompatibility wasn't cleaning up the old connections on the server :S
Hello everyone.
The following code results in a memory leak:
If you run this test with a limit of 200 thousand requests, you can see the memoryUsage log:
Or, if you run this test with a limit of 800 thousand requests:
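The reporter's snippets and logs were not preserved in this scrape; as a purely illustrative stdlib sketch (the helper name, interval, and counts are assumptions), a memoryUsage log of that shape can be produced like this:

```javascript
// Sketch: log memory usage every `interval` messages, the way the
// 200K/800K-request runs mentioned above were presumably measured.
function makeMemoryLogger(interval) {
  let count = 0;
  return () => {
    count += 1;
    if (count % interval === 0) {
      const { rss, heapUsed } = process.memoryUsage();
      console.log(`${count} msgs: rss=${rss} heapUsed=${heapUsed}`);
    }
  };
}

const tick = makeMemoryLogger(100000);
for (let i = 0; i < 200000; i++) tick(); // logs at 100000 and 200000
```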
You can get the socket.io.json data here:
https://pastebin.com/uUeZJe6x
socket.io and socket.io-client version:
2.2.0