
Memory leak #3477

Closed
Sitronik opened this issue Aug 15, 2019 · 21 comments

@Sitronik

Sitronik commented Aug 15, 2019

Hello everyone.
The following code results in a memory leak:

import * as io from 'socket.io-client';
import Bluebird from 'bluebird';
import express from 'express';
import socketIo from 'socket.io';
import http from 'http';

import data from './socket.io.json';

describe('Socket.io', () => {
  it('200 thousand requests', async () => {
    const limit = 200 * 1000;

    // Requires Node to run with --expose-gc (WebStorm: Edit Configurations -> Node options -> --expose-gc)
    setInterval(() => {
      global.gc();
      console.error(new Date(), process.memoryUsage());
    }, 1000);

    // Server
    const app = express();
    const server = http.createServer(app);
    server.listen(20017, 'localhost');

    const ioMain = socketIo.listen(server);

    ioMain.sockets.on('connection', (socket) => {
      socket.on('some_route', async (args) => {
        return;
      });
    });

    // Client
    const socket = io.connect('ws://localhost:20017', {
      transports: ['websocket'],
      rejectUnauthorized: false,
      query: {key: 'key'}
    });

    await Bluebird.delay(3 * 1000);

    for (let i = 0; i < limit; i++) {
      socket.emit('some_route', ['some_data', 7777, data]);
    }

    await Bluebird.delay(3 * 1000);
  });
});

If you run this test with a limit of 200 thousand requests, you can see the memoryUsage log:

2019-08-15T07:57:26.345Z { rss: 101449728,
  heapTotal: 69914624,
  heapUsed: 28566952,
  external: 31683 }
2019-08-15T07:57:27.345Z { rss: 91463680,
  heapTotal: 69914624,
  heapUsed: 27574720,
  external: 20968 }
2019-08-15T07:57:28.349Z { rss: 91475968,
  heapTotal: 69914624,
  heapUsed: 26643376,
  external: 20968 }
2019-08-15T07:57:34.580Z { rss: 1773096960,
  heapTotal: 921309184,
  heapUsed: 866143944,
  external: 819505496 }

Or, if you run this test with a limit of 800 thousand requests:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

<--- Last few GCs --->

[5377:0x102802800]    13083 ms: Scavenge 1396.7 (1424.6) -> 1396.2 (1425.1) MB, 2.0 / 0.0 ms  (average mu = 0.155, current mu = 0.069) allocation failure 
[5377:0x102802800]    13257 ms: Mark-sweep 1396.9 (1425.1) -> 1396.4 (1425.1) MB, 173.1 / 0.0 ms  (average mu = 0.093, current mu = 0.028) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x3b4c160dbe3d]
Security context: 0x167f40a1e6e9 <JSObject>
    1: hasBinary [0x167f40c16b71] [/Users/denis/api/node_modules/has-binary2/index.js:~30] [pc=0x3b4c1617e245](this=0x167fb3f9ad81 <JSGlobal Object>,obj=0x167f2e2dd279 <Object map = 0x167f3307a4f1>)
    2: hasBinary [0x167f40c16b71] [/Users/denis/api/node_modules/has-binary2/index.js:~30] [pc=0x3b4c1617e0fa](this=0...

 1: 0x10003c597 node::Abort() [/usr/local/bin/node]
 2: 0x10003c7a1 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
 3: 0x1001ad575 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0x100579242 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 5: 0x10057bd15 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/usr/local/bin/node]
 6: 0x100577bbf v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 7: 0x100575d94 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
 8: 0x10058262c v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
 9: 0x1005826af v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
10: 0x100551ff4 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
11: 0x1007da044 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
12: 0x3b4c160dbe3d 
13: 0x3b4c1617e245 

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

The socket.io.json data can be found here:
https://pastebin.com/uUeZJe6x

socket.io and socket.io-client version:
2.2.0

@knoxcard

I think you might have a recursion issue going on here: you emit an "action" and receive an "action". Make sure the names are different.

@pedroivorbgrodrigues

I think I'm experiencing this issue too. I haven't had time to test it, but it appears that if we pass an async function to the socket.on method, it keeps a reference to it and never frees the memory.
I've done a huge refactor of my code, but it was mostly logical equivalents, plus changing from promises to async/await.
The code is running fine, but memory usage is increasing very fast. I've profiled it and it was related to async_hooks; I don't know much about that, but it seems to be internal usage.

@lupulin

lupulin commented Feb 26, 2020

I also see a memory leak in my code. When clients are closed, the memory being used doesn't go back down.

@arthot

arthot commented Mar 3, 2020

I've noticed the same. Our WebSocket server runs in a k8s pod with a 4 GB RAM limit, and it seems like k8s kills it every 3 weeks or so because RAM consumption grows from 500 MB to 4 GB.

@masoudkazemi

In my case, the memory burst to 1.5 GB after 30K clients (a maximum of 3K active) connected and transmitted a total of 900K messages in 10 minutes. The memory was not released when all the clients disconnected (it remained at 1.4 GB even after calling the garbage collector manually).
I tried to debug the memory leak in different ways and, after a lot of effort (4 days of debugging), found out that disabling perMessageDeflate fixes the issue. From the ws module API docs:

The extension is disabled by default on the server and enabled by default on the client. It adds a significant overhead in terms of performance and memory consumption so we suggest to enable it only if it is really needed.

So the main question here is: why is perMessageDeflate true by default in Socket.IO?!

Hope this helps others.

@Senglean

Senglean commented Jun 4, 2020

How do you disable perMessageDeflate in socket.io?

@Senglean

Senglean commented Jun 4, 2020

Is this correct?

io = require('socket.io')({ perMessageDeflate: false });

@masoudkazemi

You are missing the first argument, which should be the port:
io = require('socket.io')(3000, { perMessageDeflate: false });

@darrachequesne
Member

So the main question here is: why is perMessageDeflate true by default in Socket.IO?!

@masoudkazemi that's a good question, actually. You should be able to find the reasoning in the history of https://github.com/socketio/engine.io; I remember there was quite a lot of discussion about it.

I think it should be disabled by default though. Let's include it in Socket.IO v3 👍

@pmoleri

pmoleri commented Aug 20, 2020

Hi, I faced a similar problem and I want to share my findings.

The root cause of this seems to be memory fragmentation (node issue 8871). In other words, there's no actual memory leak; rather, the memory gets allocated in such a way that RSS keeps growing while the actual heap memory stays steady.

This means that, while disabling perMessageDeflate will definitely help, you may hit the same issue in other parts of your application.
There's a workaround for memory fragmentation: preload jemalloc before starting your application, see nodejs/node#21973.
In my case it cut the initial memory footprint by half and it keeps the memory low after that.
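
As a quick sanity check that the preload actually took effect, here is a minimal sketch (assuming Linux and a Node version where the diagnostic report API is available; the libjemalloc path in the LD_PRELOAD command varies by distro):

// Start the app with something like: LD_PRELOAD=/usr/lib/.../libjemalloc.so.2 node app.js
// Then, inside the app, check whether jemalloc shows up among the loaded shared objects.
const report = process.report && process.report.getReport();
const sharedObjects = (report && report.sharedObjects) || [];
console.log('jemalloc preloaded:', sharedObjects.some((lib) => lib.includes('jemalloc')));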

Linking related issue: socketio/engine.io#559

@RosenTomov

Can someone test if this is still an issue in Node 14.7.0 or newer?

nodejs/node#34048

@Sitronik
Author

Is this correct?

io = require('socket.io')({ perMessageDeflate: false });

This would be correct:

import http from 'http';
import express from 'express';

const app = express();
const server = http.createServer(app);

// Attach Socket.IO to the existing HTTP server with per-message deflate disabled
require('socket.io').listen(server, {perMessageDeflate: false});

server.listen(20017);

@DuBistKomisch
Contributor

Just wanted to say that @pmoleri's idea worked for us. I set perMessageDeflate: false and used jemalloc via LD_PRELOAD and we're no longer running out of memory. We're on Node 12 FTR.


@mikecann

mikecann commented Nov 13, 2020

I can confirm that this has a huge effect on Heroku too, using this buildpack (https://elements.heroku.com/buildpacks/gaffneyc/heroku-buildpack-jemalloc) together with perMessageDeflate: false.


@md-seb

md-seb commented Jul 5, 2021

FYI jemalloc was proposed for nodejs at some point:
nodejs/node#21973

@darrachequesne
Member

For future readers, please note that the per-message deflate WebSocket extension has been disabled by default since version 3.0.0.

Documentation: https://socket.io/docs/v4/migrating-from-2-x-to-3-0/#Saner-default-values

I think we can now close this.

@nyxtom
Contributor

nyxtom commented Sep 22, 2021

Given that Node 14.7.x addresses the memory fragmentation issues in zlib, is it safe to say that the documentation is now outdated (in terms of reasoning)? I can confirm that in our production system, moving back to the original perMessageDeflate: { threshold: 1024 } option has actually improved our setup. The defaults that disabled perMessageDeflate limited our throughput significantly. nodejs/node#34048
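
For reference, a minimal sketch of what re-enabling it looks like on a Socket.IO v4 server (the port is illustrative; the perMessageDeflate option is passed through to engine.io and ws):

const { Server } = require('socket.io');

// Compress only payloads of at least 1024 bytes, matching the pre-v3 behaviour described above
const io = new Server(3000, {
  perMessageDeflate: { threshold: 1024 }
});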

@GaryWilber

@nyxtom Would you mind expanding on how enabling per message deflate improved your system? Can you share any stats about how much your throughput changed? I would love to learn more about your results.

@nyxtom
Contributor

nyxtom commented Sep 29, 2021

@GaryWilber It improved our system because, by passing along permessage-deflate, ws WebSockets can negotiate a compression/decompression configuration between the client and server (https://datatracker.ietf.org/doc/html/rfc7692). Without that option we weren't getting any compression over our sizeable data, so things slowed down quite a bit when perMessageDeflate was disabled (resulting in backlogs of stream data that couldn't be sent). ws appears to configure zlib with a default concurrency of 10 as well, and with 14.7.x the previous memory fragmentation issues seem to have gone away (we aren't seeing that problem on our server, at least).

For stats' sake: on just one of our collectors we were processing roughly half the throughput we are normally able to (from 2600 flows/second down to 1066 flows/second). Dropping our flow rate by that much caused significant backlogs on our system, and collectors simply could not keep up. During our upgrade, setting perMessageDeflate back to what it used to be (threshold: 1024) improved our peak throughput to nearly 10,000 flows/second on one of our collectors, with all other cases showing steady flow rates and working through backlogs without a hitch.

We used to run an old version of socket.io (2.7.x) with permessage-deflate enabled, and it was steady for quite some time. When we upgraded to 4.x we had to revert a few of the default options that were changed (including the max buffer size; basically the list of breaking migration changes listed on socket.io's website).
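
A rough sketch of that kind of revert (the values shown are the old v2-era defaults as described in the migration guide and are assumptions on my part; tune them to your own payloads):

const { Server } = require('socket.io');

const io = new Server(3000, {
  maxHttpBufferSize: 1e8,                 // v3+ lowered the default from 100 MB to 1 MB
  perMessageDeflate: { threshold: 1024 }  // disabled entirely by default since v3
});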

NOTE: this works for us only because we are using WebSockets to pipe traffic from our collectors; it may not work as well for other use cases, such as browser clients. That being said, we haven't seen problems there either.

@GaryWilber

@nyxtom Thanks for the detailed info! Glad to hear permessage-deflate is working well now.

@KamalAman

KamalAman commented Apr 20, 2023

I found a memory leak on my server that had to do with incompatible client vs. server socket.io versions. Each request from a client would create a new websocket connection due to a parser error, and that client would be sent into a reconnect loop. The decoder on the client would error on a message that looked like this: 96:0{"sid":"ptzi_578ycUci8WLB9G1","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000}2:40 (see the troubleshooting-connection-issues docs). This probably has to do with the EIO=3 to EIO=4 change. The leak was growing at about 1 MB per 8 seconds.

I had a mixture of v2, v3, and v4 pieces. I cleaned up the client and server versions and everything seems to be much happier now.

I am not too sure why the version incompatibility wasn't cleaning up the old connections on the server :S
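
For anyone hitting the same mismatch, one possible stop-gap (an assumption on my part, not something tested in this thread) is to let a v4 server temporarily accept Engine.IO v3 clients while the clients are being upgraded; aligning client and server versions is still the real fix:

const { Server } = require('socket.io');

// allowEIO3 lets Socket.IO v2 clients (Engine.IO protocol 3) connect to a v4 server
const io = new Server(3000, { allowEIO3: true });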
