stream: writing small messages causes massive memory usage #6623
I believe this is related to #6506.
Here's a slight variation that doesn't cause the blocking, but does still cause memory to spike rapidly:
```js
var net = require('net');
var PORT = 8000;
var SIZE = 1024;
var b = new Buffer(SIZE).fill('a');
var cntr = 0;
net.createServer(function(conn) {
write(conn);
}).listen(PORT, function() {
console.log('Listening on port 8000');
});
function write(conn) {
var lcntr = 0;
do {
cntr += SIZE;
lcntr += SIZE;
} while (conn.write(b) && lcntr < 0x40000000);
setImmediate(write, conn);
}
setInterval(function() {
var mib = cntr / 1024 / 1024;
console.log(mib.toFixed(2) + ' MB');
}, 2000);
```
Here's a possible solution:
```diff
diff --git a/lib/_stream_writable.js b/lib/_stream_writable.js
index 6404821..dbd1bc2 100644
--- a/lib/_stream_writable.js
+++ b/lib/_stream_writable.js
@@ -78,6 +78,14 @@ function WritableState(options, stream) {
   // socket or file.
   this.length = 0;
 
+  // If we're writing many small requests then it's possible to starve
+  // the event loop, since write() will always return true and the
+  // callback will always be called.
+  // This keeps track of the total number of bytes written before we've
+  // deferred to the event loop. It will defer after HWM bytes have
+  // been written.
+  this._loopWritten = 0;
+
   // a flag to see when we're in the middle of a write.
   this.writing = false;
@@ -234,8 +244,15 @@ function writeOrBuffer(stream, state, chunk, encoding, cb) {
   var len = state.objectMode ? 1 : chunk.length;
 
   state.length += len;
+  state._loopWritten += len;
 
   var ret = state.length < state.highWaterMark;
+
+  if (state._loopWritten > state.highWaterMark) {
+    ret = false;
+    state._loopWritten = 0;
+  }
+
   state.needDrain = !ret;
 
   if (state.writing || state.corked)
```
There are two issues, though. First, if there's a callback (i.e. my second example) then it'll still starve the event loop, because the callback is called synchronously. Second, it's causing a test to fail.
/cc @tjfontaine @isaacs
I posted this right after you closed the related issue. This comment may still be relevant (I think): please note that I don't think this is just memory related. There seems to be latency which I'm not sure is related to the memory utilization problem.
Hello, after reviewing this some more I have come up with a proposed patch (needs more testing) that I think solves my problem. I think the problem is that clearBuffer calls .slice on the buffer too often. Basically, for every queued write we are manipulating a large array, and this causes the extra latency on writes. Instead we should just null the values in the buffer and skip past them. Here is my proposed patch: https://gist.github.com/ashelley/bf4cc9dd22aec8ddd8df
EDIT: removed erroneous edits in the patch file; please note this applies to v0.10.0.
Can't figure out the minimal test case, but here it is:
The only thing I change in my code to stop this from happening is how often I write to the zlib.createDeflate() stream. I know this isn't much help, but I wanted to document it anyway. EDIT: probably just related to the high memory usage the OP noted.
Is there any fix planned for this issue? I am still encountering it in v0.12.2 and it's becoming a roadblock for a product I am working on :(.
@dipbhi I haven't taken another look at this since my last post here. The quickest solution for you would be to occasionally use setImmediate().
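A minimal sketch of that workaround (the helper function and its batch size are illustrative, not from this thread):
```js
// Write `total` copies of `buf`, yielding to the event loop every
// `batch` writes so queued callbacks and garbage collection can run.
function writeInBatches(conn, buf, total, batch) {
  var sent = 0;
  (function loop() {
    for (var i = 0; i < batch && sent < total; i++, sent++) {
      conn.write(buf);
    }
    if (sent < total) setImmediate(loop); // defer, then keep going
  })();
}
```
This keeps the writes in order while still letting the event loop turn over periodically.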
Ok, I will try setImmediate. Thanks for the suggestion.
By the way, would it be bad to call setImmediate before each write call?
@dipbhi if you check whether write() returns false and then wait for the drain event to be emitted, you probably won't see as much memory usage... pseudo code:
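A sketch of the pattern being described (pump() and getChunk() are illustrative names, not a real API):
```js
// Keep writing while write() returns true; once it returns false the
// internal buffer is past the high-water mark, so pause until 'drain'.
function pump(conn, getChunk) {
  var chunk;
  while ((chunk = getChunk()) !== null) {
    if (!conn.write(chunk)) {
      conn.once('drain', function() {
        pump(conn, getChunk); // resume after the buffer has flushed
      });
      return;
    }
  }
  conn.end(); // no more data to send
}
```
Here getChunk() is assumed to return the next Buffer, or null when there is nothing left to send.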
Alternatively you can watch stream._writableState.needDrain and then wait for the drain event. This will be more efficient than calling setImmediate on each write, AFAIK.
Actually, after reviewing this again: with the original example using the network stream, the highWaterMark is never hit as originally reported. So the solution might depend on the type of stream you are working with. As @trevnorris mentioned, setImmediate is your only option if you are using the socket stream. One thing I did notice, though, is that instead of monitoring stream._writableState.needDrain you can monitor stream._writableState.pendingcb to throttle your writes:
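For instance, a sketch of that check (note that _writableState is internal, undocumented API, and the threshold of 128 is arbitrary):
```js
// Defer the write while too many write callbacks are still pending.
function throttledWrite(conn, buf) {
  if (conn._writableState.pendingcb > 128) {
    setImmediate(function() {
      throttledWrite(conn, buf); // retry after the loop turns over
    });
  } else {
    conn.write(buf);
  }
}
```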
Edit: you might get higher throughput if you lower the number of setImmediate calls.
@ashelley Great points. In addition, reducing the number of calls to setImmediate() should help.
@ashelley & @trevnorris However, when I try to use a similar solution with multiple clients (e.g. a chat room), memory usage still goes up and the server seems to have a memory leak as well. I am hoping I can pick your brains a little more to see if there is a workaround for the multi-client situation as well. Here is my sample code (the server should be run with --expose-gc):
Server:
```js
var net = require('net'), conns = [];
net.createServer(function(conn) {
console.log('client connected');
conns.push(conn);
conn.on('data', function(buf){
distribute(conn, buf);
});
conn.on('error', function(err){
//console.error(err);
removeConn(conn);
})
conn.on('end', function(){
console.log('client disconnected');
removeConn(conn);
});
}).listen(8030);
var SIZE = 1024;
var b = new Buffer(SIZE);
var cntr = 0;
b.fill('a');
function distribute(conn, buf) {
for(var i = 0, len = conns.length; i < len; i++){
var targetConn = conns[i];
if(targetConn != conn){
write(targetConn, buf);
}
}
}
function write(conn, buf) {
if(conn){
if(conn._writableState.pendingcb > conn._writableState.highWaterMark) {
return setImmediate(function() {
write(conn, buf);
});
}
else{
conn.write(buf, function(err){
if(err){
removeConn(conn);
}
});
}
}
cntr += SIZE;
}
function removeConn(conn){
var connIndex = conns.indexOf(conn);
if(connIndex >= 0){ // >= 0: index 0 is a valid connection too
conns.splice(connIndex, 1);
}
}
var t = process.hrtime();
setInterval(function() {
t = process.hrtime(t);
var mib = cntr * 8 / 1024 / 1024 / (t[0] + t[1] / 1e9);
console.log(mib.toFixed(2) + ' Mb/sec');
cntr = 0;
t = process.hrtime();
}, 2000);
setInterval(function(){
console.log('calling gc');
gc();
}, 30000);
```
Client:
```js
var cp = require('child_process'),
numOfChildren = 100,
runningChildren = numOfChildren;
for(var i = 0; i < numOfChildren; i++){
var child = cp.fork('child.js', {silent: true});
child.on('close', function(){
runningChildren--;
if(runningChildren == 0){
console.log('Execution completed.');
}
});
child.stdout.on('data', function(data){
console.log('' + data);
});
child.stderr.on('data', function(data){
console.error(data.toString());
});
}
```
Client child (child.js):
```js
var net = require('net'),
numOfMsgs = 1000,
receivedCount = 0,
receivedPrintFreq = 1000,
host = '127.0.0.1',
totalBufferSize = 0,
writeCount = 0;
var conn = net.connect({port: 8030, host: host}, function() {
console.log('Connected to the server, pid - %d', process.pid);
});
conn.on('data', function(buffer){
receivedCount++;
totalBufferSize += buffer.length;
if(receivedCount % receivedPrintFreq === 0) console.log('%d messages received, buffer size - %d, pid - %d', receivedCount, totalBufferSize, process.pid);
});
conn.on('error', function(err) {
console.error(err);
});
conn.on('end', function() {
console.log('Disconnected from the server, pid - %d', process.pid);
});
var SIZE = 1024;
var b = new Buffer(SIZE);
b.fill('a');
for(var i = 0; i < numOfMsgs; i++){
conn.write(b, function(){
writeCount++;
if(writeCount === numOfMsgs){
setImmediate(function(){
conn.end();
});
}
});
}
```
Can you try node v0.12.3? I tried the first test and I didn't see any memory leaks.
@saquibkhan I was using the latest release, v0.12.2. Is there an installer available for v0.12.3?
Hmm, v0.12.2 should also be fine. I used the code below to test for memory leaks; I could see the memory leak on v0.10 but not on the v0.12 dev branch:
```js
var net = require('net');
net.createServer(function(conn) {
conn.on('drain', function() {
write(conn);
});
write(conn);
}).listen(8000, function() {
console.log('Listening on port 8000');
});
var SIZE = 1024;
var b = new Buffer(SIZE);
var cntr = 0;
b.fill('a');
function write(conn) {
while(conn.write(b)) {
cntr += SIZE;
}
}
var t = process.hrtime();
setInterval(function() {
t = process.hrtime(t);
var mib = cntr * 8 / 1024 / 1024 / (t[0] + t[1] / 1e9);
console.log(mib.toFixed(2) + ' Mb/sec');
cntr = 0;
t = process.hrtime();
}, 2000);
```
@saquibkhan OK. How about the multi-client scenario I posted later? Does that leak on the dev branch?
@trevnorris ... what do you want to do with this one?
@jasnell This is an issue because we call the callback synchronously if all data was flushed immediately, which prevents the loop from turning over and thus doesn't allow full cleanup. We could do a simple patch to force async after N calls, the same thing that libuv does: after 49,000 events are processed in the poll phase it forces the event loop to proceed. Should be a simple patch, but I never got around to it.
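A rough sketch of that idea, not an actual patch (the state field and the threshold are made up):
```js
// After N synchronous callback invocations, defer the next one with
// setImmediate so the event loop is forced to proceed and clean up.
var SYNC_LIMIT = 1000;

function invokeCallback(state, cb) {
  if (++state.syncCalls < SYNC_LIMIT) {
    cb(); // fast path: invoke synchronously
  } else {
    state.syncCalls = 0;
    setImmediate(cb); // periodically let the loop turn over
  }
}
```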
My issue is more along the lines of #8048, but they say that is a dup of this one.
That sample even uses a setTimeout after the file closes... so everything should be flushed and done before the next iteration of the process. I run it with node --expose-gc --max-old-space-size=512 test.js >err 2>&1, but unfortunately that doesn't really run out of memory the way my actual process does... though making it do so might give me a workaround: wait until the close event to dispatch another run.
@d3x0r if you check the return value of write() and wait for the drain event on wstream before continuing, you probably won't see the problem. In other words, you can't write in a for loop without checking the return value of write(): when write() returns false you have to wstream.once('drain', () => { /* start running the loop again */ }).
@ashelley thanks; I will attempt that Monday... other fish to shish-kabob. I see there's often a 'right' way that's the only way. I'm a little astounded that the file can't just evaporate with all its buffers... even if it takes a 'while'... at least scheduling in the...
I updated my comment to include the sample I'd rather use, without all the bells and whistles of events and possible reverse-reference linking... I see; the next logical step, after implementing on('drain', () => {}) for output write failures: if the input is a readline file stream, the lines don't stop immediately... so it's possible to end() the file before the drain fires. In that case the drain never fires and never unlocks the thing, so you have to track the closed state. This version shouldn't run out of memory... and yet it does. Hooray for garbage collection (NOT!)
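A sketch of the guard being described (an fs write stream fed by some line-based input; the file name and flag names are made up):
```js
var fs = require('fs');

var wstream = fs.createWriteStream('out.txt');
var waitingForDrain = false;
var inputEnded = false;

function writeLine(line) {
  if (!wstream.write(line + '\n') && !waitingForDrain) {
    waitingForDrain = true; // attach only one 'drain' handler at a time
    wstream.once('drain', function() {
      waitingForDrain = false;
      // End only after the final drain: calling end() while a 'drain'
      // is still pending is the race described above.
      if (inputEnded) wstream.end();
    });
  }
}

function onInputEnd() {
  inputEnded = true;
  if (!waitingForDrain) wstream.end();
}
```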
@ashelley I updated the sample to include drain event listening and protection to avoid end()ing the file... (see prior post). So even doing it the 'proper' way doesn't fix the leaks.
I'm not sure how to limit 'new node's' memory usage, but this code works fine and completes successfully in a small memory footprint. I ran your above code sans ES6 syntax on node 0.12.7 and it works fine: node --max_old_space_size=20 test.js. I doubt that --max_old_space_size works on new versions of node, as it probably allocates memory differently and requires a different command-line switch to "properly limit it". The key is that once you are listening to drain you don't go into "mystery wait times" and everything performs as expected. Look for answers on how to limit node's memory usage from command-line options and you will see that with your above code you get your expected results.
(The sample code as is, in a nightly build from a few weeks ago, does generate large files.) When I was first diagnosing this I used Electron, which uses node, and used its debugging tools... but when I paused the code so I could take a heap snapshot, the heap cleaned itself and went back to 340 MB (typical for the tables kept in memory)... from 1 GB. 700 MB just for writing to a file is obscene. (Core i7-6700, 3.4 GHz.) And it's not even like this is a threading problem; I'm definitely providing time on the stack when there are 0 program events. Is it a collision between doing end() while it is in the state of needing to emit drain()? I did try with once, but the input was a readline thing, and .pause() does... after 20-some messages. And setting up a .once handler blindly, without checking whether I had one set already, caused an over-scheduling warning from addEventListener. It just gets more complex when the line reader feeds a cluster that has work queues that come out and feed the output to a file... it's like 'oh, by the way, when the cluster gives you an event, don't do anything with it because I have to wait for a drain event'. This is why people say 'use a real streaming library like XXXXX' that has 20+ other dependencies itself... Could I rework this as transform filters? Maybe that wouldn't leak as much?
Internally there is an array that gets sliced one time per write, so if you don't wait for drain the internal buffer grows huge... To reproduce the issue, create a large array and remove each element by slicing it from the bottom: it's slow. By listening to the drain event you keep the internal buffer from growing huge, so you don't end up waiting for an array to get sliced inefficiently once per write. It's definitely a bug in node: it can't flush writes out in batches and it handles the internal buffer inefficiently.
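A standalone demonstration of that cost, separate from the stream internals (the queue length is arbitrary):
```js
// Draining a queue by repeated slice() copies the remainder on every
// step, O(n^2) overall; nulling entries and advancing an index is O(n).
var N = 100000;

function drainBySlice(queue) {
  while (queue.length > 0)
    queue = queue.slice(1); // copies everything that's left, every time
}

function drainByIndex(queue) {
  for (var i = 0; i < queue.length; i++)
    queue[i] = null; // null the entry and skip past it
}

var t = Date.now();
drainBySlice(new Array(N).fill(0));
console.log('slice: ' + (Date.now() - t) + 'ms');

t = Date.now();
drainByIndex(new Array(N).fill(0));
console.log('index: ' + (Date.now() - t) + 'ms');
```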
Okay... so let's suppose wstream is an array that gets things added, slices a bit off every time, and keeps adding things... the drain event supposedly fires when wstream.length == 0. Edit 3: I lied; it's losing about 4 KB per second after running a half hour or so... and it's up to 42 MB... In ideal conditions this appears to go to 39 MB and stay there. It started at 22 MB or so... I suppose I could copy the bytes flat into the buffer... that would be more like write.
The code has changed since I last looked, but I'd say you are better off trying to debug the actual node code instead of trying to mock it: https://github.com/nodejs/node/blob/v4.4.2-proposal/lib/_stream_writable.js#L398-L412
EDIT: I think it was stated above that the cause is that this cb is never called asynchronously, but I could be wrong: https://github.com/nodejs/node/blob/v4.4.2-proposal/lib/_stream_writable.js#L355
EDIT 2: I actually added global.gc() back to your code and it started garbage collecting, keeping the memory at around 300-400 MB (err, maybe not).
EDIT 3: After playing around with it a bit... node 0.12 doesn't go crazy with memory usage with this code but writes a lot slower (more CPU, I think, too). 4.4.2 and 5.10.1 write a lot faster but eat memory, so I believe the issue here is totally different depending on which version of node you are using.
Still broken in 6.1.0. Bump (or is bumping bannable?)
If I write until writable.write() returns false when writing many small buffers, Node consumes a lot of memory and the process seems to halt: as soon as the connection is made, the setInterval() stops printing and memory begins to fill up quickly.
Here's another example that definitely shouldn't be able to fill up memory:
I run these tests by starting up the Node process and then running nc localhost 8000 > /dev/null. The second one doesn't cause the same memory issue in v0.10, while the first one does.