Consumes O(n) memory when evaluating long promise chains #111
I found a worse case. Here, the promises aren't chained together at all; simply initiating one promise in the `then` callback of the previous one is enough:

```js
function test(i) {
    if (i <= 0) {
        console.log("Done");
    } else {
        Q.when(i - 1).then(test);
    }
}
```

The above keeps all the intermediate values in memory until the last one finishes. But if we insert a delay like this, the problem goes away:

```js
function fixed(i) {
    if (i <= 0) {
        console.log("Done");
    } else {
        Q.nextTick(function () {
            Q.when(i - 1).then(fixed);
        });
    }
}
```
PS: I realized my way of creating promises here is a bit unusual; it was just the quickest way to reproduce the problem.
I think I found the root cause. If I comment out the `Error.captureStackTrace(promise, defer)` line in `q.js`, the memory problem goes away.

And this makes sense. If your current stack contains a bunch of closures, you're going to grab references to them here, and they can't get GCed until you let go of the promise. I don't know what the fix is here, other than to add a mode that disables these nice stack traces in favor of better memory performance.
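A minimal sketch of the suspected retention pattern, assuming V8's lazy stack capture (`makeHolder` and `work` are made-up names for illustration, not Q internals):

```js
// Illustrative only -- not Q code. Shows how a lazily-captured stack
// trace can keep closed-over data alive via its function references.
function makeHolder() {
    var bigBuffer = new Array(1e6).join("x"); // large closed-over value

    return function work() {
        var holder = {};
        // V8 records structured call-site data (which references `work`)
        // and keeps it until holder.stack is first read.
        Error.captureStackTrace(holder);
        return holder;
    };
}

// As long as `retained` is reachable and its stack is never serialized,
// the lazy trace can keep `work` -- and therefore `bigBuffer` -- alive.
var retained = makeHolder()();
```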
Nice detective work. As for the fix, we should probably either respect `Error.stackTraceLimit` or add a Q-specific way to limit or disable the captured traces.
A workaround that doesn't require altering Q is to set `Error.stackTraceLimit = 0`.

EDITED TO ADD: I posted the above before seeing domenic's reply. Yes, agreed, I think `Error.stackTraceLimit` is probably sufficient; I don't think any Q-specific limit is needed.
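A minimal sketch of that workaround, assuming a Node.js app that sets the V8 limit before creating any promises:

```js
// Workaround sketch: disable V8 stack trace collection up front, so there
// is nothing expensive for Q to capture per promise.
Error.stackTraceLimit = 0; // V8-specific setting; 0 means no frames are recorded

var Q = require("q");

function test(i) {
    if (i <= 0) {
        console.log("Done");
    } else {
        Q.when(i - 1).then(test);
    }
}

test(100000);
```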
Well, but the point of `Error.stackTraceLimit` is to bound the size of a single stack trace. If you write non-Q code, even with long call stacks, you wouldn't run into this: with, say, the default limit of 10 frames, each captured trace stays small. But we capture multiple stack traces, one for each "jump" between short stack traces, so memory grows linearly. That is, memory is O(n) in the number of jumps: with a per-trace limit of 10 frames and a chain of n promises, we still retain on the order of 10·n frames. We could perhaps use `Error.stackTraceLimit` as a budget for the total number of frames retained across jumps, rather than per capture.
Ah right, got it. I was only considering the `limit == 0` case, which mitigates my immediate problem. A constant-but-nonzero number of stack frames would be even better. :-)
OK, this is harder than I initially thought. I think it will require a redo of the long stack trace logic. Jotting down some notes here, both for myself and for anyone to comment on.

The current approach is to attach an unserialized stack trace to each promise as it is created, then splice that onto the error's own stack when a rejection is finally reported.

We actually do not support more than one level deep of long stack traces. That is,

```js
Q.resolve(5)
    .then(function () { return 10; })
    .then(function () { return 20; })
    .then(function () { throw new Error("boo!"); })
    .done();
```

only generates a single "jump": the stack from the throwing callback plus the stack attached to one earlier promise; the intermediate `then` calls are not represented. This can clearly be seen by looking at the relevant code in `q.js`. I'd like to fix this.

But why is the current approach "leaking"? It doesn't really make sense, if we only use the last stack trace, for us to keep around memory for every stack trace. Why is this happening? If @ef4 is right and commenting out the `Error.captureStackTrace` call fixes it, then the captured-but-unserialized traces must be what's holding on to memory.

The key question: is it possible that by using these not-yet-serialized stack traces we're keeping references to closures and scope that keep the promise alive, even though we've since chained off it and moved on? Test this by serializing the stack trace.

Regardless, this needs a rewrite. The current architecture leaks and doesn't give you much in exchange; i.e. it only gives you one jump. New strategy: each promise gets a chain of serialized stack traces, probably pre-concatenated. This can be capped using `Error.stackTraceLimit` or a Q-specific setting. Getting access to the existing chain is the tricky part, given where we currently hook in.
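A rough sketch of what that new strategy could look like (illustrative only; `captureSerializedStack`, `extendStackChain`, and the frame cap are hypothetical names and numbers, not Q's API):

```js
// Illustrative sketch, not Q's implementation: each promise would carry a
// pre-concatenated string of already-serialized traces, capped in size.
var MAX_FRAMES = 64; // hypothetical cap, playing the role of Error.stackTraceLimit

function captureSerializedStack() {
    var holder = {};
    Error.captureStackTrace(holder, captureSerializedStack);
    // Reading .stack serializes the trace immediately, so no live frame
    // references are retained afterwards.
    return holder.stack;
}

function extendStackChain(parentChain) {
    var combined = captureSerializedStack() +
        (parentChain ? "\nFrom previous event:\n" + parentChain : "");
    // Keep only a bounded number of lines so memory stays constant per chain.
    return combined.split("\n").slice(0, MAX_FRAMES).join("\n");
}
```

Each new promise would extend its resolver's chain with `extendStackChain`, and error reporting would simply append the stored string.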
WOW. It is indeed preventing GC because of the unserialized stack traces. That is, if I change

```js
Error.captureStackTrace(promise, defer);
```

to

```js
Error.captureStackTrace(promise, defer);
promise.stack;
```

the memory usage stays low. That means this bug should be fixed by doing #117. We're still left with the fact that there's only one stack-jump level, but that's actually pretty orthogonal.
The following example consumes a surprisingly large amount of memory.
This isn't a permanent memory leak. The memory gets freed once the last promise is fulfilled. But until then, every intermediate promise seems to be staying in the heap and is not eligible for garbage collection.
While the above is a simplified example, this memory issue is causing real problems in my application.
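The original example isn't preserved in this copy of the issue; a minimal sketch of the kind of chain described (my own reconstruction, with an arbitrary chain length) could look like this:

```js
// Illustrative reconstruction only -- not the original repro from the issue.
// Builds a long chain of then() calls with Q.
var Q = require("q");

var promise = Q.resolve(0);
for (var i = 0; i < 100000; i++) {
    promise = promise.then(function (n) { return n + 1; });
}

promise.then(function (n) {
    console.log("Done:", n);
}).done();
```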