Node v7.6.0+ memory leak #12019
Do you have a way for us to reproduce? The simpler the test case, the better.
Can't say I do, unfortunately. Aside from isolating the commits that changed in our application between the versions (which included the update from Node v7.5.0 to Node v7.6.0), we could not pin the problem to a specific section of code. The memory footprint on our production server would grow gradually over the space of a few hours, which is not something we could figure out how to replicate quickly. The two most likely culprits among the changes in v7.6.0 appear to be the upgrades to V8 5.5 and zlib 1.2.11.
The V8 upgrade seems like the most likely culprit, yes. You could build v7.6.0 from source and revert the zlib upgrade to verify it's not that. The V8 upgrade is less trivial; it's not a single commit. That said, does "memory leak" mean that the process eventually dies with an out-of-memory error, or does it just have a bigger memory footprint than before?
The memory footprint kept growing until the app was forcibly restarted when CPU/memory hit the limits (when GC was constantly running, I assume); then it would begin to grow again straight away (as indicated by the orange line on our graphs above).
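As a starting point for quantifying that kind of growth, a minimal sketch could log deltas from `process.memoryUsage()`; the sampling helper and the throwaway allocation below are invented for illustration, not taken from the reporter's application:

```javascript
// Minimal sketch: sample process.memoryUsage() before and after some
// work and compare, to make a growth trend visible in numbers.
function sampleMemory() {
  const { rss, heapUsed } = process.memoryUsage();
  return {
    rssMB: +(rss / 1048576).toFixed(1),
    heapUsedMB: +(heapUsed / 1048576).toFixed(1),
  };
}

const before = sampleMemory();
const junk = new Array(1e6).fill({}); // stand-in for real application work
const after = sampleMemory();
console.log(`allocated ${junk.length} slots:`, before, '->', after);
```

In production one would sample on an interval and ship the numbers to a graphing service, which is presumably how the CloudWatch graphs mentioned in this thread were produced.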
What are those limits? Does …
I would like to jump on this and test your suggestions, but as I am working on an Agile project I will have to schedule some time for us to try them, and/or try upgrading to the very latest Node version (v7.7.4) to see if that is any different.
Okay, since there isn't anything actionable at the moment I'll go ahead and close this out for now. Let me know when you (generic you, not just you you) have more to report or have a way to reproduce and we can revisit. Cheers. |
Is there any material gain to closing the ticket? Surely higher visibility makes it more likely that other people experiencing the same problem will be able to offer their own reproducible steps from their testing?
It declutters the issue list. People will still be able to find it but collaborators won't have to sift through non-actionable issues. |
cc/ @rgrove @coox from https://twitter.com/yaypie/status/838539575605653504. A reproducible test case would be ideal (the fewer dependencies, the better).
It appears that the culprit could have been this issue in crypto, which was just fixed in 7.8.0: |
We noticed steady and consistent memory leaks in our application when running in production using v7.6.0. At first we attributed this to a section of code using async/await, but after reverting this code change the problem persisted. As soon as we reverted the Node version used in our Docker container back to v7.5.0, the issue went away completely.
This mirrors the experiences of: https://twitter.com/yaypie/status/838539575605653504
I can provide additional graphs from Cloudwatch showing memory utilisation growing in our application if necessary.
I am surprised no one else seems to have found the same issue or raised it as an Issue or PR so far!
Stuck on v7.5.0 until this is rectified - which is a shame, as we were looking forward to trialling async/await.
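For context, the kind of async/await code that was initially (and, as it turned out, wrongly) suspected might look like this entirely hypothetical sketch; the helper names are invented and do not come from the reporter's codebase:

```javascript
// Stand-in for a real I/O call (database, HTTP, etc.).
function fetchRecord(id) {
  return new Promise((resolve) => setImmediate(() => resolve({ id })));
}

// Sequentially await each record; this pattern is new in v7.6.0
// without flags, which is why it drew suspicion first.
async function loadAll(ids) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchRecord(id));
  }
  return results;
}

loadAll([1, 2, 3]).then((r) => console.log(r.length)); // 3
```

Plain async/await like this does not leak by itself, which is consistent with the reporter's finding that reverting the code change made no difference.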