high tls memory usage (rss) #1522
Could be reproduced with:

```js
'use strict';

var wreck = require('wreck');

function callback(error, res, payload) {
  if (error) {
    console.log(error);
    return;
  }
  console.log('statusCode: ', res.statusCode);
  setTimeout(iterate, 10);
}

function iterate() {
  console.log(JSON.stringify(process.memoryUsage()));
  console.log('iterating...');
  wreck.get('https://google.com/', callback);
}

iterate();
```
I found a work-around for that particular testcase.

```js
'use strict';

var requests = 20;
var total = 1000;
// If `maxSockets` is less than `requests`, then RSS stops growing from some point.
var maxSockets = 10;

var https = require('https');

var agent = new https.Agent();
agent.maxSockets = maxSockets;

var count = 0;
var options = {
  agent: agent,
  hostname: 'google.com'
};

function next() {
  setTimeout(iterate, 0);
}

function onError(error) {
  console.log(error);
}

function noop() {}

function onResponse(res) {
  console.log('statusCode: ', res.statusCode);
  res.on('data', noop);
  res.on('end', next);
}

function iterate() {
  console.log(JSON.stringify(process.memoryUsage()));
  if (count >= total) process.exit();
  console.log('iterating... ' + ' num: ' + (++count));
  https.request(options, onResponse)
    .on('error', onError)
    .end();
}

for (var i = 0; i < requests; i++) iterate();
```

In this example, if `maxSockets` is less than `requests`, then RSS stops growing from some point.

http://oserv.org/bugs/iojs/memory0/test-agent.js — testcase. http://oserv.org/bugs/iojs/memory0/test-fast.js could be used to reproduce this a bit faster (20 requests at the same time).
cc @indutny
Actually, it goes down after manually calling gc(). Could it be that they are only collected on a full gc run?
Until a full gc run (calling gc() manually), the TLSWraps are not collected. Full gc is not called automatically until ~500MiB rss (without valgrind/massif) is reached.
But even when rss reaches ~500MiB and full gc is called (deleting the accumulated TLSWraps), rss does not go down, it just (almost) stops growing.
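(For anyone reproducing this: the gc() mentioned above is the function exposed by the --expose-gc flag. Below is a minimal sketch of my own, not one of the original testcases; the 100-request interval and the script name are arbitrary. It forces a full collection periodically and logs memory before and after, to check whether rss actually drops once the TLSWraps are released.)

```js
'use strict';
// Run with: iojs --expose-gc test-force-gc.js
// Forces a full gc every 100 requests and logs process.memoryUsage()
// before and after, to see whether rss drops when TLSWraps are collected.
var https = require('https');

var options = { hostname: 'google.com' };
var count = 0;

function logMemory(label) {
  console.log(label, JSON.stringify(process.memoryUsage()));
}

function onResponse(res) {
  res.on('data', function noop() {});
  res.on('end', function () {
    if (++count % 100 === 0) {
      logMemory('before gc:');
      global.gc(); // full collection, available because of --expose-gc
      logMemory('after gc: ');
    }
    setTimeout(iterate, 0);
  });
}

function iterate() {
  https.get(options, onResponse).on('error', console.log);
}

iterate();
```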
@ChALkeR I have modified the test case to work with a localhost server (partly because I'm on 3G internet, partly because I'm not so patient). And it appears to me that it grows up to 1.1gb very fast, then drops to 800mb and then stays around 600mb for 50 parallel connections. The breaking point (1.1gb) happens at iteration 21373 for me. Have you tried waiting that long?
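(For completeness, here is roughly what such a localhost setup looks like. This is an editorial sketch, not the modified test case mentioned above; key.pem/cert.pem and port 8443 are placeholder values for a pre-generated self-signed pair.)

```js
'use strict';
// Minimal local HTTPS server to point the client testcases at, so the
// repro does not depend on network latency to google.com.
// key.pem / cert.pem are assumed to be a self-signed pair generated beforehand.
var fs = require('fs');
var https = require('https');

var server = https.createServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
}, function (req, res) {
  res.end('ok');
});

server.listen(8443, function () {
  console.log('listening on https://localhost:8443/');
});
```

In the client testcase the options would then become something like `{ hostname: 'localhost', port: 8443, rejectUnauthorized: false }`, the last flag only because the certificate is self-signed.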
I have a fix for lowering that 1.1gb, btw. Will push a PR tomorrow.
Lowering to which point? If … Just checked — in this case …
@ChALkeR to 400mb as far as I remember. Anyway, do you see that everything is getting collected after growing to 1gb?
No. For me, full gc kicks in at 500 MiB. But it does not lower rss memory usage, it just (almost) stops growing.
@ChALkeR I wasn't talking about GC in general. I was talking about the point where all accumulated TLSWraps are collected and RSS lowers significantly.
TLSWraps are not collected until a full gc run.
So why is this issue called a "leak" then? ;) If the RSS stays the same?
Because I found that out (the 500 MiB cap) after reporting the issue. =)
There are two questions here:

1. …
2. …
Erm, I swear I thought I put (2) there :) Sorry!
Yep, I did. It's the flavored Markdown that borked me out.
Ah, that's the Markdown. It re-numbers lists. Even 1. 1. 1. is displayed as:

1. …
2. …
3. …
I could make RSS stay around 200mb with this patch:

```diff
diff --git a/src/node_crypto.h b/src/node_crypto.h
index 75ffe4f..fcc8cbe 100644
--- a/src/node_crypto.h
+++ b/src/node_crypto.h
@@ -140,12 +140,15 @@ class SSLWrap {
         session_callbacks_(false),
         new_session_wait_(false) {
     ssl_ = SSL_new(sc->ctx_);
+    env_->isolate()->AdjustAmountOfExternalAllocatedMemory(kExternalSize);
     CHECK_NE(ssl_, nullptr);
   }
 
   virtual ~SSLWrap() {
     if (ssl_ != nullptr) {
       SSL_free(ssl_);
+      const int64_t len = static_cast<int64_t>(kExternalSize);
+      env_->isolate()->AdjustAmountOfExternalAllocatedMemory(-len);
       ssl_ = nullptr;
     }
     if (next_sess_ != nullptr) {
@@ -169,6 +172,11 @@ class SSLWrap {
   inline bool is_waiting_new_session() const { return new_session_wait_; }
 
  protected:
+  // Size allocated by OpenSSL: one for SSL structure, one for SSL3_STATE and
+  // some for buffers.
+  // NOTE: Actually it is much more than this.
+  static const int kExternalSize = sizeof(SSL) + sizeof(SSL3_STATE) + 42 * 1024;
+
   static void InitNPN(SecureContext* sc);
   static void AddMethods(Environment* env, v8::Handle<v8::FunctionTemplate> t);
```
That makes v8 take at least the minimum possible external allocation into account, so full gc kicks in earlier.
I'm not sure what the problem is then :)
Well, 200MiB just for requests is still a problem, isn't it?
Do not keep SSL structure in memory once socket is closed. This should lower the memory usage in many cases. Fix: #1522 PR-URL: #1529 Reviewed-By: Shigeki Ohtsu <[email protected]>
Ensure that GC kicks in at the right times and the RSS does not blow up. Fix: #1522 PR-URL: #1529 Reviewed-By: Shigeki Ohtsu <[email protected]>
When connecting to server with `keepAlive` turned off - make sure that the read/write buffers won't be kept in a single use SSL_CTX instance after the socket will be destroyed. Fix: #1522 PR-URL: #1529 Reviewed-By: Shigeki Ohtsu <[email protected]>
Destroy singleUse context right after it is going out of use. Fix: #1522 PR-URL: #1529 Reviewed-By: Shigeki Ohtsu <[email protected]>
https.request leaks in this example: rss grows over time, heapUsed remains about constant (~30m).

This could be reproduced with all released iojs versions that I tried (v1.0.1 and v1.8.1).
Node v0.12.2 behaves better in this testcase: rss grows to about 100m, but goes down periodically (I wouldn't call that a perfect behaviour, let's call that «fine»).
53ba494, dad73f6 and 8a83eba are three sequential commits.
53ba494 is fine (as 0.12), dad73f6 fails to build, 8a83eba leaks.
dad73f6 is «upgrade v8 to 3.31.74.1» with 831 files changed, 104394 insertions(+), 21427 deletions(-).
Node v0.12.2 could be fine because it still uses v8 3.28.73.
io.js 1.0.0/1.1.0 + shared v8 3.30.33.16 behaves fine (the same as 0.12).
All the testcases and massif data are at http://oserv.org/bugs/iojs/memory0/