
Is [email protected] leaking memory? #237

Closed
fivdi opened this issue Mar 13, 2018 · 20 comments

@fivdi

fivdi commented Mar 13, 2018

If the following addon:

my-object.h

#include <napi.h>

class MyObject : public Napi::ObjectWrap<MyObject> {
 public:
  static Napi::Object Init(Napi::Env env, Napi::Object exports);
  MyObject(const Napi::CallbackInfo& info);

 private:
  static Napi::FunctionReference constructor;

  void Open(const Napi::CallbackInfo& info);
};

my-object.cc

#include "my-object.h"

class OpenAsyncWorker : public Napi::AsyncWorker {
  public:
    OpenAsyncWorker(Napi::Function& callback)
      : Napi::AsyncWorker(callback) {
    }

    void Execute() {
    }
};

Napi::FunctionReference MyObject::constructor;

Napi::Object MyObject::Init(Napi::Env env, Napi::Object exports) {
  Napi::HandleScope scope(env);

  Napi::Function func = DefineClass(env, "MyObject", {
    InstanceMethod("open", &MyObject::Open)
  });

  constructor = Napi::Persistent(func);
  constructor.SuppressDestruct();

  exports.Set("MyObject", func);
  return exports;
}

MyObject::MyObject(const Napi::CallbackInfo& info)
  : Napi::ObjectWrap<MyObject>(info) {
}

void MyObject::Open(const Napi::CallbackInfo& info) {
  Napi::Function cb = info[0].As<Napi::Function>();
  OpenAsyncWorker* worker = new OpenAsyncWorker(cb);
  worker->Queue();
}

addon.cc

#include <napi.h>
#include "my-object.h"

Napi::Object InitAll(Napi::Env env, Napi::Object exports) {
  return MyObject::Init(env, exports);
}

NODE_API_MODULE(addon, InitAll)

is tested with the following JavaScript:

'use strict';

const addon = require('bindings')('addon.node');
const myObject = new addon.MyObject();
let count = 0;

function open() {
  myObject.open(() => {
    count += 1;
    if (count % 1000000 === 0) {
      console.log(count);
    }
    open();
  });
}

open();

then the memory usage of the node process keeps increasing, and the process ends up crashing after the async open method has been called about 18 million times.

Here's the output of the program:

pi@raspberrypi:~/node-addon-api-leak $ node test/leak.js 
(node:1304) Warning: N-API is an experimental feature and could change at any time.
1000000
2000000
3000000
4000000
5000000
6000000
7000000
8000000
9000000
10000000
11000000
12000000
13000000
14000000
15000000
16000000
17000000
18000000
Killed

The complete addon can be seen here.
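For reference, a quick way to make this kind of growth visible from inside the script itself (a sketch added here for illustration, not part of the original repro) is to log process.memoryUsage() alongside the counter:

```javascript
'use strict';

// Sketch only: sample RSS and heap usage so growth is visible
// without watching the process in top.
function memorySnapshot() {
  const { rss, heapUsed } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';
  return `rss=${mb(rss)} heapUsed=${mb(heapUsed)}`;
}

let count = 0;

function tick() {
  count += 1;
  if (count % 100000 === 0) {
    console.log(count, memorySnapshot());
  }
  // Bounded here so the sketch terminates; the repro recurses forever.
  if (count < 300000) {
    setImmediate(tick);
  }
}

tick();
```

In the leaking case, rss keeps climbing across samples even though the script holds no references of its own.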

@mhdawson
Member

My first guess is that you may just be missing some required cleanup after your async event runs, or you create so many objects so quickly that they just don't get to run fast enough and you get a large backlog and eventually run out of memory due to that backlog.

I'm travelling today but will take a closer look later in the week.

@fivdi
Author

fivdi commented Mar 14, 2018

My first guess is that you may just be missing some required cleanup after your async event runs

This would be good. Unfortunately I can't see anything that's missing although I've looked at everything many times.

or you create so many objects so quickly that they just don't get to run fast enough and you get a large backlog and eventually run out of memory due to that backlog

Only one instance of MyObject is created from JavaScript and its async open method is called approx. 18000000 times. This means that approx. 18000000 instances of OpenAsyncWorker will be created. However, each call to the async open method (except the first) occurs in the completion callback of the previous call to open. If I understand the code in AsyncWorker::OnWorkComplete correctly, it deletes the current instance of OpenAsyncWorker immediately after calling the completion callback. If I'm not mistaken, this means that there will never be more than two instances of OpenAsyncWorker at any time.
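That lifecycle argument can be modeled in plain JavaScript (hypothetical names; a sketch rather than the real AsyncWorker): each queued worker is released right after its completion callback returns, so re-queueing from inside the callback never keeps more than two workers alive at once:

```javascript
'use strict';

// Sketch: model AsyncWorker's queue/complete/delete lifecycle to count
// how many workers can be alive at the same time.
let live = 0;     // workers constructed but not yet deleted
let maxLive = 0;

function queueWorker(callback) {
  live += 1;                        // new OpenAsyncWorker(cb)
  maxLive = Math.max(maxLive, live);
  setImmediate(() => {
    callback();                     // OnWorkComplete invokes the JS callback...
    live -= 1;                      // ...then deletes the worker
  });
}

let count = 0;

function open() {
  queueWorker(() => {
    count += 1;
    if (count < 1000) {
      open();                       // re-queue from inside the callback
    } else {
      console.log('max live workers:', maxLive); // prints 2
    }
  });
}

open();
```

The second worker exists only for the moment between re-queueing and the deletion of the one whose callback is running, so worker objects themselves cannot account for unbounded growth.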

I'm travelling today but will take a closer look later in the week.

Excellent, no rush.

I don't want to send you on a wild goose chase here, but this issue reminds me a bit of this one. However, I have no idea whether the issues are in any way related.

@fivdi
Author

fivdi commented Apr 3, 2018

@mhdawson Were you able to verify whether or not there's a memory leak here?

@mhdawson
Member

mhdawson commented Apr 4, 2018

I've not had time to investigate yet. Have a lot of things on my plate, sorry.

@mhdawson
Member

mhdawson commented Apr 10, 2018

Some investigation using the existing asyncworker test:

'use strict';
const buildType = process.config.target_defaults.default_configuration;
const assert = require('assert');

test(require(`./build/${buildType}/binding.node`));
test(require(`./build/${buildType}/binding_noexcept.node`));

function test(binding) {
  binding.asyncworker.doWork(true, function (e) {
    assert.strictEqual(typeof e, 'undefined');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  binding.asyncworker.doWork(false, function (e) {
    assert.ok(e instanceof Error);
    assert.strictEqual(e.message, 'test error');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  let count = 0;

  function open() {
    binding.asyncworker.doWork(true, () => {
      count += 1;
      if (count % 100000 === 0) {
        console.log(count);
      }
      open();
    }, 'test data');
  }

  open();
}

Running under valgrind with 5000 and 10000 iterations showed:

5000
==16094== LEAK SUMMARY:
==16094== definitely lost: 5,008 bytes in 294 blocks
==16094== indirectly lost: 80 bytes in 1 blocks
==16094== possibly lost: 304 bytes in 1 blocks
==16094== still reachable: 74,395 bytes in 1,153 blocks
==16094== suppressed: 0 bytes in 0 blocks
==16094== Reachable blocks (those to which a pointer was found) are not shown.
==16094== To see them, rerun with: --leak-check=full --show-leak-kinds=all

10000
==16109== LEAK SUMMARY:
==16109== definitely lost: 4,944 bytes in 292 blocks
==16109== indirectly lost: 80 bytes in 1 blocks
==16109== possibly lost: 304 bytes in 1 blocks
==16109== still reachable: 74,395 bytes in 1,153 blocks
==16109== suppressed: 0 bytes in 0 blocks
==16109== Reachable blocks (those to which a pointer was found) are not shown.
==16109== To see them, rerun with: --leak-check=full --show-leak-kinds=all

So I don't see an obvious leak on the native side.

The heap does seem to grow, but running with
node --trace-gc --max_old_space_size=128 asyncworker.js
the heap grows to the max heap size and then the process continues to run, and the virtual and resident sizes stay consistent.

From this I don't think it's something on the asyncworker side, so I will need to integrate the use of ObjectWrap to see if we get unbounded memory growth with that.

@mhdawson
Member

mhdawson commented Apr 10, 2018

Existing asyncworker test case with use of object wrap integrated:

asyncworker.cc

#include "napi.h"

using namespace Napi;

class TestWorker : public AsyncWorker {
public:
  static void DoWork(const CallbackInfo& info) {
    bool succeed = info[0].As<Boolean>();
    Function cb = info[1].As<Function>();
    Value data = info[2];

    TestWorker* worker = new TestWorker(cb);
    worker->Receiver().Set("data", data);
    worker->_succeed = succeed;
    worker->Queue();
  }

  TestWorker(Function cb) : AsyncWorker(cb), _succeed(true) {}

protected:
  void Execute() override {
    if (!_succeed) {
      SetError("test error");
    }
  }

private:
  bool _succeed;
};

class MyObject : public Napi::ObjectWrap<MyObject> {
 public:
  static Napi::Object Init(Napi::Env env, Napi::Object exports);
  MyObject(const Napi::CallbackInfo& info);

 private:
  static Napi::FunctionReference constructor;

  void Open(const Napi::CallbackInfo& info);
};

Napi::FunctionReference MyObject::constructor;

Napi::Object MyObject::Init(Napi::Env env, Napi::Object exports) {
  Napi::HandleScope scope(env);

  Napi::Function func = DefineClass(env, "MyObject", {
    InstanceMethod("open", &MyObject::Open)
  });

  constructor = Napi::Persistent(func);
  constructor.SuppressDestruct();

  exports.Set("MyObject", func);
  return exports;
}

MyObject::MyObject(const Napi::CallbackInfo& info)
  : Napi::ObjectWrap<MyObject>(info) {
}

void MyObject::Open(const Napi::CallbackInfo& info) {
  Napi::Function cb = info[0].As<Napi::Function>();
  TestWorker* worker = new TestWorker(cb);
  worker->Queue();
}


Object InitAsyncWorker(Env env) {
  Object exports = Object::New(env);
  exports["doWork"] = Function::New(env, TestWorker::DoWork);
  return MyObject::Init(env, exports);
}

asyncworker.js

'use strict';
const buildType = process.config.target_defaults.default_configuration;
const assert = require('assert');

test(require(`./build/${buildType}/binding.node`));
test(require(`./build/${buildType}/binding_noexcept.node`));

function test(binding) {
  binding.asyncworker.doWork(true, function (e) {
    assert.strictEqual(typeof e, 'undefined');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  binding.asyncworker.doWork(false, function (e) {
    assert.ok(e instanceof Error);
    assert.strictEqual(e.message, 'test error');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  const myObject = new binding.asyncworker.MyObject();
  let count = 0;

  function open() {
    myObject.open(() => {
      count += 1;
      if (count % 1000000 === 0) {
        console.log(count);
      }
      open();
    });
  }

  open();
}

@mhdawson
Member

mhdawson commented Apr 10, 2018

using the following to limit the number of loops in the open() function:

if (count < X) {
  open();
}

Valgrind on 10000

==25463== HEAP SUMMARY:
==25463==     in use at exit: 164,324 bytes in 1,470 blocks
==25463==   total heap usage: 181,094 allocs, 179,624 frees, 52,618,468 bytes allocated
==25463==
==25463== LEAK SUMMARY:
==25463==    definitely lost: 5,160 bytes in 298 blocks
==25463==    indirectly lost: 72 bytes in 2 blocks
==25463==      possibly lost: 76,425 bytes in 15 blocks
==25463==    still reachable: 82,667 bytes in 1,155 blocks
==25463==         suppressed: 0 bytes in 0 blocks
==25463== Rerun with --leak-check=full to see details of leaked memory

Valgrind on 20000

==25478== HEAP SUMMARY:
==25478==     in use at exit: 164,324 bytes in 1,470 blocks
==25478==   total heap usage: 341,518 allocs, 340,048 frees, 64,446,004 bytes allocated
==25478==
==25478== LEAK SUMMARY:
==25478==    definitely lost: 5,136 bytes in 298 blocks
==25478==    indirectly lost: 96 bytes in 2 blocks
==25478==      possibly lost: 76,425 bytes in 15 blocks
==25478==    still reachable: 82,667 bytes in 1,155 blocks
==25478==         suppressed: 0 bytes in 0 blocks
==25478== Rerun with --leak-check=full to see details of leaked memory
==25478==

So I don't see any native memory leak based on the number of times open is called.

@mhdawson
Member

mhdawson commented Apr 10, 2018

Running with node --trace-gc --max_old_space_size=128 asyncworker.js

and watching memory with top, memory increases and then stabilizes (shown at about 14 minutes into the run) at approximately:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
25500 mhdawson  20   0 1290348 236352  12132 R 174.6  1.4  14:11.08 node
 1583 jsbuild   20   0 7664760 186020  15260 S   0.7  1.1  33:11.52 java
  123 root      20   0       0      0      0 S   0.3  0.0   4:42.11 kworker/3:1

GC activity also stabilizes, with frequent GCs freeing about 6 MB each time:

[25500:0x2c2dd60]   512946 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 176.4 / 33.3 ms  (+ 173.2 ms in 193 steps since start of marking, biggest step 61.9 ms, walltime since start of marking 383 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   513336 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 181.5 / 40.5 ms  (+ 175.8 ms in 193 steps since start of marking, biggest step 63.0 ms, walltime since start of marking 390 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   513718 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 175.1 / 34.5 ms  (+ 172.6 ms in 190 steps since start of marking, biggest step 61.3 ms, walltime since start of marking 382 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   514100 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 175.5 / 34.5 ms  (+ 172.9 ms in 193 steps since start of marking, biggest step 61.7 ms, walltime since start of marking 381 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   514487 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 178.8 / 33.8 ms  (+ 174.3 ms in 193 steps since start of marking, biggest step 62.4 ms, walltime since start of marking 386 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   514868 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 173.3 / 33.5 ms  (+ 173.7 ms in 196 steps since start of marking, biggest step 61.7 ms, walltime since start of marking 380 ms) finalize incremental marking via task GC in old space requested
[25500:0x2c2dd60]   515249 ms: Mark-sweep 126.5 (132.3) -> 126.5 (132.3) MB, 175.9 / 33.4 ms  (+ 172.0 ms in 192 steps since start of marking, biggest step 61.1 ms, walltime since start of marking 381 ms) finalize incremental marking via task GC in old space requested

@mhdawson
Member

These were done with the latest node master and the current node-addon-api repo.

@mhdawson
Member

@fivdi can you take a look at the code I used above? I think I integrated your example properly, but it does not look like there is memory leaking to me.

I see you are on a Raspberry Pi, which might not have as much memory as I do. Can you try with --max_old_space_size=128 to see if the process still gets terminated?

@fivdi
Author

fivdi commented Apr 11, 2018

@mhdawson

@fivdi can you take a look at the code I used above? I think I integrated your example properly, but it does not look like there is memory leaking to me.

It looks like the example was integrated correctly into asyncworker.cc, but it still looks to me like there is a memory leak (see below).

I see you are on a Raspberry Pi, which might not have as much memory as I do. Can you try with --max_old_space_size=128 to see if the process still gets terminated?

Yes, it's a Raspberry Pi 3 with 1 GB of RAM. I tried --max_old_space_size=128 with my example from the original post; here's the output:

pi@raspberrypi:~/node-addon-api-leak/node_modules/node-addon-api-leak/test $ node --max_old_space_size=128 leak.js 
100000
200000
300000
400000
500000
600000
700000
800000
900000
1000000
1100000
1200000
1300000
1400000
1500000
1600000
1700000
1800000
1900000
2000000
2100000
2200000
2300000
2400000
2500000
2600000
2700000
2800000
2900000
3000000
3100000
3200000
3300000
3400000
3500000
3600000
3700000
3800000
3900000
4000000
4100000
4200000
4300000
4400000
4500000
4600000

<--- Last few GCs --->

[20259:0x2297110] 28022182 ms: Mark-sweep 126.6 (133.2) -> 126.6 (133.2) MB, 1649.2 / 91.2 ms  allocation failure GC in old space requested
[20259:0x2297110] 28023559 ms: Mark-sweep 126.6 (133.2) -> 126.6 (133.2) MB, 1376.2 / 127.0 ms  last resort GC in old space requested
[20259:0x2297110] 28024989 ms: Mark-sweep 126.6 (133.2) -> 126.6 (133.2) MB, 1430.3 / 142.5 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x33914c4d <JSObject>
    1: open(aka open) [/home/pi/node-addon-api-leak/node_modules/node-addon-api-leak/test/leak.js:~7] [pc=0x4f16e8b8](this=0x51404185 <undefined>)
    2: /* anonymous */ [/home/pi/node-addon-api-leak/node_modules/node-addon-api-leak/test/leak.js:13] [bytecode=0x51d7d779 offset=66](this=0x4d9ef199 <Object map = 0x3de841c9>)

==== Details ================================================

[1]: open(aka...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted
pi@raspberrypi:~/node-addon-api-leak/node_modules/node-addon-api-leak/test $ 

leak.js was modified to display count every 1e5 iterations rather than every 1e6 iterations. As can be seen, there is a fatal error when the JavaScript heap runs out of memory.

The program was running for several hours before the error occurred. Displaying count to the console got slower and slower as time went on. Everything looked stable in top.
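That progressive slowdown is consistent with the collector thrashing as the heap approaches its limit. One way to observe such stalls from inside a script (a sketch added for illustration, not something used in this thread) is to time how late a scheduled event-loop turn actually fires:

```javascript
'use strict';

// Sketch: measure event-loop delay. Long GC pauses near the heap limit
// show up as a setImmediate callback firing much later than scheduled.
function measureLoopDelay(callback) {
  const start = process.hrtime.bigint();
  setImmediate(() => {
    const delayMs = Number(process.hrtime.bigint() - start) / 1e6;
    callback(delayMs);
  });
}

measureLoopDelay((ms) => {
  console.log(`event-loop delay: ${ms.toFixed(2)} ms`);
});
```

On a healthy heap the delay stays in the sub-millisecond range; when mark-sweep cycles take over a second each, as in the --trace-gc output above, the delay grows accordingly.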

@fivdi
Author

fivdi commented Apr 11, 2018

@mhdawson I did the last test with v10.0.0-nightly2018040653aaa55a3a and npm i fivdi/node-addon-api-leak.

@fivdi
Author

fivdi commented Apr 12, 2018

@mhdawson When the example you posted above is integrated into asyncworker.cc and asyncworker.js is modified to:

'use strict';
const buildType = process.config.target_defaults.default_configuration;
const assert = require('assert');

test(require(`./build/${buildType}/binding.node`));
test(require(`./build/${buildType}/binding_noexcept.node`));

function test(binding) {
  binding.asyncworker.doWork(true, function (e) {
    assert.strictEqual(typeof e, 'undefined');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  binding.asyncworker.doWork(false, function (e) {
    assert.ok(e instanceof Error);
    assert.strictEqual(e.message, 'test error');
    assert.strictEqual(typeof this, 'object');
    assert.strictEqual(this.data, 'test data');
  }, 'test data');

  const myObject = new binding.asyncworker.MyObject();
  let count = 0;

  function open() {
    myObject.open(() => {
      count += 1;
      if (count % 100000 === 0) {
        console.log(count + ' ' + new Date().toISOString());
      }
      open();
    });
  }

  open();
}

The output is:

pi@raspberrypi:~/node-addon-api/test $ node --max_old_space_size=128 asyncworker.js 
100000 2018-04-11T20:30:05.281Z
100000 2018-04-11T20:30:05.290Z
200000 2018-04-11T20:30:10.371Z
200000 2018-04-11T20:30:10.371Z
300000 2018-04-11T20:30:15.231Z
300000 2018-04-11T20:30:15.232Z
400000 2018-04-11T20:30:20.144Z
400000 2018-04-11T20:30:20.145Z
500000 2018-04-11T20:30:24.726Z
500000 2018-04-11T20:30:24.727Z
600000 2018-04-11T20:30:29.805Z
600000 2018-04-11T20:30:29.806Z
700000 2018-04-11T20:30:34.455Z
700000 2018-04-11T20:30:34.456Z
800000 2018-04-11T20:30:39.716Z
800000 2018-04-11T20:30:39.717Z
900000 2018-04-11T20:30:44.406Z
900000 2018-04-11T20:30:44.406Z
1000000 2018-04-11T20:30:49.086Z
1000000 2018-04-11T20:30:49.087Z
1100000 2018-04-11T20:30:54.502Z
1100000 2018-04-11T20:30:54.503Z
1200000 2018-04-11T20:30:59.169Z
1200000 2018-04-11T20:30:59.169Z
1300000 2018-04-11T20:31:03.859Z
1300000 2018-04-11T20:31:03.859Z
1400000 2018-04-11T20:31:09.655Z
1400000 2018-04-11T20:31:09.656Z
1500000 2018-04-11T20:31:14.203Z
1500000 2018-04-11T20:31:14.204Z
1600000 2018-04-11T20:31:18.920Z
1600000 2018-04-11T20:31:18.920Z
1700000 2018-04-11T20:31:23.567Z
1700000 2018-04-11T20:31:23.568Z
1800000 2018-04-11T20:31:29.596Z
1800000 2018-04-11T20:31:29.596Z
1900000 2018-04-11T20:31:34.549Z
1900000 2018-04-11T20:31:34.549Z
2000000 2018-04-11T20:31:40.967Z
2000000 2018-04-11T20:31:40.968Z
2100000 2018-04-11T20:31:47.194Z
2100000 2018-04-11T20:31:47.195Z
2200000 2018-04-11T20:32:03.977Z
2200000 2018-04-11T20:32:03.977Z
2300000 2018-04-11T20:39:06.460Z
2300000 2018-04-11T20:39:14.743Z

<--- Last few GCs --->

[1404:0x2f16118] 23992123 ms: Mark-sweep 127.1 (133.7) -> 127.1 (133.7) MB, 1648.1 / 91.4 ms  allocation failure GC in old space requested
[1404:0x2f16118] 23993549 ms: Mark-sweep 127.1 (133.7) -> 127.1 (133.7) MB, 1425.7 / 142.4 ms  last resort GC in old space requested
[1404:0x2f16118] 23994948 ms: Mark-sweep 127.1 (133.7) -> 127.1 (133.7) MB, 1398.6 / 142.5 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x41194c4d <JSObject>
    1: open(aka open) [/home/pi/node-addon-api/test/asyncworker.js:~25] [pc=0x20b6e8f4](this=0x5a004185 <undefined>)
    2: /* anonymous */ [/home/pi/node-addon-api/test/asyncworker.js:31] [bytecode=0x207c461d offset=100](this=0x49f6be45 <Object map = 0x4d4841c9>)

==== Details ================================================

[1]: open(aka open) [/home/pi/node-addon-api/test/asyncworker.js:~25] [pc...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted
pi@raspberrypi:~/node-addon-api/test $ 

After 2300000 2018-04-11T20:39:14.743Z was logged to the console, the program ran for several hours before the fatal error "JavaScript heap out of memory" occurred.

@sierrars

sierrars commented May 10, 2018

I found this issue when trying to pinpoint my own memory leak, which I believe is caused by the same problem. While I am fairly new to native add-ons, I'd like to add one observation in the hope that it is helpful.

The same symptom can be observed in both my application and @fivdi 's https://github.com/fivdi/node-addon-api-leak repo, so I am using node-addon-api-leak as the example. By running node --inspect test/leak.js and then attaching to it with the Chrome dev tools' allocation timeline tool, it can be seen that sometimes when an AsyncWorker instance is constructed (fivdi's test case shows this very consistently), there is an Object (56 bytes in size and retained by a GC root) left in the heap. Over time they add up, and I believe this also causes my own app's memory leak (I could be wrong, of course). See the screenshot attached. With my limited experience, I couldn't figure out where these objects come from. While the AsyncWorker constructor creates a new object as the receiver (if not given), that doesn't seem to be the object in question.

BTW, all this is on macOS and I haven't tried it on any other OS. The node-addon-api version is 1.3.0.

[screenshot, 2018-05-10: Chrome dev tools allocation timeline showing 56-byte Objects retained by GC roots]

@gabrielschulhof
Contributor

I ran node-addon-api-leak on my laptop, and it seems to top out at 7.8% memory usage; it stops printing after 26000000 but continues running at 100% CPU.

@gabrielschulhof
Contributor

While it prints, the CPU usage is steady, but after it stops printing it seems to spike and abate.

@mhdawson
Member

I believe I know what is leaking with the latest code. I'm still not sure why, but this is what I think is related: in napi-inl.h, the method:

inline AsyncWorker::AsyncWorker(const Object& receiver, const Function& callback)

calls

status = napi_create_async_work(
    _env, nullptr, resourceId.Value().Get("resource-id"), OnExecute,
    OnWorkComplete, this, &_work);

The value of nullptr for the resource results in napi_create_async_work allocating an empty Object to act as the resource.

Thinking that might be the problem, I modified the code to allocate a resource once and pass that in:

static ObjectReference resourceId;

inline AsyncWorker::AsyncWorker(const Object& receiver, const Function& callback)
  : _env(callback.Env()),
    _receiver(Napi::Persistent(receiver)),
    _callback(Napi::Persistent(callback)) {
  HandleScope scope(_env);

  napi_status status;
  if (resourceId.IsEmpty()) {
    napi_value resource_id;
    status = napi_create_string_latin1(
      _env, "genericXXXX", NAPI_AUTO_LENGTH, &resource_id);
    NAPI_THROW_IF_FAILED(_env, status);
    Napi::Object theObject = Napi::Object::New(_env);
    theObject.Set("resource-id", resource_id);
    resourceId = Napi::Persistent(theObject);
    resourceId.SuppressDestruct();
    printf("created resource id\n");
  }

  status = napi_create_async_work(
    _env, resourceId.Value().Get("resource-id"), resourceId.Value().Get("resource-id"),  OnExecute, OnWorkComplete, this, &_work);

With that change, when looking at the heap snapshot with the Chrome debugger, the objects that are leaking are all Strings with the value "genericXXXX". But since the code above passes in the same string every time (we only see the printout of "created resource id" once), the resource object must be being copied somewhere and then held alive.
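The pattern being described here, a per-work-item strong reference that is created but never released on completion, can be modeled in plain JavaScript. This is a hypothetical sketch of the bug and its eventual fix (resetting the persistent when the resource is destroyed), not the actual napi-inl.h or Node core code:

```javascript
'use strict';

// Hypothetical model: a global Set stands in for V8 persistent handles.
const persistents = new Set();

function createAsyncWorkLeaky(resource) {
  persistents.add(resource);          // strong ref taken per work item
  return {
    complete() {
      // Bug: the reference is never released, so every resource
      // object stays reachable for the life of the process.
    },
  };
}

function createAsyncWorkFixed(resource) {
  persistents.add(resource);
  return {
    complete() {
      persistents.delete(resource);   // fix: reset the persistent on completion
    },
  };
}

for (let i = 0; i < 1000; i++) createAsyncWorkLeaky({ id: i }).complete();
const leakedCount = persistents.size;  // 1000: every resource retained

persistents.clear();
for (let i = 0; i < 1000; i++) createAsyncWorkFixed({ id: i }).complete();
const fixedCount = persistents.size;   // 0: nothing retained

console.log({ leakedCount, fixedCount });
```

The leaky variant grows without bound no matter how small each retained object is, which matches the slow, steady growth seen in the repro.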

mhdawson added a commit to mhdawson/io.js that referenced this issue May 11, 2018
Reset the persistent that keeps the resource
Object alive when the AsyncResource is being
destroyed.

Fixes: nodejs/node-addon-api#237
@mhdawson
Member

This seems to resolve the problem: nodejs/node#20668

I can run the test case and when I take heapdumps with the chrome dev tools I no longer see growth in the heap and the resident set size stays reasonable.

@gabrielschulhof
Contributor

@mhdawson and I backed the JS part of the node-addon-api-leak test with the following native code, lacking an AsyncResource, and it did not leak:

#include <stdio.h>

#include "uv.h"
#include "node.h"

using namespace v8;

typedef struct {
  uv_work_t uv_work;
  Persistent<Function> perma_cb;
} MyWork;

static void do_work(uv_work_t* work) {}

static void done_work(uv_work_t* work, int status) {
  auto isolate = Isolate::GetCurrent();
  HandleScope scope(isolate);
  auto context = isolate->GetCurrentContext();
  node::CallbackScope cb_scope(isolate, Object::New(isolate),
      node::async_context());

  MyWork* my_work = reinterpret_cast<MyWork*>(work);
  Local<Function> js_cb =
      Local<Function>::New(isolate, my_work->perma_cb);
  js_cb->Call(context, context->Global(), 0, nullptr).ToLocalChecked();
  my_work->perma_cb.Reset();
  delete my_work;
}

static void MyObject(const FunctionCallbackInfo<Value>& info) {}

static void MyObject_open(const FunctionCallbackInfo<Value>& info) {
  MyWork *work = new MyWork;
  work->perma_cb.Reset(Isolate::GetCurrent(), info[0].As<Function>());
  uv_loop_t *loop = uv_default_loop();
  if (uv_queue_work(loop,
      reinterpret_cast<uv_work_t*>(work), do_work, done_work)
      != 0) {
    fprintf(stderr, "Failed to create work\n");
  }
}

static void Init(Local<Object> exports, Local<Value> module) {
  auto isolate = Isolate::GetCurrent();
  auto context = isolate->GetCurrentContext();
  auto js_class = FunctionTemplate::New(isolate, MyObject);
  auto js_method = FunctionTemplate::New(isolate, MyObject_open,
      Local<Value>(), Signature::New(isolate, js_class));
  js_class->PrototypeTemplate()->Set(isolate, "open", js_method);
  exports->Set(isolate->GetCurrentContext(),
    String::NewFromUtf8(isolate, "MyObject", NewStringType::kNormal)
        .ToLocalChecked(),
    js_class->GetFunction(context).ToLocalChecked()).FromJust();
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Init)

@sierrars

Thanks @mhdawson for the quick turnaround!

addaleax pushed a commit to nodejs/node that referenced this issue May 14, 2018
Reset the persistent that keeps the resource
Object alive when the AsyncResource is being
destroyed.

Fixes: nodejs/node-addon-api#237

PR-URL: #20668
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Tiancheng "Timothy" Gu <[email protected]>
Reviewed-By: Gabriel Schulhof <[email protected]>
Reviewed-By: James M Snell <[email protected]>
MylesBorins pushed a commit to nodejs/node that referenced this issue Jul 9, 2018
Reset the persistent that keeps the resource
Object alive when the AsyncResource is being
destroyed.

Fixes: nodejs/node-addon-api#237

PR-URL: #20668
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Tiancheng "Timothy" Gu <[email protected]>
Reviewed-By: Gabriel Schulhof <[email protected]>
Reviewed-By: James M Snell <[email protected]>
MylesBorins pushed a commit to nodejs/node that referenced this issue Jul 10, 2018
Reset the persistent that keeps the resource
Object alive when the AsyncResource is being
destroyed.

Fixes: nodejs/node-addon-api#237

PR-URL: #20668
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Tiancheng "Timothy" Gu <[email protected]>
Reviewed-By: Gabriel Schulhof <[email protected]>
Reviewed-By: James M Snell <[email protected]>
rvagg pushed a commit to nodejs/node that referenced this issue Aug 16, 2018
Reset the persistent that keeps the resource
Object alive when the AsyncResource is being
destroyed.

Fixes: nodejs/node-addon-api#237

PR-URL: #20668
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Tiancheng "Timothy" Gu <[email protected]>
Reviewed-By: Gabriel Schulhof <[email protected]>
Reviewed-By: James M Snell <[email protected]>