
running webpack under node version higher than v10.1 produces heavy cpu usage and a high number of Idle Wake Ups #22314

Open
plitzenberger opened this issue Aug 14, 2018 · 11 comments
Labels
performance Issues and PRs related to the performance of Node.js.


@plitzenberger

  • Working version: v10.1.0
  • Issue appears on versions: v10.2.1 v10.3.0 v10.4.1 v10.5.0 v10.6.0 v10.7.0 v10.8.0
  • NVM VERSION: 0.33.11
  • Platform: Darwin 17.6.0 Darwin Kernel Version 17.6.0; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64
  • Subsystem: unknown

After upgrading to the latest version, I'm suffering constant heavy CPU load when running webpack (even without any file system changes in the code base). The machine becomes almost unusable during active development.

package.json webpack versions:

{
  "devDependencies": {
    ...
    "nodemon": "^1.14.9",
    ...
    "webpack": "^3.10.0",
    "webpack-assets-manifest": "^1.0.0",
    "webpack-bundle-analyzer": "^2.9.1",
    "webpack-dev-middleware": "^2.0.4",
    "webpack-dev-server": "^2.10.0",
    "webpack-hot-middleware": "^2.21.0",
    "webpack-manifest-plugin": "^1.3.2",
    "webpack-spritesmith": "^0.3.3"
  }
}

The number of "Idle Wake Ups" in the macOS Activity Monitor is high when the issue appears.

v10.1.0 after 5 minutes without any file system changes:
[screenshot: Activity Monitor, node v10.1.0 after 5 min]

v10.2.1 after 5 minutes without any file system changes:
[screenshot: Activity Monitor, node v10.2.1 after 5 min]

v10.3.0 after 5 minutes without any file system changes:
[screenshot: Activity Monitor, node v10.3.0 after 5 min]

Any help/guidance on how to debug this issue is highly appreciated.

@ChALkeR added the performance and v10.x labels Aug 15, 2018
@vmarchaud (Contributor)

Do you have sample code that reproduces the issue?
Otherwise, you could add a flag to node to enable the debugger (doc) and then record a CPU profile, which will indicate which functions use a lot of CPU.
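As a generic starting point (a sketch with a stand-in workload, not this project's actual build), Node's built-in V8 profiler can produce such a CPU profile without attaching a debugger:

```shell
# Stand-in workload; replace with your real entry point (e.g. the webpack CLI)
printf 'let s = 0; for (let i = 0; i < 1e6; i++) s += i;\n' > app.js

# Record V8 profiler ticks while it runs; this writes an isolate-*.log file
node --prof app.js

# Post-process the tick log into a human-readable summary
node --prof-process isolate-0x*.log > profile.txt
```

The resulting profile.txt contains the tick summaries ([JavaScript], [C++], [Summary] sections) showing where CPU time was spent.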

@mdarse (Contributor) commented Sep 18, 2018

I'm encountering the same issue. I'm also using webpack, but version 2.3.3, on Darwin 16.7.0 Darwin Kernel Version 16.7.0; root:xnu-3789.73.14~1/RELEASE_X86_64 x86_64, if that matters.

After running my code with the V8 profiler enabled (--prof), I get the following result:

 [C++]:
   ticks  total  nonlib   name
  57021   54.0%   56.1%  T _munmap
   8665    8.2%    8.5%  T ___carbon_delete
   3421    3.2%    3.4%  T _uv_timer_stop
   1982    1.9%    1.9%  T ___channel_get_opt
   1048    1.0%    1.0%  t _heap_node_swap
    929    0.9%    0.9%  T node::contextify::ContextifyScript::New(v8::FunctionCallbackInfo<v8::Value> const&)
    895    0.8%    0.9%  T __pthread_qos_class_decode
    893    0.8%    0.9%  t _szone_malloc_should_clear
    889    0.8%    0.9%  t node::fs::Read(v8::FunctionCallbackInfo<v8::Value> const&)
    627    0.6%    0.6%  T _uv_timer_start
    469    0.4%    0.5%  t _timer_cb

(...)

 [Summary]:
   ticks  total  nonlib   name
  10837   10.3%   10.7%  JavaScript
  90117   85.3%   88.6%  C++
   2218    2.1%    2.2%  GC
   3965    3.8%          Shared libraries
    741    0.7%          Unaccounted

 [C++ entry points]:
   ticks    cpp   total   name
   2921   29.8%    2.8%  T v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*)
    669    6.8%    0.6%  T v8::internal::Runtime_CompileOptimized_Concurrent(int, v8::internal::Object**, v8::internal::Isolate*)
    554    5.7%    0.5%  T v8::internal::StringTable::LookupStringIfExists_NoAllocate(v8::internal::String*)
    499    5.1%    0.5%  T v8::internal::Runtime_CompileLazy(int, v8::internal::Object**, v8::internal::Isolate*)
    437    4.5%    0.4%  T v8::internal::Runtime_SetDataProperties(int, v8::internal::Object**, v8::internal::Isolate*)
    392    4.0%    0.4%  T _munmap

(...)

 [Bottom up (heavy) profile]:
  Note: percentage shows a share of a particular caller in the total
  amount of its parent calls.
  Callers occupying less than 1.0% are not shown.

   ticks parent  name
  57021   54.0%  T _munmap
   8665    8.2%  T ___carbon_delete
   3421    3.2%  T _uv_timer_stop
   3062    2.9%  /usr/lib/system/libsystem_c.dylib
   1982    1.9%  T ___channel_get_opt

It seems most of the time is spent in C++; does anyone know what v8::internal::Builtin_HandleApiCall is for?

@Trott (Member) commented Nov 21, 2018

Can anyone report whether this is an issue in Node.js 11.2.0? How about a more recent 10.x version like 10.13.0?

@Jamesernator

I've been seeing this problem on two colleagues' MacBooks. We're not using webpack, but a lot of watchers are being created with chokidar and consumed manually.

I used nvm to compare three different versions of node. The current version in use is 11.0, and it has the performance issue, as does 11.2. However, when I ran the same script with 10.0, the issue was not present and the CPU usage was actually reasonable.

With the versions where the issue is present, the CPU is constantly at 150%+ usage, whereas with version 10.0 the process was regularly below 1-2%.

@jraoult commented Dec 29, 2018

@Trott I've been on the Node LTS (10.x) line for this reason for a few months now, after trying v11 when it was first released. I gave it another try today with v11.6.0, and I can confirm the problem is still there, while v10.15 is fine.

@BridgeAR (Member) commented Jan 9, 2019

@jraoult you said that the issue is not present on v10.15? Could you check at which 10.x version this changed?

@jraoult commented Jan 9, 2019

@BridgeAR sure, I'll try to do a "bisect" ASAP. Bear with me.

EDIT: Truth be told, I just can't reproduce with any ^10 version, whereas 11.6 goes straight over 100% CPU usage and has a high count of Idle Wake Ups (oscillating between 8 and 15). So it seems my case has the same symptoms but does not occur with the same versions. What else can I do to help, @BridgeAR?

@hulkish commented Apr 11, 2019

@plitzenberger Can you set `parallelism: 1` (it defaults to `100`) and run your perf tests again? This will provide a better baseline.
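For reference, a minimal sketch of what that change looks like in a webpack config (the file is a placeholder, not this project's actual config):

```javascript
// webpack.config.js (sketch): cap webpack at one concurrently processed module
const config = {
  // parallelism defaults to 100; lowering it reduces the amount of
  // concurrent fs and watcher activity, giving a cleaner perf baseline.
  parallelism: 1,
};

module.exports = config;
```

If CPU usage and idle wake-ups drop noticeably at `parallelism: 1`, that points at concurrency in the build pipeline rather than a single hot loop.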

@gireeshpunathil (Member)

@plitzenberger - is this still an issue?

I came across this: wakeup profiling in Linux, but I'm not sure this capability exists for the Mac.

IMO the --prof data does not help much in wakeup debugging, as it focuses only on CPU usage while running, not on by whom, when, and why a thread was woken up.

But if:

  • the profile data is an accurate reflection of the high CPU at the time period that spans the idle wake ups,
  • all the idle wake ups are causing thread(s) to consume high CPU,

then I think we can make these observations (from the above data):

  • the user share has increased from 7% in the good case to 24% in the bad case, which means node was predominantly running in user space with high CPU
  • the presence of uv_timer_start, uv_timer_stop, node::fs::Read and munmap in the profile data may indicate a setInterval that kicks in periodically and processes a large file's content.

Running node under dtruss -t open -s -f node &lt;args&gt; should trace the open system calls and produce stacks of the call chain.

$ cat 22314.js

setInterval(() => {
  const d = require('fs').readFileSync(process.execPath)
  const c = d.swap16()
}, 1000)
$ sudo dtruss -t open -s -f node 22314

...


23903/0x1524bf:  open("/usr/local/bin/node\0", 0x1000000, 0x1B6)		 = 20 0

              libsystem_kernel.dylib`__open+0xa
              node`uv_fs_open+0xeb
              node`node::fs::Open(v8::FunctionCallbackInfo<v8::Value> const&)+0x389
              node`v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo)+0x220
              node`v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments)+0x22b
              node`v8::internal::Builtin_Impl_HandleApiCall(v8::internal::BuiltinArguments, v8::internal::Isolate*)+0x110
              node`Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit+0x39
              node`Builtins_InterpreterEntryTrampoline+0x2a1
              node`Builtins_InterpreterEntryTrampoline+0x2a1
              node`Builtins_InterpreterEntryTrampoline+0x2a1
              node`Builtins_InterpreterEntryTrampoline+0x2a1
              node`Builtins_InterpreterEntryTrampoline+0x2a1
              node`Builtins_JSEntryTrampoline+0x5a
              node`Builtins_JSEntry+0x78
...

That should give us sufficient clues about what is happening in the idle times.

@jasnell (Member) commented Jan 11, 2021

I know it's been a while on this one without progress, but revisiting it, I'm really wondering whether this is related to the changes made to fs.readFile() in terms of how it uses the threadpool (see #25741)... If that's the case, the only solution is to move away from fs.readFile() and use fs.read() or some other alternative.

@jasnell (Member) commented Apr 26, 2021

Given that 10.x is reaching end of life and it's not clear if this issue is limited to 10.x or not, I'm removing the 10.x label.

@jasnell removed the v10.x label Apr 26, 2021