[Bug?]: workspaces foreach using too much memory #3395
Comments
Working on a reproduction case.
@arcanis Could this have to do with how Clipanion spawns new processes?
Edit: Using Lerna for these scripts did not produce the memory issues. I have not yet checked how Lerna spawns processes.
The scripts are executed within the same process (here) - it's possible there's a weird interaction somewhere, but my assumption was that in-process execution would be lighter than spawning subprocesses 🤔
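To make the two execution models being compared here concrete, below is a minimal Node/TypeScript sketch - illustrative only, not Yarn's or Lerna's actual code. The `runInProcess` and `runAsSubprocess` names are made up for the example; the point is simply that in-process tasks share the parent's memory space, while a subprocess per script gets its own.

```ts
// Illustrative sketch of the two execution models discussed above (not Yarn's code).
import { spawn } from "node:child_process";

// Model 1: run the task inside the current process.
// All tasks share the parent's event loop and memory space.
async function runInProcess(task: () => Promise<void>): Promise<void> {
  await task();
}

// Model 2: spawn one subprocess per script.
// Each script gets its own process, so its memory is accounted for separately.
function runAsSubprocess(command: string, args: string[]): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args, { stdio: "inherit" });
    child.on("error", reject);
    child.on("exit", (code) => resolve(code ?? 1));
  });
}

async function main() {
  await runInProcess(async () => console.log("building in-process"));
  await runAsSubprocess("node", ["-e", "console.log('building in a subprocess')"]);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```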
I tested on a project with 277 workspaces and a total of 6449 packages; it sits at about 400 MB of memory and doesn't climb, so project size doesn't seem to be the problem.
Our situation is: CI host > spawning a Docker container > running yarn commands.
We've found a resolution for this by running the following command on the host. Inspiration came from nodejs/node#25382 (comment).
Edit: If all of these child processes allocate the same amount of memory as their parent process, we get the OOM error, even though there is still plenty of memory to go around. The child processes are forked on Linux and claim they "might use" as much memory as the parent, so this amount is allocated. Now, this has to do with how Node.js forks processes on Linux and maybe not so much with Yarn, but I would argue that the issue is still valid, as spawning processes with alternatives (a custom JS script or Lerna) does not make the same CI system go OOM.
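The exact host command isn't preserved in this thread, but the linked Node issue is about the kernel's memory overcommit accounting during fork. As a hedged illustration of the mechanism described above (assumptions: Linux, strict overcommit settings, ballast size tuned to the machine - this is not a reproduction of this repo's setup), the sketch below spawns a trivial child from a deliberately large parent; under strict overcommit the spawn itself can fail with ENOMEM even though the child needs almost nothing and plenty of memory is free.

```ts
// Sketch of the failure mode described above.
// Assumptions: Linux host, strict kernel overcommit, enough ballast to approach the commit limit.
import { spawnSync } from "node:child_process";

// Inflate the parent's footprint with off-heap buffers (~4 GB here; adjust to the host).
const ballast: Buffer[] = [];
for (let i = 0; i < 16; i++) {
  ballast.push(Buffer.alloc(256 * 1024 * 1024, 1));
}

// The child is trivial, but the fork step may be charged as if it needed the
// parent's full footprint, so the spawn itself can fail with ENOMEM/EAGAIN.
const result = spawnSync(process.execPath, ["-e", "process.exit(0)"], { stdio: "ignore" });

if (result.error) {
  console.error("spawn failed despite free memory:", result.error.message);
} else {
  console.log("spawn succeeded");
}

// Keep the ballast referenced so it stays alive until after the spawn attempt.
console.log(`ballast: ${ballast.length * 256} MB`);
```

Under an overcommit-friendly policy the same spawn succeeds, which matches the observation below that the CI machine reports plenty of free memory yet still hits ENOMEM.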
I've been tooling around with replacing Lerna with workspaces foreach recently, and while my memory issues aren't as severe as yours, I have also seen the problem of lots of intermediate processes that do barely anything sucking up resources. In an attempt to solve it, I've been hacking around with a plugin that hooks into script execution: https://github.com/seansfkelley/berry/tree/alias-run/packages/plugin-run-inline
Self-service
Describe the bug
We used to have a custom script building a dependency tree for our monorepo and building packages in the right order.
Since the workspaces plugin does the same thing, we've replaced this script.
However, when running the command on CI, it goes out of memory.
As the workspaces foreach command is essentially doing the same thing as our former JS script, this is unexpected behavior.
Script:
yarn workspaces foreach --topological-dev --parallel --verbose --jobs 4 run build
It also goes out of memory with 2 jobs, while we used to be able to run this with 4 workers.
Output on CI:
To reproduce
You need quite a big monorepo to reproduce this.
If there are ideas for how to set this up, I will try to make a solid reproduction case.
Environment
Additional context
It seems to me that there is some kind of memory leak when using `yarn workspaces foreach` on a big monorepo (~80+ packages). We have a test suite per package, and when running those suites sequentially with `workspaces foreach`, even those are going out of memory on CI at the moment.
Edit:
Running the following command on CI:
`CI=true yarn workspaces foreach -v run test --silent --passWithNoTests`
where `test` in every workspace is defined as `"test": "yarn g:jest"`
and the top-level command is `"g:jest": "cd $INIT_CWD && jest"`
goes out of memory as soon as it needs to actually build a test.
Edit 2:
Added memory logging on CI. It shows that our machine has plenty of memory available, but we are still getting an ENOMEM error.
Edit 3:
`yarn workspaces foreach` crashes as soon as it tries to spawn the child process.
There is no output from the process or command itself.
The problem seems to be a memory leak in `yarn workspaces` under the Node environment.
Normal CI output:
Our instance has 16 GB of RAM and 8 CPU cores.
CI output with memory logs: