npm install hangs with node:20 #1946
Comments
I have the same issue with yarn |
Same here. My image works with Node 16, 18 and 19, but not with 20. Update: on my pipeline based on Ubuntu 22.04 (linux/amd64) it works |
I just tried on Ventura with an M2 chip and it didn't hang |
I am able to reproduce the behavior when disabling my network. I am wondering if some firewall rule might be blocking traffic. |
Hi, I'm also facing the same issue. I'm using an M1 Mac and have been observing this issue since last week. I have tried Node 20 and 18 in the alpine, slim and latest variants. The issue persists. |
I was able to run yarn and npm commands after disabling the "Use Rosetta for x86/amd64 emulation on Apple Silicon" option in Docker settings. Not sure if this is a fix. |
I'm encountering the same issue: docker build hangs. (Why do this? I want to test an image on my Mac laptop that will eventually run in an amd64 environment, and one apt dependency does not have an arm64 version.) Workarounds I found:
Dockerfile
package.json
index.js
Console output
|
Same issue affecting npm and yarn when using |
Got the same issue; you can easily reproduce it with the commands below.
Then it gets stuck at installation forever. I ran tests on the platforms below.
|
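The exact commands from that comment weren't captured above. As a rough sketch of the kind of reproduction described throughout this thread (the image tag, platform flag, and installed package are assumptions, not the commenter's exact setup):

```dockerfile
# Hypothetical reproduction sketch; the original commands were not captured.
# Build for an emulated platform, e.g.:
#   docker build --platform linux/amd64 -t npm-hang-test .
# (on Apple Silicon this forces x86 emulation, which is where the hang shows up)
FROM node:20
WORKDIR /app
RUN npm init -y && npm install express
```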
My workaround at the moment is to stay on Node 18.x. |
I also have the same issue, and reverted to Node 18.x for my docker builds. |
It seems that this is a duplicate of #1798? I see the same sequence of |
Upgrading from Docker Desktop 4.21 to 4.26.1 (Latest at the time of this writing) seems to have fixed this problem for me on an M2 Mac. |
# linux/arm/v7 arm32 is not supported by node20 nodejs/docker-node#1946
Trigger: What triggered this problem for me was updating my
Solution: The key solution for me was using
Details: Here are some additional details I gathered before finding the final solution, in case it's helpful for others, or for debugging the root problem.
Sorta Solutions: These attempts somewhat worked, but not in a satisfactory way...
Non-Solutions: These attempts didn't work...
|
Using |
My finding is that it eventually goes through the installation process, but hangs for 2-5 minutes. In my build process there are two node applications installed in node:20; one goes through fine, the other hangs. Both have lockfileVersion 3. The only difference I can find is that the non-functioning one is defined as "type": "module" in package.json. |
* chore: Upgrade stack versions * Revert node upgrade See nodejs/docker-node#1946
It's still a thing on node 22 :/ |
FYI the issue is in npm and node (not qemu): after 10.4.0, npm runs a node debugging method once per resolved package. This debugging method takes very long after HTTP requests have been made in node, see npm/cli#4028 (comment). The hardest part of locating this issue is now done and a PR is opened, so hopefully a fix will be released soon |
On Windows with WSL2 installed, changing the builder in Docker settings to "default" and not using the WSL builder seems to help with the issue. Not sure if it's an actual solution. |
I was able to get node:lts to get past this hang by adding
I hope you found the fix @Tofandel, but why would adding |
Did you try with npm 10.8.2 and node 22.11.0 instead of switching both versions at the same time? Or did you try node 22.11.0 with npm 10.3.0?

Basically, the root of the issue is in node's getReport, which runs reverse DNS queries in a synchronous manner for all open handles (think fetch). npm had the good idea to introduce glibc detection using node's getReport... in a loop... resulting in extremely long delays in some environments where DNS resolution is not cached or not fast.

The network host mode bypasses the DNS resolution of the docker image, which apparently is slow (this is the only thing I didn't figure out: why some are super fast and some so slow; for example, Windows takes 5s for a uv_getnameinfo call but WSL only 2s).

It's possible the node version simply introduces variance in which handles are kept open after a request, and thus alleviates the issue |
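For anyone who wants to see the effect described above first-hand, here is a minimal timing sketch. It is an illustrative assumption, not npm's actual code: it simply calls process.report.getReport() before and after an HTTP request leaves a socket open, which is the pattern that reportedly makes npm >= 10.4.0 crawl when reverse DNS is slow or uncached.

```js
// Minimal sketch (assumption for illustration, not npm's code): time
// process.report.getReport(), which walks every open libuv handle and,
// per the discussion above, performs synchronous reverse-DNS lookups
// for open sockets.
const https = require('https');

function timeGetReport(label) {
  const start = process.hrtime.bigint();
  process.report.getReport(); // synchronous; inspects all open handles
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: getReport() took ${ms.toFixed(1)} ms`);
}

timeGetReport('before any request');

// Keep a socket to the npm registry open while calling getReport() again.
https.get('https://registry.npmjs.org/', (res) => {
  timeGetReport('with an open socket'); // slow when reverse DNS is slow or uncached
  res.resume();
  res.on('end', () => process.exit(0));
});
```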
I found out that with Node.js 22.7 this error doesn't occur. I faced the issue using node:22-alpine as the base image for my Next.js app. As they don't keep old versions of it (they overwrite the node:22-alpine tag to point at the most recent minor version of Node), node:22-alpine currently resolves to Node.js 22.11, which does have the bug. I had to use their Dockerfile for Alpine 3.20 and Node.js 22.7, then used that image, built locally, as the base for my build process, and it worked like a charm. Build time went down from 28 min to 1:50 min. |
Downgrading node is a bad idea (especially 22.7, which is known to have a big buffer bug); it only works because doing that also changes the npm version. Simply use the latest node but downgrade npm with

Basically, what I found is that the bug will either occur in 10.4.0 or 10.8.3 depending on your actual dependencies, because |
@Tofandel Good call. I'll give it a try. Thanks Edit: It worked. Now I'm using Node 22.11 with NPM 10.3.0 BTW I'll keep my custom alpine build to make sure I keep on 22.11 and prevent unintended upgrades to potentially unstable minor versions. Thanks again! |
If at all possible I recommend switching to bun for your install step. Even if you need to use node, it can save a lot of time in the install step to use bun. I had the same issue with installs taking forever. Switching to bun made the install time go from 57 seconds to 2 seconds. |
@xeon826 Can you elaborate? |
The issue comes from NPM >=10.4.0
|
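The actual snippet from the comment above didn't survive in this text, but the workaround that several replies below confirm is pinning npm below 10.4.0 before installing dependencies. As a hedged sketch (the app layout and CMD are placeholders, not taken from this thread), that looks roughly like:

```dockerfile
# Sketch of the npm-downgrade workaround discussed in this thread.
# Assumptions: the package.json/index.js layout and CMD are placeholders.
FROM node:20

WORKDIR /app

# node:20 ships npm >= 10.4.0, which triggers the slow getReport calls;
# pin npm to 10.3.0 before running the install.
RUN npm install -g npm@10.3.0

COPY package*.json ./
RUN npm install

COPY . .
CMD ["node", "index.js"]
```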
Thanks @diego-coba ! I will give it a try. I wanted to add that I only experience this issue when |
Downgrading the npm version to 10.3.0 as suggested worked for me. Thanks |
I can confirm that downgrading my node image from node:lts-slim to node:22.9-slim fixed the issue for me. At the time of writing node:lts-slim was resolving to node:22.11-slim. I did not try node:22.10-slim. The issue was not occurring on OSX (orbstack+arm64) image builds but was happening for me on linux/amd64 builds running on a linux/amd64 system. |
I also downgraded npm to 10.3.0 and that seems to have fixed the problem. |
I had the same issue building for ARM7 and ARM64. |
Same here! Downgrading to npm v10.3.0 does fix the issue. |
This was fixed in npm 10.9.1 (via npm/npm-install-checks#120) |
I just had to cancel a cross-platform docker build for armv7 after 4 hours. I used alpine:edge with npm 10.9.1 as the builder and npm install got stuck. So maybe there is still a qemu problem? |
You would need to debug this further, adding timings and debug logging to your task; there are still a few things that can make npm freeze. One is a kernel issue: amazonlinux/amazon-linux-2023#840 (comment) (you can try the workaround and see if that helps). Another possibility is that one of your dependencies is building something and that is what's getting stuck. You will need to find the cause or do some debugging before we can be of further help |
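For anyone following that advice, a hedged example of adding more visibility to the install step; these are standard npm flags, not something prescribed in this thread:

```dockerfile
# Run the install with verbose logging and timing data so the stall point is visible.
# --timing writes per-step timing files alongside npm's debug logs in the npm cache's
# _logs directory; --foreground-scripts surfaces output from dependency build scripts,
# which helps distinguish an npm stall from a dependency stuck compiling under emulation.
RUN npm install --loglevel verbose --timing --foreground-scripts
```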
I have set up a package.json with no packages. Using alpine:edge with npm 10.9.1 and target x64, the docker build finishes within seconds. With target armv7, npm install makes no progress even after 1 hour. With alpine:3.18.9, npm install only takes a few seconds to complete with the armv7 target. The first failing version is alpine:19.1.0, which uses npm 10. The mentioned workaround has no effect. |
Exact same symptoms here. Any of these 3 independent workarounds seems to work for me (tested multiple times):
|
Environment
Expected Behavior
npm install should start. I've tested this with the node:18 image instead and it works. node:20-slim hangs too.
Current Behavior
npm install does not start and hangs.
I've tried other npm commands like
npm cache clean --force
but those get stuck too.

Possible Solution
Steps to Reproduce
I'm using the Dockerfile from the official Node.js example:
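The Dockerfile itself wasn't captured in this text. As an assumption (not the reporter's exact file), the official "Dockerizing a Node.js web app" example is roughly this shape, with the RUN npm install step being where the build hangs:

```dockerfile
# Rough sketch of the official Node.js Docker example (assumed, not the
# reporter's exact file); RUN npm install is the step that hangs.
FROM node:20

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 8080
CMD ["node", "index.js"]
```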
Additional Information