Timeout abort can leave process(es) running in the background #3077
I also noticed something similar. I think there was a comment in another issue that 100% CPU usage leads to `process.exit` being overridden by execa 🤔 I am currently on my phone, so I cannot provide more information right now, but we should definitely look into it.
@NuroDev as a workaround for now you can use:

```ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    poolMatchGlobs: [
      // This test prevents worker_thread from terminating
      ["**/spacex/tests/starlink.test.ts", "child_process"],
    ],
  },
});
```

This issue is kind of a duplicate of #2008, but the reproduction case here is the best one so far. I can reproduce the process hang here on every run, and there are not that many test cases in total. Earlier I used …. I can look into this more later. But likely something in the ….
I was able to reproduce this issue using just Node APIs. I didn't look into their codebase that much, but there seems to be some native code: https://github.com/nodejs/undici/tree/main/deps/llhttp/src
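The actual snippet from that repro was not preserved in this copy of the thread; a minimal sketch of what a Node-only reproduction could look like (file name, URL, and timing are assumptions, not the original code) is:

```ts
// repro.mts — hypothetical sketch: terminate a worker_thread while a
// fetch (undici) request is still in flight, using only Node built-ins.
import { Worker } from "node:worker_threads";

// The worker fires a fetch and deliberately never awaits its completion.
const worker = new Worker(
  `fetch("https://example.com").catch(() => {});`,
  { eval: true },
);

// Terminate the worker shortly after the request starts. On affected Node
// versions the parent process may keep running (sometimes at 100% CPU)
// instead of exiting cleanly.
setTimeout(async () => {
  await worker.terminate();
  console.log("worker terminated, process should now exit");
}, 100);
```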
Encountered the same issue on Node 20. undici seems to be the cause; stubbing the global fetch with node-fetch resolved the issue.
I seem to have the same issue on GitHub Actions. So far no problem on Windows.
Has there been any further investigation into the root cause of this? I am running into it on Azure DevOps as well, but the undici library mentioned above isn't used anywhere in the project, so I think it's more generic. I'm trying the above workaround to see if it helps, but curious whether there has been any further insight into what might cause this.
I've only reported the issue on the Node.js issue board, but that's it. Note that this also happens with native …. I've only seen reproduction setups where ….
I think I am running into this issue as well. Unfortunately for my project, I am stuck between a rock and a hard place: I will either have to live with the hanging issue until it gets fixed upstream, or downgrade to MSW 1.x. It appears that MSW 2.0 doesn't support polyfilling fetch and requires Node's native fetch API, so at least in my scenario fixing it isn't as easy as replacing Node's native fetch with node-fetch. MSW has stated that they will not support polyfilling fetch.
Node's native fetch is implemented by undici. On Vitest you can use ….
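The specific API mentioned in that comment was lost in this copy of the thread; one way to stub the global fetch from a Vitest setup file (an illustration assuming node-fetch is installed, using Vitest's `vi.stubGlobal`) looks like:

```ts
// vitest.setup.ts — sketch of the node-fetch workaround discussed above;
// not necessarily the exact approach the commenter had in mind.
import { vi } from "vitest";
import nodeFetch from "node-fetch";

// Replace undici's global fetch with node-fetch for every test file
// loaded through this setup file.
vi.stubGlobal("fetch", nodeFetch);
```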
In my case, node-fetch is already stubbed, as it is for every project using happy-dom with vitest, since happy-dom uses node-fetch under the hood. The issue is present nevertheless, so node-fetch is not the solution.
```ts
// @vitest-environment happy-dom
import { test } from "vitest";
import nodeFetch from "node-fetch";

test("fetch", () => {
  console.log(fetch === nodeFetch);
  // stdout | tests/happy-dom.test.ts > fetch
  // false
});
```

@raegen I'm happy to look into reproduction cases where using …. As the snippet above shows, with happy-dom the global fetch is not node-fetch.
I also created a simple repo to try to reproduce this issue. In my experiment, I used the library …. What I'm sure of is that without ….
@InfiniteXyy I don't think that is related to this issue, as it's not using ….
+1, I am using the Node.js fetch API. Most cases work well, but one of them runs into this issue. It doesn't happen 100% of the time.
I ran into this issue today, and using `pool: 'forks'` worked around it:

```ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: './src/test/setup.tsx',
    // Setting pool='forks' is preventing this issue https://github.com/vitest-dev/vitest/issues/3077
    pool: 'forks',
  },
});
```
* update vitest and vue test-utils
* fix vue-tsc errors
* try to prevent issue vitest-dev/vitest#3077 by using vitest config "pool" set to "forks" instead of default "threads"
Some seemingly good news from upstream: nodejs/undici#2026 (comment)
CI sometimes still hangs, trying to run without threads:
- vitest-dev/vitest#2008
- vitest-dev/vitest#3077

In earlier vitest versions the option was --no-threads, but this changed to --pool=forks in:
- https://github.com/vitest-dev/vitest/releases/tag/v1.0.0-beta.0
- https://github.com/vitest-dev/vitest/blob/main/docs/guide/migration.md#pools-are-standardized-4172

Changed vitest config, so that CI and make webtest are both using a single thread.
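For reference, a sketch of what such a single-process forks setup can look like with the standardized pools API in Vitest 1.x (an illustration, not the exact config from that commit):

```ts
// vitest.config.ts — run all tests in one forked child process instead of
// worker threads. Sketch only.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    // Replaces the pre-1.0 `--no-threads` behaviour.
    pool: 'forks',
    poolOptions: {
      forks: {
        // Run all tests inside a single child process.
        singleFork: true,
      },
    },
  },
});
```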
Describe the bug
Originally I was running 0.29.2 and ran into this frequently, but since upgrading to 0.29.7 I have only run into it a few times. In summary: I have a monorepo with lots of packages and lots of tests. On rare occasions a run would get hung/stuck and then print `close timed out after 10000ms`, followed by `Failed to terminate worker while running [PATH_TO_TEST]`.

When this happens I, like many, would try to abort the test run using `^C`. However, when doing so Vitest seems to leave the hung Node.js process running, pegged at 100% CPU usage. I mainly noticed this issue after my MacBook battery dropped from 100% to 9% in 2 hours and I realised a handful of Node.js processes were left running from the night before.

I assume this is either because Vitest is not correctly cleaning up processes when the CLI gets hung, or because the process is hung and Vitest is not able to kill it.
Reproduction
Reproduction has been relatively inconsistent so far, but I was able to record the issue in a few steps:

1. Wait until `close timed out after 10000ms` gets printed to the console
2. Abort the run with `^C`
If you want to clone the source to run this exactly how I am, I just open-sourced the project: nurodev/untypeable
(Screen recording attached: Kapture.2023-03-24.at.21.41.43.mp4)
System Info
Used Package Manager
yarn