Shared worker feedback & support #2703
Replies: 9 comments · 25 replies
-
Sharing use-case here by request 🙂 I'm working on a public template where I need to test both dev and build modes. I have a folder structure as such …, and these npm scripts:

```json
"test": "npm run test:dev && npm run test:build",
"test:dev": "node test/wrapper dev ava test/{common,dev}/**/*.test.js",
"test:build": "node test/wrapper build ava test/{common,build}/**/*.test.js"
```

The core of the wrapper:

```js
// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

async function wrapSpawn() {
  const teardown = await setups[mode]()
  spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
  teardown()
}
```

The entire wrapper script:

```js
const { spawnSync, spawn } = require('child_process')
const { checkPort, wait } = require('./utils')
const fkill = require('fkill')

const npm = /^win/.test(process.platform) ? 'npm.cmd' : 'npm'
const npx = /^win/.test(process.platform) ? 'npx.cmd' : 'npx'

// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

const setups = {
  async dev() {
    const child = spawn(npm, ['run', 'dev'])
    await checkPort(5000, 15000)
    await wait(500)
    return () => fkill(child.pid, { tree: true, force: true })
  },
  async build() {
    spawnSync(npm, ['run', 'build'])
    const child = spawn(npm, ['run', 'serve'])
    await checkPort(5000, 15000)
    return () => fkill(child.pid, { tree: true, force: true })
  }
}

async function wrapSpawn() {
  const teardown = await setups[mode]()
  spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
  teardown()
}

wrapSpawn()
```

I'm somewhat happy with the script in its current state, though it's not 100% self-explanatory. I had previously used shared context and being able to use …. For my use case the best option would be an argument that lets me provide a setup script. (I guess I could DIY that for now.)

The setup file:

```js
// test/_dev-setup.js
module.exports = async function () {
  const handle = await createMyDevServer()
  // return a teardown function
  return function () {
    handle.kill()
  }
}
```
-
@jakobrosenberg so the way I was thinking this might work: …

What do you think?
-
Sounds good. What are your thoughts on being able to override the plugins through the CLI (…)?

```js
// myplugin.js
module.exports = payload => {
  payload.foo = 'bar'
  return process.argv.includes('--my-custom-flag')
    ? require('./alt-plugin')(payload)
    : require('./main-plugin')(payload)
}
```
-
I'm reluctant to add CLI options for things that you would apply on every test run; I'd rather only expose flags for the stuff you need to do occasionally to debug a test run. Passing arguments to plugins that have been registered through the configuration could be interesting, though.
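One hypothetical shape for that idea: pair each plugin with its arguments in the configuration file. Nothing below is an existing AVA option; the `plugins` key is imagined purely to illustrate the suggestion:

```javascript
// ava.config.js — hypothetical sketch, not an existing AVA option
export default {
  files: ['test/**/*.test.js'],
  // imagined: register shared-worker plugins with per-plugin arguments,
  // so the plugin receives `{ mode: 'dev', port: 5000 }` instead of
  // having to parse process.argv itself
  plugins: [
    ['./myplugin.js', { mode: 'dev', port: 5000 }],
  ],
};
```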
-
Not too sure if this is the right place for this or if a new issue would be better, but I'll start here! I'll start this all off with a big caveat that I don't have much experience with workers, memory sharing, or any of the other nitty-gritty stuff involved here.

My ultimate goal is to create a …. To run my app in …. This all works nicely and, importantly, it avoids serializing the contents of the JS file. This works fine but has a scaling issue, as each test suite/process needs to build the app. I wondered if workers might help solve this by building once and providing the output to all tests that need it.

I've put together a rudimentary example using workers that does indeed reduce the runs of …. This is where my limited knowledge of this type of thing starts to fade. It seems, based on the docs for `postMessage`, that the expected solution here would be to use ….

Sorry for the long-winded backstory! Thanks for all your work.
-
@nathanforce if you could repost that as a Discussion, that would be more appropriate. Happy to explore this with you.
-
I'm a little torn on what I think the behavior should be, though as a baseline suggestion I think the behavior should be documented. It seems like maybe the default should be `NODE_ENV="test"` for consistency, but whatever the default, I think this should be customizable (without having to write to `process.env` from within the worker). This would probably mean adding a property to the `SharedWorker` constructor, which creates a problem similar to the existing `initialData` property.
-
I've been experimenting with running Puppeteer in a shared worker (and having it connect to Fastify in another one). It's been smooth sailing so far, but there's one thing I can't figure out: robust disposal. The Node docs mention that there's no ….

What I've done so far is use the message bus to have workers accept a dispose command, which I trigger from the teardown hook on the outside. However, if the message bus itself gets stuck waiting for a reply, or if the process gets killed, that does nothing, since the message is never received. In my case, if disposal doesn't run, I get a headless Chrome process that I need to kill manually. Similarly, I use ngrok to add SSL to my Fastify server, and that also spawns a process that doesn't get killed automatically.

It'd be great if AVA itself would let me register a (guaranteed) teardown function inside a worker as well. As far as I can tell, that's not available from AVA today, and I also wouldn't know what underlying API to use. Any guidance around this? Thanks!
-
Is there a shared worker to run a setup operation once, with all test files waiting for it to complete before running their tests? This task configures something in the host environment.

An unmanaged semaphore can ensure that only one worker does the setup task. A lock ensures all other workers wait for it to complete. The missing piece is error reporting. If the setup task fails, all workers should error. This could be as simple as:

```js
import {setValue, getValue} from '@ava/shared-values';

const {errorMessage} = getValue('setup-task-result');
```
-
We're pretty excited about shared workers! But we need more real-world experience in building AVA plugins before we can make it generally available. Please give feedback and build plugins. We'd be more than happy to promote them.
Not sure what to build? Previously folks have expressed a desire for mutexes, managing Puppeteer instances, starting (database) servers and so forth.
We could also extend the shared worker implementation in AVA itself. Perhaps so you can run code before a new test run, even with watch mode. Or so you can initialize a shared worker based on the AVA configuration, not when a test file runs.
Please comment here with ideas, questions and feedback.