This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Shared worker feedback & support #2605
Sharing my use-case here by request 🙂 I'm working on a public template where I need to test both the dev and build outputs. I have a folder structure as such [tree not captured here], with these npm scripts:

```json
"test": "npm run test:dev && npm run test:build",
"test:dev": "node test/wrapper dev ava test/{common,dev}/**/*.test.js",
"test:build": "node test/wrapper build ava test/{common,build}/**/*.test.js"
```

The core of the wrapper:

```js
// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

async function wrapSpawn() {
  const teardown = await setups[mode]()
  spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
  teardown()
}
```

The entire wrapper script:

```js
const { spawnSync, spawn } = require('child_process')
const { checkPort, wait } = require('./utils')
const fkill = require('fkill')

const npm = /^win/.test(process.platform) ? 'npm.cmd' : 'npm'
const npx = /^win/.test(process.platform) ? 'npx.cmd' : 'npx'

// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

const setups = {
  async dev() {
    const child = spawn(npm, ['run', 'dev'])
    await checkPort(5000, 15000)
    await wait(500)
    return () => fkill(child.pid, { tree: true, force: true })
  },
  async build() {
    spawnSync(npm, ['run', 'build'])
    const child = spawn(npm, ['run', 'serve'])
    await checkPort(5000, 15000)
    return () => fkill(child.pid, { tree: true, force: true })
  }
}

async function wrapSpawn() {
  const teardown = await setups[mode]()
  spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
  teardown()
}

wrapSpawn()
```

I'm somewhat happy with the script in its current state, though it's not 100% self-explanatory. I had previously used shared context and being able to use […]. For my use case the best option would be an argument that lets me provide a setup script. (I guess I could DIY that for now.)
The setup file:

```js
// test/_dev-setup.js
module.exports = async function () {
  const handle = await createMyDevServer()
  // return a teardown function
  return function () {
    handle.kill()
  }
}
```
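If AVA grew a setup-script argument like the one proposed above, the runner side could be quite small. This is a hypothetical sketch (`runWithSetup` and its parameters are illustrative, not an AVA API): load the module, await its exported setup, run the tests, and always call the returned teardown.

```javascript
// Hypothetical handling of a --setup flag: the setup function returns a
// teardown function, which runs even if the test run throws.
async function runWithSetup(setupModule, runTests) {
  const teardown = await setupModule()
  try {
    return await runTests()
  } finally {
    await teardown()
  }
}
```

The runner would pass something like `require(path.resolve(flags.setup))` as `setupModule`; the `try`/`finally` guarantees the dev server from the setup file above is killed even on a failing run.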
@jakobrosenberg so the way I was thinking this might work: […]

What do you think?
Sounds good. What are your thoughts on being able to override the plugins through the CLI? Something like:

```js
// myplugin.js
module.exports = payload => {
  payload.foo = 'bar'
  return process.argv.includes('--my-custom-flag')
    ? require('./alt-plugin')(payload)
    : require('./main-plugin')(payload)
}
```
I'm reluctant to add CLI options for things that you would apply on every test run; I'd rather only expose flags for the stuff you need to do occasionally to debug a test run. Passing arguments to plugins that have been registered through the configuration could be interesting, though.
Not too sure if this is the right place for this or if a new issue would be better, but I'll start here! I'll start off with a big caveat: I don't have much experience with workers, memory sharing, or any of the other nitty-gritty stuff involved here.

My ultimate goal is to create a […]. To run my app in […]. This all works nicely and, importantly, it avoids serializing the contents of the JS file. This works fine but has a scaling issue, as each test suite/process needs to build the app. I wondered if workers might help solve this by building once and providing the output to all tests that need it.

I've put together a rudimentary example using workers that does indeed reduce the runs of […]. This is where my limited knowledge of this type of thing starts to fade. It seems, based on the docs for postMessage, that the expected solution here would be to use […].

Sorry for the long-winded backstory! Thanks for all your work.
@nathanforce if you could repost that as a Discussion, that would be more appropriate. Happy to explore this with you.
I'm a little torn on what I think the behavior should be, though as a baseline suggestion I think the behavior should be documented. It seems like maybe the default should be `NODE_ENV="test"` for consistency, but whatever the default, I think this should be customizable (without having to write to `process.env` from within the worker). This would probably mean adding a property to the `SharedWorker` constructor, which creates a problem similar to the existing `initialData` property.
We're pretty excited about shared workers! But we need more real-world experience in building AVA plugins before we can make it generally available. Please give feedback and build plugins. We'd be more than happy to promote them.
Not sure what to build? Previously folks have expressed a desire for mutexes, managing Puppeteer instances, starting (database) servers and so forth.
We could also extend the shared worker implementation in AVA itself. Perhaps so you can run code before a new test run, even with watch mode. Or so you can initialize a shared worker based on the AVA configuration, not when a test file runs.
Please comment here with ideas, questions and feedback.
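To make the mutex idea concrete, here is a minimal in-process sketch (illustration only; a real shared-worker plugin would broker the same acquire/release handshake across AVA's test processes over its message channel):

```javascript
// Promise-chain mutex: acquire() resolves with a release function once all
// earlier holders have released.
class Mutex {
  constructor() {
    this.last = Promise.resolve()
  }
  acquire() {
    let release
    const next = new Promise(resolve => { release = resolve })
    const acquired = this.last.then(() => release)
    this.last = next
    return acquired // resolves to a () => void release function
  }
}

// Usage helper: only one caller runs its critical section at a time.
async function withLock(mutex, fn) {
  const release = await mutex.acquire()
  try {
    return await fn()
  } finally {
    release()
  }
}
```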