This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


Shared worker feedback & support #2605

Closed
novemberborn opened this issue Nov 1, 2020 · 7 comments

@novemberborn (Member) commented Nov 1, 2020

We're pretty excited about shared workers! But we need more real-world experience building AVA plugins before we can make them generally available. Please give feedback and build plugins. We'd be more than happy to promote them.

Not sure what to build? Previously, folks have expressed a desire for mutexes, managing Puppeteer instances, starting (database) servers, and so forth.
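To illustrate the mutex idea: at its core, the shared worker would keep a FIFO queue of waiters per lock and hand the lock to one test worker at a time. Here's a minimal, framework-agnostic sketch of that queueing logic (the Mutex class and its names are illustrative, not part of AVA's API; a real plugin would drive this via AVA's message passing):

```javascript
// Hypothetical sketch: a shared worker could keep one such queue per
// lock name and grant the lock to test workers in FIFO order.
class Mutex {
  constructor() {
    this.locked = false;
    this.waiters = [];
  }

  // Calls fn(release) once the lock is held; fn must call release() when done.
  lock(fn) {
    if (this.locked) {
      this.waiters.push(fn);
      return;
    }
    this.locked = true;
    fn(() => this.release());
  }

  release() {
    const next = this.waiters.shift();
    if (next) {
      next(() => this.release());
    } else {
      this.locked = false;
    }
  }
}

// Two "test files" contending for the same resource:
const order = [];
const mutex = new Mutex();
mutex.lock(release => {
  order.push('first acquired');
  mutex.lock(release2 => { // queued until the first holder releases
    order.push('second acquired');
    release2();
  });
  order.push('first still holds the lock');
  release();
});
console.log(order.join(' | '));
// first acquired | first still holds the lock | second acquired
```

The second lock request is queued rather than granted immediately, which is exactly the serialization a mutex plugin would need to provide across test files.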

We could also extend the shared worker implementation in AVA itself. Perhaps so you can run code before a new test run, even with watch mode. Or so you can initialize a shared worker based on the AVA configuration, not when a test file runs.

Please comment here with ideas, questions and feedback.

@jakobrosenberg

Sharing use-case here by request 🙂

I'm working on a public template where I need to test both dev and build commands.

I have a folder structure like this:

test/
  build/
  common/
  dev/

and these package.json test scripts:

"test": "npm run test:dev && npm run test:build",
"test:dev": "node test/wrapper dev ava test/{common,dev}/**/*.test.js",
"test:build": "node test/wrapper build ava test/{common,build}/**/*.test.js"

The core of the test/wrapper script:

// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

async function wrapSpawn() {
    const teardown = await setups[mode]()
    spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
    await teardown()
}
The entire wrapper script:
const { spawnSync, spawn } = require('child_process')
const { checkPort, wait } = require('./utils')
const fkill = require('fkill')

const npm = /^win/.test(process.platform) ? 'npm.cmd' : 'npm'
const npx = /^win/.test(process.platform) ? 'npx.cmd' : 'npx'

// assign `dev` or `build` to mode and `ava` to cmd
const [mode, cmd, ...args] = process.argv.slice(2)

const setups = {
    async dev() {
        // spawn returns a ChildProcess synchronously; no await needed
        const child = spawn(npm, ['run', 'dev'])
        await checkPort(5000, 15000)
        await wait(500)
        return () => fkill(child.pid, { tree: true, force: true })
    },
    async build() {
        spawnSync(npm, ['run', 'build'])
        const child = spawn(npm, ['run', 'serve'])
        await checkPort(5000, 15000)
        return () => fkill(child.pid, { tree: true, force: true })
    }
}

async function wrapSpawn() {
    const teardown = await setups[mode]()
    spawnSync(npx, [cmd, ...args], { stdio: 'inherit' })
    await teardown()
}

wrapSpawn().catch(error => {
    console.error(error)
    process.exit(1)
})

I'm somewhat happy with the script in its current state. It's not 100% self-explanatory in package.json, but it works flawlessly.


I had previously used the shared context, and being able to use reserve was great for spinning up a single server. However, I couldn't get the teardowns working: either the server shut down along with the test file that spawned it, or it didn't shut down at all.


For my use case, the best option would be an argument that lets me provide a setup script. (I guess I could DIY that for now.)

ava test/dev/*.js --setup test/_dev-setup.js
ava test/build/*.js --setup test/_build-setup.js

The setup file:

//test/_dev-setup.js
module.exports = async function() {
  const handle = await createMyDevServer()
  
  // return a teardown function
  return function(){
    handle.kill()
  }
}
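The DIY version of that contract can be expressed as a small helper that guarantees the teardown runs even when the run fails. This is a hypothetical sketch (withSetup is not an AVA API; a real version would await an async setup and spawn ava as in the wrapper script above):

```javascript
// Hypothetical helper mirroring the proposed --setup contract:
// setup() returns a teardown function, and teardown always runs,
// even if the run itself throws.
function withSetup(setup, run) {
  const teardown = setup();
  try {
    return run();
  } finally {
    if (typeof teardown === 'function') teardown();
  }
}

// Demonstrate the ordering guarantee:
const events = [];
const result = withSetup(
  () => {
    events.push('setup');
    return () => events.push('teardown');
  },
  () => {
    events.push('run');
    return 'ok';
  }
);
console.log(events.join(','), result); // setup,run,teardown ok
```

The try/finally shape is the important part: it is what the wrapper script above lacks if the spawned test run throws before teardown() is reached.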

@novemberborn (Member, Author)

@jakobrosenberg so the way I was thinking this might work:

  • We add support in AVA config files for loading plugins
  • We allow plugins to register tasks that run before and after test runs
  • (And they'd be aware of watchers, to perhaps keep resources alive for longer)
  • The message passing capabilities already available would allow you to load the plugin in a test file and access URLs and whatnot

What do you think?
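Sketching the first bullet, a plugin registration in the config file might look something like this. To be clear, this is purely hypothetical: AVA has no plugins option or beforeRun/afterRun hooks today; the shape below just makes the proposal concrete.

```javascript
// ava.config.js — hypothetical shape, not an implemented AVA option.
export default {
  plugins: ['./plugins/dev-server.js']
};

// ./plugins/dev-server.js — hypothetical hook names:
//   export async function beforeRun() { /* start the server */ }
//   export async function afterRun() { /* stop the server */ }
// Test files would then use the existing message passing to ask the
// plugin for URLs and whatnot.
```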

@jakobrosenberg

Sounds good.

What are your thoughts on being able to override the plugins through the CLI (npx ava path/to/test --plugin path/to/plugin.js) and/or dynamic higher-order components? (Am I using this term correctly?)

//myplugin.js
module.exports = payload => {
  payload.foo = 'bar'
  return process.argv.includes('--my-custom-flag')
    ? require('./alt-plugin')(payload)
    : require('./main-plugin')(payload)
}

@novemberborn (Member, Author)

I'm reluctant to add CLI options for things that you would apply on every test run; I'd rather only expose flags for the stuff you need to do occasionally to debug a test run.

Passing arguments to plugins that have been registered through the configuration could be interesting though.

@nathanforce

Not too sure if this is the right place for this or if a new issue would be better, but I'll start here!

I'll start this all off with a big caveat: I don't have much experience with workers, memory sharing, or any of the other nitty-gritty stuff involved here.

My ultimate goal is to create a jsdom instance for each test in my frontend application and avoid polluting the global namespace. I want to do this so that I can benefit from Ava's concurrency.

To run my app in jsdom, I'm bundling the whole thing up using esbuild. When a <script> on the page requests my app bundle, jsdom responds with a Buffer created from esbuild's OutputFile.contents, a Uint8Array.

https://github.com/evanw/esbuild/blob/4e336e4f313a1f8a4bd6032f037d23f725f52e75/lib/types.ts#L101

This all works nicely and, importantly, it avoids serializing the contents of the JS file.
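For reference, the no-copy wrapping looks something like this (the Uint8Array below stands in for esbuild's OutputFile.contents). The subtlety is that Buffer.from(typedArray) copies the bytes, while the (arrayBuffer, byteOffset, length) overload creates a view over the same memory:

```javascript
// Stand-in for esbuild's OutputFile.contents (a Uint8Array):
const contents = new TextEncoder().encode('console.log("hello")');

// Buffer.from(contents) would COPY the bytes. This overload instead
// creates a Buffer view over the same underlying ArrayBuffer:
const body = Buffer.from(contents.buffer, contents.byteOffset, contents.byteLength);

console.log(body.toString());                 // console.log("hello")
console.log(body.buffer === contents.buffer); // true: shared memory, no copy
```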

It has a scaling issue, though: each test suite/process needs to build the app. I wondered if workers might help by building once and providing the output to all tests that need it.

I've put together a rudimentary example using workers that does indeed reduce the runs of esbuild to just one, but the test time is far slower than with multiple runs. My guess is that the communication between the workers is serializing the build output.

This is where my limited knowledge of this type of thing starts to fade. It seems based on the docs for postMessage that the expected solution here would be to use SharedArrayBuffer to avoid copying the data. Would it be possible to expose that API in Ava's shared worker API or is it purposefully not exposed?

Sorry for the long winded backstory! Thanks for all your work.

@novemberborn (Member, Author)

@nathanforce if you could repost that as a Discussion that would be more appropriate. Happy to explore this with you.

@djake commented Mar 5, 2021

NODE_ENV is set to test when running tests but not set when running a shared worker.

I'm a little torn on what I think the behavior should be, though as a baseline suggestion I think the behavior should be documented.

It seems like maybe the default should be NODE_ENV="test" for consistency, but whatever the default, I think this should be customizable (without having to write to process.env from within the worker). This would probably mean adding a property to the SharedWorker constructor, which creates a problem similar to the existing initialData property.

@avajs avajs locked and limited conversation to collaborators Mar 7, 2021
@novemberborn novemberborn unpinned this issue Oct 31, 2021

