Support multiple workers in a single Miniflare instance #7251
Conversation
🦋 Changeset detected

Latest commit: 127b741

The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
A wrangler prerelease is available for testing. You can install this latest build in your project with:

```shell
npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-wrangler-7251
```

You can reference the automatically updated head of this PR with:

```shell
npm install --save-dev https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/prs/7251/npm-package-wrangler-7251
```

Or you can use:

```shell
npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-wrangler-7251 dev path/to/script.js
```

Additional artifacts:

```shell
npx https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-create-cloudflare-7251 --no-auto-update
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-kv-asset-handler-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-miniflare-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-pages-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-vitest-pool-workers-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-workers-editor-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-workers-shared-7251
npm install https://prerelease-registry.devprod.cloudflare.dev/workers-sdk/runs/12068817888/npm-package-cloudflare-workflows-shared-7251
```

Note that these links will no longer work once the GitHub Actions artifact expires.
Please ensure constraints are pinned, and
What would we need in order for this to work without the (not saying we need to solve for it right now, just curious what config decision we're facing)
Yeah, something like
The problem with adding a new field to services in wrangler.toml is that you're then encoding this idea of a primary worker, whereas you might want to start up one of the bound workers as a primary worker to work with it directly. It might also be useful to think about a true multi-worker config, where you don't have a single primary worker, and where you can start any of the defined workers as the primary. E.g.:

```jsonc
{
	...
	workers: [
		{
			name: "a",
			main: "../a/a.ts",
			bindings: { c: { "type": "worker", name: "c" } },
		},
		{
			name: "b",
			main: "../b/b.ts",
			bindings: { c: { "type": "worker", name: "c" } },
		},
		{
			name: "c",
			main: "../c/c.ts",
		},
	]
}
```

```shell
# starts worker "a" at http://localhost:8787 with "c" accessible only through RPC
npx wrangler dev --worker a
# starts worker "b" at http://localhost:8787 with "c" accessible only through RPC
npx wrangler dev --worker b
# starts worker "c" at http://localhost:8787
npx wrangler dev --worker c
```

Multi-worker is a new paradigm for (I don't say the above to derail the PR - @penalosa's solution is a neat and flexible one that doesn't tie us to any future arrangement).
This is amazing work - very nice observation that we essentially already have this functionality in Miniflare and it just needed hooking up 👍
Let's start the conversation on whether or not users will still want/need workers to be accessed from distinct endpoints, or if the dev registry is something we could remove entirely (perhaps in v4?).
Might be good to test some failure cases here.
- What if a bound worker fails - do calls return the right response?
- What if the primary worker fails - do the bound workers shut down correctly?
```ts
if (Array.isArray(configPath)) {
```
I feel like we could clean things up if we just treated all scenarios as multi-worker. So instead of having a separate `MultiworkerRuntimeController`, just modify the `LocalRuntimeController` to handle multiple workers, and then cast the `configPath` to an array in the yargs definition.

```ts
const configs = Array.isArray(configPath) ? configPath : [configPath];
// Now no need to handle the single worker case. LocalRuntimeController assumes multi.
const runtime = new LocalRuntimeController(configs.length);
```

Or is there a reason to keep the single worker code path?
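As a runnable sketch of the normalization suggested above (the function name `normalizeConfigPaths` is invented for illustration; in Wrangler this would happen in the yargs definition):

```javascript
// yargs returns a string when -c is passed once and a string[] when it is
// repeated; coercing to an array up front lets every downstream code path
// assume the multi-worker shape.
function normalizeConfigPaths(configPath) {
  return Array.isArray(configPath) ? configPath : [configPath];
}

// Single and repeated -c flags now take the same code path.
console.log(normalizeConfigPaths("wrangler.toml"));
console.log(normalizeConfigPaths(["a/wrangler.toml", "b/wrangler.toml"]));
```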
I went down this path in Merge Local & Multi controllers, but I'm starting to get a bit concerned about the potential for disrupting the normal `dev` flow, and I think it might end up being more complexity than it's worth. The single worker `dev` flow has two runtimes, remote & local, and the remote runtime doesn't (and will never) work for the multiworker use case. Because of that there are some differences in how these flows are set up (as well as the disabling of the dev registry in the multiworker flow, which involves disabling the registry as well as a change to how DOs & entrypoints are registered). It's not that much duplicated code, and I think it's actually clearer to keep them separate, and easier to reason about. What do you think?
Yeah, that sounds fair enough. Just for clarity, I assume the remote runtime will never be able to do this because it is always deploying a single worker.
> I assume the remote runtime will never be able to do this because it is always deploying a single worker

Yes, the edge preview system deploys a single worker, and there's no way to hook up multiple edge preview sessions into a mesh of services.[^1]

[^1]: Reliably, that is. We could probably do some stuff to make this work-ish with HTTP service bindings by writing worker middleware in Wrangler to inject some headers, but that would never work with RPC or cross-worker durable objects.
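To make the footnote's hypothetical workaround concrete, a header-injecting helper might look like the sketch below. The header name `X-Wrangler-Target-Worker` and the function are invented for illustration; nothing like this ships in Wrangler, and as the footnote notes it could only ever cover plain HTTP service bindings.

```javascript
// Tag an outgoing service-binding request with a routing header so a
// remote preview session could (in theory) dispatch it to the right
// worker. Uses the standard fetch Request/Headers globals (Node 18+).
function injectTargetHeader(request, targetWorker) {
  const headers = new Headers(request.headers);
  headers.set("X-Wrangler-Target-Worker", targetWorker);
  // Construct a copy of the request with the extra header attached.
  return new Request(request, { headers });
}

const original = new Request("https://example.com/api");
const tagged = injectTargetHeader(original, "c");
console.log(tagged.headers.get("X-Wrangler-Target-Worker")); // "c"
```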
Does this support Pages? I tried
```ts
if (getFlag("MULTIWORKER")) {
	const patchedConsoleFileToInject = path.join(
		tmpDir.path,
		"patch-console.js"
	);

	if (!fs.existsSync(patchedConsoleFileToInject)) {
		fs.writeFileSync(
			patchedConsoleFileToInject,
			fs.readFileSync(
				path.resolve(getBasePath(), "templates/patch-console.js")
			)
		);
	}

	inject.push(patchedConsoleFileToInject);
}
```
Can this be done inside middleware so that we don't have to hard-code the code path in the main bundle handler?
(sorry this wasn't in the first review)
```js
globalThis.console = new Proxy(globalThis.console, {
	get(target, p, receiver) {
		if (p === "log" || p === "debug" || p === "info") {
			return (...args) =>
				Reflect.get(target, p, receiver)(WRANGLER_WORKER_NAME, ...args);
		}
		return Reflect.get(target, p, receiver);
	},
});
```
Perhaps we could wrap the logger instead of patching the console method directly.
(sorry this wasn't in the first review)
I don't think I quite know what you mean—which logger?
This is the logger I was thinking of: https://github.com/cloudflare/workers-sdk/blob/3dce3881bdaf373aa9b2e52483e340ab8193151c/packages/wrangler/src/logger.ts
That's the Wrangler logger, whereas this code is patching the console global inside a Worker, which is slightly different
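The effect of that in-Worker patch can be illustrated with a self-contained sketch, applied to a stand-in console object rather than the real global. The worker name is hard-coded here as `"worker-a"`; in the real patch it comes from `WRANGLER_WORKER_NAME`.

```javascript
// Collect output so we can inspect what the patched console emits.
const lines = [];
const fakeConsole = { log: (...args) => lines.push(args.join(" ")) };

// Same Proxy shape as the template: prefix log/debug/info calls with the
// worker name so interleaved multi-worker output stays attributable.
const patched = new Proxy(fakeConsole, {
  get(target, p, receiver) {
    if (p === "log" || p === "debug" || p === "info") {
      return (...args) =>
        Reflect.get(target, p, receiver)("worker-a", ...args);
    }
    return Reflect.get(target, p, receiver);
  },
});

patched.log("ready on port", 8787);
console.log(lines[0]); // "worker-a ready on port 8787"
```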
@zhihengGet this does not support Pages, but there is a followup to get it working with Workers + static assets.
Really great work, thanks!
🎉 🎉 🎉
👏
This PR allows you to run multiple workers in a single Miniflare instance, allowing for cross-service RPC without the complexities of the dev registry. To try it out, pass multiple `-c` flags to Wrangler, i.e. `wrangler dev -c wrangler.toml -c ../other-worker/wrangler.toml`. The first config will be treated as the primary worker and will be exposed over HTTP as usual (`localhost:8787`), while the rest will be treated as secondary and will only be accessible via a service binding from the primary worker.
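As a concrete sketch of that setup, the primary worker's config would declare a service binding naming the secondary worker, so the secondary is reachable as `env.OTHER` from the primary. The worker names, paths, and binding name below are invented for illustration:

```toml
# ./wrangler.toml — primary worker, served at localhost:8787
name = "primary-worker"
main = "src/index.ts"

# Bind the secondary worker (started via the second -c flag) as env.OTHER
[[services]]
binding = "OTHER"
service = "other-worker"
```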