add support parallel builds #11984
Conversation
🦋 Changeset detected. Latest commit: 775b042. The changes in this PR will be included in the next version bump.
Force-pushed from 2a97510 to 5c36939.
At first I thought you were implementing the solution using worker threads, but your solution still uses the main thread. This feels more like a queue. Is my analysis incorrect?
Thanks! As we're in the middle of releasing the Astro 5 beta, we likely won't get to this right away, but we'll circle back in a couple of weeks and take a look. Note that we have had a few attempts to bring this in before, and it never quite worked out; it might be worth looking at the old PRs to see what went wrong. Also, generating pages isn't usually the slow part of builds, so we would love to see some examples of where this has big benefits. cc @bluwy who might have thoughts.
Yeah, this seems to still run on the main thread, but from what I understand it allows fetch calls from different pages to be executed in parallel? Usually for cases like this we suggest doing all the fetches beforehand (e.g. from a shared file, during build only); that way you're not blocked by the page rendering order and can cache things better if needed. Maybe that's the better approach here for your app? I think there's still some value in a `concurrency` option though.
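The fetch-beforehand approach suggested here can be sketched roughly as below. This is a hypothetical illustration, not Astro or PR code: `loadSharedData`, the cache, and the demo fetcher are all made-up names. The idea is to memoize the in-flight promise at module level, so every page shares one request regardless of render order.

```javascript
// Hypothetical sketch: do the fetching once, up front, and share the result
// between pages so rendering is never blocked on per-page network calls.
// `loadSharedData` is an illustrative helper, not an Astro API.
const cache = new Map();

function loadSharedData(key, fetcher) {
  // Store the promise itself so concurrent callers share one in-flight request.
  if (!cache.has(key)) {
    cache.set(key, fetcher());
  }
  return cache.get(key);
}

// Demo: two "pages" asking for the same data trigger a single fetch.
let fetchCount = 0;
const fetchPosts = async () => {
  fetchCount += 1;
  return ['post-1', 'post-2'];
};

Promise.all([
  loadSharedData('posts', fetchPosts),
  loadSharedData('posts', fetchPosts),
]).then(([a, b]) => {
  console.log(fetchCount, a === b); // both pages reuse the same fetch
});
```

In a real project the cache module would live in a shared file imported by every page, so the data is fetched once per build no matter how many pages render.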
Assuming I have 10 pages, page 1 through page 10: does the current program generate page 2 only after page 1 completes, page 3 after page 2 completes, and so on until page 10 is generated?
@bluwy could we maybe get an experimental release for this? I have an app that does all fetching beforehand and then renders pages from a shared cache of sorts, so I could see if this makes any difference. It is quite a bit smaller (around 800 pages), but we should still be able to see if there's any improvement.
@gacek1123 you can clone the repo and test it yourself via pnpm link: https://github.com/withastro/astro/blob/main/CONTRIBUTING.md#development https://pnpm.io/cli/link#replace-an-installed-package-with-a-local-version-of-it
Like I mentioned, if you perform all the fetch calls beforehand, cache the results, and share them between pages, does that help with the generation speed? That should make page generation even faster than raising the concurrency value.
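For readers following along, the behaviour being debated (a p-limit-style gate on the main thread, rather than worker threads) can be sketched as below. This is an illustrative reimplementation, not the PR's actual code: at most `limit` render tasks are in flight at once, and since everything stays on one thread, only the time they spend awaiting I/O actually overlaps.

```javascript
// Illustrative p-limit-style concurrency gate (not the PR's actual code).
// At most `limit` tasks run at once; everything stays on the main thread,
// so only the awaited I/O of concurrent tasks overlaps.
function createLimit(limit) {
  let active = 0;
  const queue = [];

  const runNext = () => {
    if (active >= limit || queue.length === 0) return;
    active += 1;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => {
      active -= 1;
      runNext();
    });
  };

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      runNext();
    });
}

// Demo: "render" 10 pages with concurrency 4 and track the peak overlap.
const limit = createLimit(4);
let running = 0;
let peak = 0;
const renderPage = (n) => async () => {
  running += 1;
  peak = Math.max(peak, running);
  await new Promise((r) => setTimeout(r, 10)); // simulated I/O wait
  running -= 1;
  return `page-${n}`;
};

Promise.all(Array.from({ length: 10 }, (_, i) => limit(renderPage(i)))).then(
  (pages) => console.log(pages.length, peak) // peak never exceeds 4
);
```

With `limit = 1` this degenerates to the fully sequential behaviour described in the previous comment: each page starts only after the previous one resolves.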
If you've not tested it by linking locally like @chaegumi mentioned, I can definitely cut a preview release for this. But it looks like the lockfile is borked, as it was installed with pnpm v8, so I'm not sure publishing will pass. We would have to fix that first.
My program was designed to do this, and I started using https://github.com/11ty/eleventy-fetch for request caching, but when my program cached the …
ok
!snapshot build-concurrency |
It looks like preview releases don't work on forks. :(
Ah, so you've already optimized network fetching; that's good to know. I'm curious what's still making page rendering slow. Anyway, I can't seem to cut a preview release, so we might have to wait for the rest of the team to discuss this feature.
English is not my native language, so my wording may be imprecise and caused a misunderstanding. My page rendering should not be slow: pages take 200 ms, 300 ms, 500 ms, 600 ms, etc., and some may take about 1 second.
Thank you for this helpful feature @chaegumi! 🙌
I am just reviewing the documentation part of the PR, and I've made some language suggestions. But I will need your knowledge to make sure it's still correct! Can you please make sure nothing is wrong with the docs in astro.ts, and then I will review the changeset to make the CHANGELOG message match?
Co-authored-by: Sarah Rainsberger <[email protected]>
Thanks @sarah11918, those suggestions look great!
Approving for docs! Let's go! 🚀
Thank you all for your help.
Hey 👋 Docusaurus maintainer here. I'm curious, what's the reason for Astro to use a concurrency of 1 by default?

To give some context: for Docusaurus we went the other way around. We started with a Webpack SSG plugin we didn't build, which rendered pages with unbounded concurrency. That worked well for us from 2020 to 2022, but some users reported memory spikes. This led us to implement a similar solution, using p-map (which is kind of the same as p-limit, but allows collecting a result): slorber/static-site-generator-webpack-plugin@03b6fab

Today we have internalized this code but still use p-map:

```js
const Concurrency = process.env.DOCUSAURUS_SSR_CONCURRENCY
  ? parseInt(process.env.DOCUSAURUS_SSR_CONCURRENCY, 10)
  : // Not easy to define a reasonable option default
    // See also https://github.com/sindresorhus/p-map/issues/24
    32;
```

Note that this code runs both CPU work (React renderToString, HTML minification) and IO (writing HTML files to disk).

I'm curious, did I miss something that prevents you from adding more concurrency by default? In the future I'm also planning to introduce a thread pool for SSG, so I'm curious to check your past experiments and the lessons you learned here. Now that we've migrated to Rspack, React SSG rendering has become a bottleneck for us.
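For context, the p-map behaviour mentioned above (a bounded-concurrency map that collects results in input order) can be sketched like this. This is an illustrative reimplementation, not p-map's actual source:

```javascript
// Illustrative p-map-style helper (not the real p-map source): map over
// items with at most `concurrency` mappers in flight, collecting results
// in input order regardless of completion order.
async function mapWithConcurrency(items, mapper, concurrency) {
  const results = new Array(items.length);
  let nextIndex = 0;

  // Each worker pulls the next unclaimed index until the input is drained.
  // JS is single-threaded, so the check-then-increment below is race-free.
  async function worker() {
    while (nextIndex < items.length) {
      const i = nextIndex++;
      results[i] = await mapper(items[i], i);
    }
  }

  const workerCount = Math.min(concurrency, items.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}

// Demo: double each value with at most 2 mappers running at once.
mapWithConcurrency([1, 2, 3, 4], async (x) => x * 2, 2).then((out) =>
  console.log(out) // [ 2, 4, 6, 8 ]
);
```

Writing into `results[i]` by index is what makes the output order stable even when later items finish first.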
My idea: a value of 1 preserves the original behavior, and setting a larger value is up to the user. Set it if you need it; if not, don't set it.
Thanks @chaegumi. A value of 1 is a safe, conservative default. What makes me curious is that, from the outside, it seems you discourage users from using the option and aren't really planning to increase that default. I was wondering why.
@slorber I'm probably the main proponent of keeping this behaviour, so I'll try to explain:
Btw, I noticed that Docusaurus uses a streaming render rather than renderToString.
Thanks for the explanations @bluwy, that makes sense. We don't have the same constraint on Docusaurus: we don't support async/await (yet) in components, and users can only load data before the SSG phase.

Regarding https://19.react.dev/reference/react-dom/server/renderToString: in our case I didn't measure precisely, but using streaming didn't seem much slower, and it looked like the newly recommended way, in particular if I later want to experiment with RSC run at build time using async/await/Suspense (although, similarly, we may not want to encourage users to do that; it can remain useful for flexibility).
@bluwy I tested on Docusaurus and saw no noticeable difference from moving back to renderToString. My intuition is that it was only faster for Astro because you didn't have concurrent builds in the first place. When rendering things sequentially, a synchronous call will always be a bit faster than an asynchronous one, but this gets mitigated by parallelism.
Interesting, thanks for checking that! Yeah, that could be it. I wonder if reducing or removing the parallelism in Docusaurus might reveal it better, but I won't bother you to test it 😅 If Node is able to manage async queues well enough, there might not be a significant perf difference.
Can confirm that a small local benchmark shows the non-streaming render being slightly faster:

```js
const html =
  process.env.DOCUSAURUS_AB_BENCHMARK === 'true'
    ? renderToString(app)
    : await renderToHtml(app);
```

```
hyperfine --prepare 'yarn clear:website' --runs 5 'DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=true yarn build:website --locale en' 'DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=false yarn build:website --locale en'

Benchmark 1: DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=true yarn build:website --locale en
  Time (mean ± σ):     42.539 s ±  0.371 s    [User: 78.937 s, System: 15.350 s]
  Range (min … max):   42.001 s … 42.844 s    5 runs

Benchmark 2: DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=false yarn build:website --locale en
  Time (mean ± σ):     45.323 s ±  2.018 s    [User: 80.985 s, System: 14.029 s]
  Range (min … max):   42.832 s … 48.423 s    5 runs

Summary
  DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=true yarn build:website --locale en ran
    1.07 ± 0.05 times faster than DOCUSAURUS_SSR_CONCURRENCY=1 DOCUSAURUS_AB_BENCHMARK=false yarn build:website --locale en
```

Note: this is the full website build, not just the SSG phase (which is ~15-20 s with concurrency=1). If I isolated the SSG phase alone, the percentage difference would be higher.
Very interesting! Thanks for testing it. Good to know that non-streaming is generally a little faster than streaming (for non-parallel rendering).
Changes

Add a configuration option `concurrency` to the build options. When it is greater than 1, parallel builds are used during the build.

Testing

pnpm run build

Docs

Documented the new `concurrency` option in the build options.
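Based on the docs described in this PR, enabling the option would look roughly like this in astro.config.mjs. This is a sketch only; the exact option shape and default in the released API may differ:

```javascript
// astro.config.mjs (sketch based on this PR's docs; exact shape may differ)
import { defineConfig } from 'astro/config';

export default defineConfig({
  build: {
    // 1 keeps the original sequential behavior; larger values let multiple
    // pages' async work (e.g. data fetching) overlap during the build.
    concurrency: 4,
  },
});
```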