[v2] npm start, build fails with JavaScript out of memory because of huge files #4785
Have you tried increasing the memory that you allocate to the Node.js process? Something like this might help for large sites:
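The code snippet this comment pointed to was lost in extraction. A common way to do what it suggests is to raise V8's old-space heap limit via the `NODE_OPTIONS` environment variable before invoking the build; the 8192 MB value below is an illustrative choice, not from the thread:

```shell
# Raise V8's old-space heap limit (in MB) for every Node.js process
# spawned by this shell. 8192 (8 GB) is an example value; size it to
# your machine's available RAM.
export NODE_OPTIONS="--max-old-space-size=8192"
echo "$NODE_OPTIONS"
```

After setting this in the same shell, `npm start` and `npm run build` inherit the larger heap limit.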
Tried changing that. Also, a note comes up:
We use MDX to parse the markdown content and transform it into React components. Unfortunately, it is probably difficult to process a file of that size, but there might be other ways to create that page. If the content is pure markdown, we may later provide an alternate md parser (#3018), which could be more memory efficient (but also more limited, due to the inability to use React inside markdown).

Why does this page need to be so long in the first place? Can it be split into multiple pages? Is the content auto-generated? My feeling is that this page is auto-generated, and generating markdown might not be the best solution in the first place. I would rather:
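The list that followed "I would rather:" was lost in extraction. One way to act on the "can it be split into multiple pages?" question is to cut an auto-generated file at its top-level headings so MDX never has to parse the whole thing at once. A minimal sketch, where `big.md`, the `## ` delimiter, and the `page-NNN.md` names are all hypothetical placeholders:

```shell
# Stand-in for the huge auto-generated file (placeholder content).
printf '## Section A\nalpha\n## Section B\nbeta\n' > big.md

# Start a new output file each time a "## " heading appears; everything
# before the first heading goes to page-000.md.
awk '/^## /{n++} {print > sprintf("page-%03d.md", n)}' big.md

ls page-*.md
```

For thousands of sections you would also want to `close()` each file in the awk script to avoid hitting the open-file limit.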
Will this alternative md parser be the key to making Docusaurus v2 viable for large doc sites? Or is there any other workaround? I've met a similar problem when switching from Docusaurus v1 to v2. I have a very, very huge doc site (containing 7000+ md files, auto-generated from *.d.ts files as API docs). Docusaurus v1 generates the site in about one or two minutes, but Docusaurus v2 is unable to: the process breaks with a strange error after one hour. Docusaurus v2 has more amazing features than v1 (actually, in my opinion, v1 is only "usable" but missing many configuration options), so I'm really looking forward to switching to v2. With some effort, I've resolved the parsing errors in md syntax, but I'm stuck at compile time.
@adventure-yunfei here we are talking about one very large file, not a lot of small files. Docusaurus 2 compiles each md file to a React component by default (with MDX), which adds some overhead if you don't need MDX. Allowing a less powerful parser could be useful, and would also make it easier to adopt for sites using CommonMark that aren't willing to change their docs during the migration. I suggest opening another issue dedicated to your specific problem; it will be hard to troubleshoot without a repro that I can run. Note we already have a few very large sites on Docusaurus, but yes, build time is definitely a pain point (ex: https://xsoar.pan.dev/docs/reference/index).
Closing in favor of #4765. We should work hard on reducing memory usage / build time, as I've heard many complaints about it.
🐛 Bug Report
npm start crashes with a "JavaScript out of memory" error after 42%.
npm run build crashes with a "JavaScript out of memory" error after 23%.
Error logs for npm start:
Error logs for npm run build:
Have you read the Contributing Guidelines on issues?
yes
To Reproduce
git clone https://github.com/covalentbond/docs-site.git
npm install
npm start
Expected Behavior
npm start does not crash and the server starts running at port 3000.
npm run build successfully builds the app.
Your Environment
migrate-docs
2.0.0-beta.0