fix: enormous memory consumption of the dev server #4148
Conversation
🦋 Changeset detected. Latest commit: ecf8fca. The changes in this PR will be included in the next version bump. This PR includes changesets to release 8 packages.
LGTM! Thanks for the PR!
ok this is good, but `resolversDynamicImport` needs to default to `true` on Vercel.
Secondly, add more info to the release notes describing why we have this, why we're changing it, that the new default is different from before, and that it defaults to `true` on Vercel.
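The flag requested above could be wired up roughly like this. This is only a hypothetical sketch: the name `resolversDynamicImport` comes from the review comment, but the config location and the Vercel detection are assumptions, not the actual Blitz implementation.

```javascript
// Hypothetical sketch (assumed config shape, not the real Blitz source).
// Vercel sets the VERCEL=1 environment variable during builds, which can
// be used to flip the default.
const onVercel = process.env.VERCEL === "1";

const config = {
  blitz: {
    // Default to dynamic imports (separate chunks per resolver) only on
    // Vercel, where each route runs as an isolated lambda; elsewhere keep
    // resolvers in the parent chunk to avoid re-bundling shared code.
    resolversDynamicImport: onVercel,
  },
};

module.exports = config;
```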
I just pushed a commit with a fix and refactor.
Added @jayu's contributions for docs and code
Thanks for helping with shipping this 🙏
There was no issue for this; I decided to fix the problem right away.
Context
We have a Blitz v2 app with ~290 queries and mutations.
Each query/mutation indirectly imports up to 900 source code files.
We use dependency injection, and as a result each endpoint imports the whole backend code.
We use next@12 and have SWC enabled.
Our dev server used to take up to 18GB of RAM, which is an enormous amount of memory for parsing and merging 900 source code files plus compiling the front-end.
The memory was allocated for a short period of time and then dropped back to 6-8GB.
It was killing a 2020 MacBook Pro with 16GB of RAM.
What are the changes and their implications?
I dug into the problem for quite a while and found out that the issue was essentially the use of dynamic imports for loading route code.
A dynamic import in Webpack automatically creates a separate chunk.
A separate chunk means a separate, isolated compilation of the code. Chunking backend code does not make sense unless it runs as a lambda function.
With dynamic imports and the size of our app, Webpack was bundling the same code ~290 times instead of doing it once.
That created huge memory peaks at fairly random moments, e.g. 30 seconds after
pages/api/rpc/[[...blitz]].ts
finished compiling. To fix that, I simply changed the dynamic import into a
require
call, which does not create a separate chunk in Webpack. We have been using the patched version for more than a week now and everything works fine. Compilation is faster, there are no more memory peaks, and most importantly we reduced the team's frustration.
Preparing a reproducible demo that showcases the memory usage might not be easy without real production code, but at least I can prove that the chunks are created for each query/mutation file when dynamic imports are used. Let me know if that's needed.
Bug Checklist
- Changeset added (`pnpm changeset` in the root directory)
- Integration test added (see test docs if needed)