The reason for the drastic reduction in handler size is that the revert goes back to the default aws-sdk that's built into the Lambda runtime, so no SDK code needs to be bundled. However, one problem is that importing even just the S3 part of aws-sdk v2 pulls in all S3 operations, which I found caused poor cold start times (added 100 ms or so). I did test this and will try to find the past issue and link it here. Now the code uses AWS SDK v3, which is not in the Lambda runtime yet, so its code has to be bundled. But all operations are modularized, so we end up importing only the S3 operations we need (get and put). I believe it only added a few ms (around 3 ms) to the cold start times. Let me double check that the dynamic imports are working correctly; I believe they were. But either way, even if they're not, the cold start should be much better.
Yes, I get that it is more efficient to be able to import just the parts used. But moving the old aws-sdk import to be as-needed, like I did in this pseudo-revert, avoids the cold start penalty until the fallback paths are hit (which might be never). My admittedly poor testing seemed to indicate a performance boost. But I agree this is likely not the correct change, like I wrote above.
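For concreteness, here is a minimal sketch of the lazy-import pattern being discussed (illustrative only, not the project's actual handler code; the function name and parameters are placeholders):

```ts
// Lazy-import sketch: the modular SDK v3 S3 client is only loaded (and only
// pays its init cost) when a fallback page actually needs S3, so regular
// SSG requests never touch it.
async function getFallbackPage(bucket: string, key: string): Promise<string> {
  // Dynamic import: a bundler can split this into its own chunk, and the
  // module is evaluated on first use rather than during cold start init.
  const { S3Client, GetObjectCommand } = await import("@aws-sdk/client-s3");
  const s3 = new S3Client({ region: process.env.AWS_REGION });
  const { Body } = await s3.send(
    new GetObjectCommand({ Bucket: bucket, Key: key })
  );
  // transformToString() exists on the Node.js response stream in recent
  // SDK v3 versions; older versions need manual stream handling.
  return (await Body?.transformToString()) ?? "";
}
```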
Ok, let me test it out again and confirm.
So this was the original issue: #580. I believe I saw ~250 ms cold start init time, but that was with AWS-SDK v2 (built into Lambda) being initialized on cold start outside the handler, so it ran every time even when unused; when actually using the S3 client there, init time was AFAIK negligible since the bulk was already initialized during cold start. Then with AWS SDK v3, I used the modularized client, which adds extra code to the handler but is dynamically imported. During cold starts I saw init time drop to ~180 ms, and the dynamic imports took ~3 ms, for a total of ~183 ms. I thought the cold start cost due to the S3 client was just ~3 ms, but yes, it seems there could be cost outside of the handler that a user would always pay for even if they don't use the S3 client. Yours is slightly different since you moved AWS-SDK v2 to be a dynamic import instead. So I need to measure the new cold start times: (1) outside-handler time (should be nothing added here besides other core initialization, thanks to the dynamic import) plus (2) dynamic import time. Will try to set up a test later this week.

Test results (will update as I add more). Using Node.js 12.x and 512 MB memory.

With AWS SDK v3 (current code):
- Fallback page S3 get (cold start): init duration is 193.22 ms, and the dynamic import takes < 1 ms (albeit it may not be doing much, since most of the code was already declared outside the handler).
- Regular SSG page get, which won't import any S3 code (cold start): init duration is 198.57 ms.

With your changes (using built-in AWS-SDK v2, dynamically importing the S3 client only when needed):
- Fallback page S3 get (cold start): init duration is 170.62 ms, and then in the origin response handler it takes 331 ms to dynamically initialize the aws-sdk v2 S3 client.
- Regular SSG page (cold start): as expected, just 179.05 ms, since there is no AWS-SDK v2 S3 initialization.
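For reference, a sketch of how the dynamic-import cost could be measured inside the handler (an assumed harness, not necessarily how the numbers above were collected; the cold start init duration comes from Lambda's REPORT log line, not from this code):

```ts
// Assumed measurement harness: time only the dynamic import itself.
// Lambda reports overall cold start init time separately in its REPORT line.
export const handler = async (): Promise<void> => {
  const start = process.hrtime.bigint();
  const s3 = await import("@aws-sdk/client-s3");
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`dynamic import of S3 client: ${elapsedMs.toFixed(2)} ms`);
  void s3; // reference the module so the sketch has no unused import
};
```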
In conclusion, yes, it does look like the use of AWS SDK v3 for its modularized S3 client is adding a bit of cold start time to all requests, maybe around 10-15 ms. Though I think that is much better than AWS-SDK v2, which takes ~330 ms to dynamically import the client (since it pulls in all S3 operations, not just GET/PUT), albeit that only occurs when the fallback path is hit. Ideally, AWS Lambda will make AWS SDK v3 available in the Lambda environment soon. Meanwhile, I think we can investigate how to get Rollup.js to inline just the S3 GET/PUT code into that line where we dynamically import the S3 client.
Thanks for confirming!
Yeah, or the S3 client could be bundled in a separate file that is dynamically imported.
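A sketch of that approach (file name hypothetical): a small module that re-exports just the needed S3 pieces, bundled as its own file and only loaded on demand:

```ts
// s3-client.ts (hypothetical separate bundle entry): re-export only the
// operations the handler needs, keeping the main handler bundle small.
export {
  S3Client,
  GetObjectCommand,
  PutObjectCommand
} from "@aws-sdk/client-s3";

// Handler side: load the separate file only on the fallback path, e.g.
//   const { S3Client, GetObjectCommand } = await import("./s3-client");
```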
So I tried some things, and we could solve it using a Rollup.js config like the one below (the plugin imports are added here for completeness and assume the standard Rollup plugin packages; `NPM_EXTERNALS` and `LOCAL_EXTERNALS` are defined elsewhere in the config file):

```js
// Imports assumed for completeness; the original snippet omitted them.
import json from "@rollup/plugin-json";
import commonjs from "@rollup/plugin-commonjs";
import nodeResolve from "@rollup/plugin-node-resolve";
import typescript from "@rollup/plugin-typescript";
import externals from "rollup-plugin-node-externals";
import { terser } from "rollup-plugin-terser";

const generateConfig = (input) => ({
  input: `./src/${input.filename}.ts`,
  output: {
    dir: "./dist/",
    entryFileNames: `${input.filename}${input.minify ? ".min" : ""}.js`,
    format: "cjs",
    manualChunks(id) {
      // Put all aws-sdk code into its own chunk so the dynamic import
      // actually defers loading it; everything else stays together.
      if (id.includes("aws-sdk")) {
        return "aws-sdk";
      } else {
        return "serverless-next-js";
      }
    }
  },
  plugins: [
    json(),
    commonjs(),
    externals({
      exclude: "@sls-next/next-aws-cloudfront"
    }),
    nodeResolve(),
    typescript({
      tsconfig: "tsconfig.bundle.json"
    }),
    input.minify
      ? terser({
          compress: true,
          mangle: true,
          output: { comments: false } // Remove all comments, which is fine as the handler code is not distributed.
        })
      : undefined
  ],
  external: [...NPM_EXTERNALS, ...LOCAL_EXTERNALS],
  inlineDynamicImports: false
});
```

The magic is basically in `manualChunks`, which forces all aws-sdk code into its own chunk.
It might just be a matter of playing around with the Rollup.js config to get to where we want. EDIT: for (2) it looks like we can just return
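Assuming Rollup's code splitting behaves as documented for CJS output, the effect of `manualChunks` together with `inlineDynamicImports: false` should be that the handler's dynamic import becomes a lazy require of a separate chunk, roughly like this (illustrative, not actual Rollup output; the chunk file name comes from the `manualChunks` return value):

```ts
// Rough shape of what the emitted CJS entry chunk should contain when the
// dynamic import is split correctly: the S3 code lives in ./aws-sdk.js and
// is only require()d, parsed, and evaluated on first use.
const loadS3 = () => Promise.resolve().then(() => require("./aws-sdk.js"));
```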
@dphang Again, no experience with rollup... This produces more or less the desired result, I think. But maybe too ugly, and it may not resolve types correctly?
Codecov Report

```
@@            Coverage Diff            @@
##           master    #1021    +/-   ##
========================================
  Coverage    82.32%   82.33%
========================================
  Files           69       70      +1
  Lines         2574     2575      +1
  Branches       613      613
========================================
+ Hits          2119     2120      +1
  Misses         389      389
  Partials        66       66
```

Continue to review full report at Codecov.
Yeah, no worries. I was trying to check if we can do it in Rollup, but it seems it's still doing some stuff outside the handler... it might not handle dynamic imports well? Maybe I'll create an issue in Rollup to ask how to do it correctly, as I couldn't figure it out. I will check your solution and see how it performs; I think it seems ok. EDIT: testing out a proper bundling config in #1029.
Obsolete
Looking at the generated lambda, the dynamic S3 client imports do not seem to do anything. The code for them gets bundled up, and there's just a wrapper promise in place of the `await import()`.

This is a not-for-merging partial revert of f9eef45 that changes it back to the previous approach. I didn't even fix the tests yet.
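For contrast, a hypothetical illustration of what the bundle looks like when the bundler inlines the dynamic import (this is not the actual generated output): the S3 module's code already lives in the handler file, and the `await import()` collapses to a promise wrapper that defers nothing:

```ts
// Hypothetical shape of the inlined dynamic import: the module body is
// already declared in this bundle, so the "import" only adds a microtask
// and defers no parsing or initialization work.
const s3Module = { /* ...bundled @aws-sdk/client-s3 code... */ };
const loadS3 = () => Promise.resolve().then(() => s3Module);
```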
There's probably a better way to solve this – I've never used rollup so I didn't try any packaging changes – but this more than halved the default-handler size:
Before: (screenshot of handler size, not captured here)
After: (screenshot of handler size, not captured here)
It seems to have shaved some memory and initialization time off the lambda as well, although that was just based on a few poorly controlled test runs. But I've not measured what happens to fallback response times.