Memory issue with version v3.667.0 #6553
Comments
I know there is very little information in this ticket. I don't have much time to investigate.
Hello!
In our case it is also making our Nest app go OOM on startup.
I can confirm that our deployments run out of memory during startup too, when upgrading from …
Same here with Node.js / Nest and these dependencies: …
Same here, the memory leak occurred with these AWS deps: …
Rolling back to …
Hi, we are marking the 3.666.0 series of …
I'm sorry for not catching this bug. A fix has been made in PR #6555, and we will release the new version later today. The root cause is that the user agent middleware calls the memoized credentials provider function; the credentials provider may itself perform an SDK operation (e.g. against STS), and during that invocation it loops back into the same middleware. After resolving the immediate issue, we will investigate how to improve test coverage to avoid recurrences.
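To make that failure mode easier to picture, here is a minimal, hypothetical sketch (TypeScript; not the SDK's actual middleware code, and all names are made up) of how a middleware that awaits a credentials provider can re-enter itself when the provider performs its own SDK call:

```ts
// Toy sketch of the re-entrancy described above (illustrative only).
// A middleware awaits a credentials provider; the provider itself performs
// an SDK call, which runs the same middleware again before the first
// resolution ever completes.

type Credentials = { accessKeyId: string; secretAccessKey: string };

// Hypothetical provider that must call another service (e.g. STS) to obtain
// credentials. That inner call goes through the same middleware stack.
const resolveCredentials = async (): Promise<Credentials> => {
  await send("sts:AssumeRoleWithWebIdentity"); // inner SDK operation
  return { accessKeyId: "AKIA...", secretAccessKey: "..." };
};

async function userAgentMiddleware(operation: string, next: () => Promise<void>) {
  // Awaiting the provider here re-enters send(), which invokes this
  // middleware again, which awaits the provider again, and so on.
  // (In the real issue the provider is memoized, but per the description the
  // loop forms during the first, still-unresolved invocation.)
  await resolveCredentials();
  await next();
}

async function send(operation: string): Promise<void> {
  return userAgentMiddleware(operation, async () => {
    /* sign + dispatch the request here */
  });
}

// Kicking off any operation never settles; pending promises accumulate
// until the process runs out of memory.
// void send("s3:GetObject");
```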
https://github.com/aws/aws-sdk-js-v3/releases/tag/v3.668.0 has been released with what I believe is the fix. That said, does anyone have a more specific reproduction setup for this issue?
I don't have a full repro repo... I can probably create one if needed? We were seeing it in a Node API running in K8s with a service role granting … A request would come in:
Our …
It would crash immediately...
We've observed the same behaviour. We have a simple tool that includes the AWS SDK as part of its dependencies. It instantiates an S3 client before doing anything else, using an OIDC token from GitLab CI. When the tool runs on a container with 2 GB of RAM, if using version …
Edit: now I remember we were initially getting crash loops on startup, increased resource limits to get around that, and then saw the crashes when calling the client. Fog of war.
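For context, a setup like the one described above would look roughly like the following sketch. This is an assumption about the configuration, not the reporter's actual code; it uses the standard `AWS_ROLE_ARN` / `AWS_WEB_IDENTITY_TOKEN_FILE` variables a GitLab CI `id_tokens` job can provide, the region and bucket name are placeholders, and ESM top-level await is assumed:

```ts
// Hypothetical startup path (assumed, not the reporter's actual code):
// an S3 client built from a GitLab CI OIDC token via the web-identity token
// file. Resolving these credentials requires an STS AssumeRoleWithWebIdentity
// call, i.e. an SDK operation happens during credential resolution.
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";
import { fromTokenFile } from "@aws-sdk/credential-providers";

const s3 = new S3Client({
  region: process.env.AWS_REGION ?? "eu-west-1",
  credentials: fromTokenFile({
    roleArn: process.env.AWS_ROLE_ARN, // assumed CI variable
    webIdentityTokenFile: process.env.AWS_WEB_IDENTITY_TOKEN_FILE, // assumed CI variable
  }),
});

// The first request forces credential resolution (AssumeRoleWithWebIdentity).
await s3.send(new ListObjectsV2Command({ Bucket: "example-bucket" }));
```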
@kuhe |
Not all packages get updated to every version. The clients on version 3.668.0 still use …
It looks like container credentials may be the precondition, which I'll investigate.
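If container credentials turn out to be the precondition, one way to test that in isolation is to pin the provider explicitly rather than relying on the default chain. A sketch under that assumption (region and bucket are placeholders):

```ts
// Sketch of a repro attempt that forces the container-credentials path
// (ECS/EKS metadata endpoint) instead of a profile. All names are placeholders.
import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";
import { fromContainerMetadata } from "@aws-sdk/credential-providers";

const client = new S3Client({
  region: "us-east-1",
  credentials: fromContainerMetadata({ timeout: 1000, maxRetries: 0 }),
});

// Only meaningful inside a container that exposes the credentials endpoint
// (AWS_CONTAINER_CREDENTIALS_RELATIVE_URI / AWS_CONTAINER_CREDENTIALS_FULL_URI).
await client.send(new HeadBucketCommand({ Bucket: "example-bucket" }));
```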
Using a profile, I couldn't recreate it locally with a basic benchmark, so that's quite possible... I've attached some nasty code if it helps. You can change the client version in …
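The attached code isn't reproduced here, but a minimal memory check along these lines (an assumed shape, not the actual attachment) makes it easy to compare behaviour across pinned `@aws-sdk/client-s3` versions:

```ts
// Rough memory check (assumed shape, not the commenter's attached code):
// log heap usage around client construction and the first request, then
// re-run after installing a different pinned @aws-sdk/client-s3 version.
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const heapMb = () => Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
const log = (label: string) => console.log(`${label}: ${heapMb()} MB heap`);

log("before client construction");
const client = new S3Client({ region: "us-east-1" }); // default credential chain
log("after client construction");

await client.send(new ListBucketsCommand({}));
log("after first request");
```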
I believe this is fixed with v3.668.0, so I'll be closing the issue soon unless anyone reports it as persisting in v3.668.0. I'll comment with the root-cause details once I'm able to determine the preconditions.
My testing shows the issue affected …
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.
Checkboxes for prior research
Describe the bug
We have three services using the `aws-sdk` v3.667.0.
This is the memory usage per version:
Regression Issue
SDK version number
@aws-sdk/…@3.667.0, @aws-sdk/…@3.667.0
Which JavaScript Runtime is this issue in?
Node.js
Details of the browser/Node.js/ReactNative version
v22.4.1
Reproduction Steps
Run `npm start`: the container immediately runs out of memory.
Observed Behavior
Memory increases at boot time
Expected Behavior
Memory does not increase at boot time
Possible Solution
No response
Additional Information/Context
No response