[Bug]: MaxListenersExceededWarning when using newer AWS SDK version #500
Comments
Hey, thanks for raising an issue. 3.6 of the SDK would be a major version; we don't naturally support major version releases as they come out, because they come with breaking changes like this. It would be something that we'd have to look at for our next major/minor release ourselves. I am rather busy at the moment though, so I would suggest staying on 3.5 for a little while longer. We'd be happy to take a PR though if this is an issue for anyone.
Hey, thanks for the quick response. I can reproduce this issue just by upgrading to 3.582.0, which if I understood correctly is part of 3.5, right? 🤔 I can look into it and open a PR. My fix during the test was very simple: passing a new `AbortController` after each successful poll.
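For illustration, the shape of that workaround might look roughly like the sketch below. This is not the actual sqs-consumer internals; the helper name and placement are hypothetical, and the point is simply that each receive call gets its own controller instead of registering listeners on one long-lived signal.

```typescript
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

// Hypothetical helper: one fresh AbortController per poll.
async function pollOnce(sqs: SQSClient, queueUrl: string) {
  const controller = new AbortController();
  const result = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl: queueUrl, WaitTimeSeconds: 1 }),
    { abortSignal: controller.signal }
  );
  // Once the request settles, this controller and any 'abort' listeners the SDK
  // attached to its signal become unreachable and can be garbage collected.
  return result;
}
```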
Ahh sorry, I thought you said it was 3.6; the PR you linked also seems like a breaking change to me. If I have time I might be able to look into it. In terms of a possible solution, that might be it, although I've not looked too deep yet.
I'm having the same issue and can confirm that this is not just a warning but continues to leak until the process is out of memory.
If I have spare time this week I'll take a look; unfortunately, until then, I'd have to ask that everyone bears with me. If anyone has the time themselves to look, that'd be super helpful, but if not, you'll have to wait on me. Unfortunately, it has come at a particularly busy time for the BBC.
I've possibly been able to resolve this by pinning the `@smithy/node-http-handler` package (which is used internally by the `@aws-sdk` packages) to version 3.0.1. With this, I have been able to upgrade the `@aws-sdk` packages to version ^3.600.0. This is not a long-term solution, but it provides a bridge until the root cause is addressed. So, in short, add the following to your dependencies in your `package.json`:
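A sketch of that pin, based on the versions mentioned above (your other dependencies will differ; depending on your package manager you may also need an `overrides` or `resolutions` entry to force the version used by the SDK's transitive dependency):

```json
{
  "dependencies": {
    "@aws-sdk/client-sqs": "^3.600.0",
    "@smithy/node-http-handler": "3.0.1"
  }
}
```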
Note: After doing this, you will need to delete your lock file (and node_modules) and reinstall so that the pinned version is actually used. I've deployed this change into our systems and will monitor memory usage over the next few days, but the warning no longer presents. I hope this is helpful.
Awesome! So it is a change in the next version that causes the issue; that makes it easier to figure out what's going on, thanks! (This one: https://github.com/smithy-lang/smithy-typescript/releases/tag/%40smithy/node-http-handler%403.1.0, which is the change mentioned at the top, spot on!)
On further research, this may be because the aborts are not automatically being cleaned up; there's an old Node issue around this: nodejs/node#46525. Possibly because smithy is adding event listeners (https://github.com/smithy-lang/smithy-typescript/pull/1308/files#diff-98216b22e6b39a1cd6f2ab013304d4964152b554c4ad8fee4182a961b42285d8R192) but not removing them? Update: I've opened an issue around this here: smithy-lang/smithy-typescript#1318
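As a stand-alone sketch of why that would produce this warning (illustrative only; the listener bodies are placeholders, not the SDK's code): once more than ten 'abort' listeners are attached to a single AbortSignal and never removed, Node emits a MaxListenersExceededWarning.

```typescript
// Simulates many requests each adding an 'abort' listener to the same
// long-lived AbortSignal without ever removing it.
const controller = new AbortController();

for (let i = 0; i < 11; i++) {
  controller.signal.addEventListener("abort", () => {
    // placeholder for "cancel this in-flight request"
  });
}

// On the Node versions discussed in this thread (e.g. 18.20.2), adding the
// 11th listener triggers a MaxListenersExceededWarning for the AbortSignal
// (run with `node --trace-warnings` to see where the listeners were attached).
```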
Update on this: I have published a reproduction, and with this I have replicated the issue. Still trying to figure out the best way to resolve it though.
@erichkuba I still have the issue. Node version: 18.20.2. I tried pinning the `@smithy/node-http-handler` package and the issue still persists.
Hey, I'm still waiting on a response from AWS on the issue I raised above. Outside of that, the current suggestion is to do what was described here: #500 (comment), or pin your AWS SDK version to before this change. Beyond that, if anyone has any ideas for how it might be resolved in SQS Consumer, feel free to open a PR; we are in an election period at the moment, so I'm strapped for time. Could we avoid adding "me too" comments though, please? This just fills up the thread and makes it unreadable for other people coming in.
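For example, a `package.json` pin to the last version this thread reports as unaffected (3.576.0, from the bug report below) would look something like this; any version below 3.582.0 should behave the same:

```json
{
  "dependencies": {
    "@aws-sdk/client-sqs": "3.576.0"
  }
}
```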
@joseamt Thanks for bringing this up. I could have been clearer in #500 (comment), and have updated it accordingly. You need to pin the `@smithy/node-http-handler` package to version 3.0.1 as described there. I hope that gets you across the line.
@erichkuba Thanks for the update, it worked 👍 👍 👍! Please add comments here if you get any updates from AWS. Cheers 👍 💯
A PR has been raised by another member of the BBC that may fix this: smithy-lang/smithy-typescript#1331. Still waiting on a response from AWS on the issue, but hopefully they'll review this as a fix.
I had a little time to mess around with this today. I ran the consumer with these options:

```typescript
import http from "node:http";
import https from "node:https";
import { SQSClient } from "@aws-sdk/client-sqs";
import { NodeHttpHandler } from "@smithy/node-http-handler";
import { Consumer, ConsumerOptions } from "sqs-consumer";

let requests = 0;

const options: ConsumerOptions = {
  queueUrl: "http://127.0.0.1:4566/000000000000/sqs-consumer-test",
  sqs: new SQSClient({
    endpoint: "http://127.0.0.1:4566/000000000000/sqs-consumer-test",
    // This is the latest version of @smithy/node-http-handler
    requestHandler: new NodeHttpHandler({
      // Disable keepAlive in order to keep as little state around in memory as possible
      httpsAgent: new https.Agent({ keepAlive: false }),
      httpAgent: new http.Agent({ keepAlive: false }),
    }),
  }),
  // Artificially fast in order to test the memory pressure over time
  pollingWaitTimeMs: 5,
  waitTimeSeconds: 0.01,
  postReceiveMessageCallback: async () => {
    requests += 1;
    if (requests % 20 === 0) {
      // Print the memory usage
      console.log(`${requests}\t\t${process.memoryUsage().heapUsed}`);
    }
  },
  handleMessage: async () => {},
};
```

Then I run the scripts with:

```sh
tsc && \
# Artificially limit the memory available to ensure that the Garbage Collector kicks in regularly \
NODE_OPTIONS='--max-old-space-size=14' \
AWS_REGION=eu-west-1 AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=456 \
node ./index.js > some-data-file.tsv
```

Results

Here are some results for:
Now the interesting bit.... These results are for:
Discussion / Conclusions

In the graph above, it's clear that the best memory performance comes when not using any […]. From these measurements, I can say that we could make changes to […].
In the meantime I'll raise another PR for the smithy-typescript repository.
As promised: smithy-lang/smithy-typescript#1332
@paulbrimicombe is this error fixed?
I've just run our Docker setup for 5 minutes (https://github.com/bbc/sqs-consumer-starter/tree/main/examples/docker) and got no errors. This is with […]. If we don't see any further issues, we'll release v11 to the latest tag to indicate our support. Thanks @paulbrimicombe btw!
Hey, we've now released v11.0.0 of the package (https://github.com/bbc/sqs-consumer/releases/tag/v11.0.0). I'm going to close this issue as it looks like the update to node-http-handler fixed the issue. Please feel free to comment if the issue crops up again; if we receive no further comments, this issue will be auto-locked in 30 days.
@nicholasgriffintn @paulbrimicombe @erichkuba Thanks for the update and support!! 👍 💯
This issue has been closed for more than 30 days. If this issue is still occurring, please open a new issue with more recent context.
Describe the bug
Hi,
Recently we tried to upgrade our dependencies; as part of this we tried to upgrade `@aws-sdk/client-sqs` from v3.576.0 to v3.600.0. After the upgrade I noticed we are getting `MaxListenersExceededWarning` warnings. After looking into it, it turns out that there were changes in one of the SDK dependencies, and the issue can be triggered starting from version v3.582.0. The change: smithy-lang/smithy-typescript#1308.
I believe the issue is due to reusing the same abort controller in each request; if I set a new `AbortController` after a successful poll, the error goes away.

The full warning:
Your minimal, reproducible example
https://codesandbox.io/p/sandbox/upbeat-jerry-k6pplg
Steps to reproduce
1. Get the example from the Code Sandbox
2. Start docker compose to run the SQS mock service
3. Start the example script: `NODE_OPTIONS="--trace-warnings" DEBUG=sqs-consumer npm run start`
4. Observe the warning
Otherwise:
- Use an `@aws-sdk/client-sqs` version >= 3.582.0
- Set `WaitTimeSeconds` in the `ReceiveMessageCommand` to 1 second to trigger the warning faster (a minimal sketch follows this list)
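A minimal sketch of that alternative reproduction path (the queue URL is a placeholder pointing at a local mock; region and credentials are assumed to come from the environment):

```typescript
import { SQSClient } from "@aws-sdk/client-sqs";
import { Consumer } from "sqs-consumer";

const consumer = Consumer.create({
  queueUrl: "http://127.0.0.1:4566/000000000000/sqs-consumer-test", // placeholder / mock queue
  sqs: new SQSClient({}), // @aws-sdk/client-sqs >= 3.582.0 with its default request handler
  waitTimeSeconds: 1, // short long-poll wait so the warning appears sooner
  handleMessage: async () => {},
});

consumer.start();
// After enough polls, Node emits a MaxListenersExceededWarning about abort listeners.
```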
Expected behavior

There shouldn't be a warning.
How often does this bug happen?
Every time
Screenshots or Videos
No response
Platform
Package version
v10.3.0
AWS SDK version
>=3.582.0
Additional context
No response