Memory leak when logging large amount of logs #1871
I forgot to mention that even if the log level is lower than the log level currently set up, the problem occurs (as in my example script).
I had this issue; I recommend moving to pino if you are generating a large amount of logs, even if your log level is high. It took 10 minutes to read the docs and set up, if you use …
Surely there must be some other solution to this other than completely moving to pino! Any guidance? My fairly simple Express app running via pm2 is reporting over 1.2 GB of memory usage!
We (maintainers) are extremely interested in any help from the community in investigating this issue and proposing a fix, especially if this is still happening in 3.5.0. Folks have provided some pretty small examples like in this thread, so it shouldn't be impossible to determine which objects are leaking, and where those get allocated?
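Not from the thread itself, but for anyone picking this up, here is a minimal sketch of one way to narrow down the leaking objects: take a heap snapshot before and after the logging loop with Node's built-in v8 module, then diff the two snapshots in Chrome DevTools. The logger configuration below is only an assumption for illustration.

```js
const v8 = require('v8');
const winston = require('winston');

// Assumed setup for illustration: a single Console transport with a high level,
// since the reporter noted that even filtered log calls make memory grow.
const logger = winston.createLogger({
  transports: [new winston.transports.Console({ level: 'error' })]
});

// Writes a .heapsnapshot file to the working directory and returns its name.
console.log('before:', v8.writeHeapSnapshot());

for (let i = 0; i < 500000; i++) {
  logger.info(`message ${i} ${'x'.repeat(100)}`);
}

console.log('after:', v8.writeHeapSnapshot());
console.log('heapUsed MB:', (process.memoryUsage().heapUsed / 1048576).toFixed(1));
```

Loading both files in the Memory tab of Chrome DevTools and using the "Comparison" view should show which constructors account for the retained growth between the two snapshots.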
@DABH I think I've cracked it. The root of the problem is that readable-stream's `onwrite` defers the after-write callback handling to `process.nextTick` whenever `state.sync` is true, so under constant logging the pending callbacks pile up faster than they get drained. The fix then is setting `sync` to `false`:

```js
function onwrite(stream, er) {
  var state = stream._writableState;
  var sync = false; // state.sync;
  ...
```

This completely eliminates the massive backlog, as the pending callbacks are called as they come in, and allows the memory to be managed in real time instead of waiting for some arbitrary(?) amount of logging downtime to clear the pending callbacks.

Video 1: as is, peak = 31.6 MB
Video 2: hardcoding `sync = false`, peak = 9.5 MB

What do you think? How can this be fixed ASAP?
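To see the mechanism described above outside of winston, here is a rough sketch of my own (an illustration, not code from this comment): a Writable whose callback completes synchronously gets its after-write bookkeeping scheduled with process.nextTick, and a tight synchronous loop never yields to let that queue drain.

```js
const { Writable } = require('stream');

// The write callback is invoked synchronously, so inside the stream
// internals' onwrite() state.sync is true and the after-write cleanup is
// scheduled with process.nextTick() instead of running immediately.
const sink = new Writable({
  objectMode: true,
  write(chunk, encoding, callback) {
    callback();
  }
});

// A tight synchronous loop never yields, so the queued nextTick callbacks
// (one per write) cannot run until the whole loop has finished.
for (let i = 0; i < 1e6; i++) {
  sink.write({ seq: i });
}

const mb = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`heapUsed right after the loop: ${mb.toFixed(1)} MB`);
```

Once the loop returns and the nextTick queue drains, that memory can be reclaimed; the problem in a long-running logger is that heavy logging keeps pushing the drain further out.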
Hmm, why would sync ever be set to true then? If that's a memory issue, why wouldn't that be hard coded to false in readable-stream? Your evidence is pretty compelling but I wonder if this means e.g. Winston is just not using readable-stream correctly, or if there's some use case we aren't aware of where we'd want that value to be true? I am supportive of fixing this ASAP, so no worries on that front, but I just want to ensure we are considering every possibility and not making any unintentional breaking changes. I also seem to remember Winston was using an older version of readable-stream; you could double check whether that's still the case, and if so, whether upgrading that dependency version would help with this issue. Huge thanks for your investigation into this, we will give you some giant shoutouts somehow if you are able to figure the rest of this out and we get a PR merged! :)
I'm guessing it's this way because a writable prioritizes writing over clearing the pending callbacks. When a program has a lot of consistent logging, it might take a while to get to process.nextTick. Just need to figure out a way to override stream._writableState.sync on the way to onwrite.
Hmm, I wonder if changing this could negatively impact logging throughput then? Maybe we should make sync false by default, but let it be configurable in case people want high write throughput and aren't worried about callbacks? Can always add plumbing code to set that parameter…
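Purely to illustrate the kind of plumbing being floated here (the option name and wiring below are hypothetical sketches, not an existing winston or winston-transport API), a custom transport could expose the behaviour as an opt-out:

```js
const TransportStream = require('winston-transport');

class ExampleTransport extends TransportStream {
  constructor(options = {}) {
    super(options);
    // 'forceAsyncCallbacks' is an invented name for this sketch: default to
    // the low-memory behaviour, with an opt-out for maximum throughput.
    this.forceAsyncCallbacks = options.forceAsyncCallbacks !== false;
  }

  log(info, callback) {
    if (this.forceAsyncCallbacks && this._writableState) {
      // Make onwrite treat this write as async, so the pending-callback
      // bookkeeping runs immediately instead of via process.nextTick.
      this._writableState.sync = false;
    }
    process.stdout.write(`${info.level}: ${info.message}\n`);
    callback();
  }
}

// Usage sketch: opt back into the default stream behaviour if throughput matters more.
// new ExampleTransport({ forceAsyncCallbacks: false });
```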
Also, did you check if we're using the latest readable-stream version and if that might impact anything?
Adding the sync override to this line results in a max mem of 20.3 MB, which is pretty much exactly midway between 9.5 MB and 31.6 MB, which means this only solves exactly half the problem. Where else is this issue? Here: Line 318 in 237534a.

Adding the same override immediately before that line results in a max memory usage of 9.4 MB.

Updating readable-stream won't make a difference, based on the fact that the code for onwrite is still the same; recent version changes on that package have been addressing Node version compatibility updates.

IIUC (I'm no wiz at the underlying Node internals), winston is using Writable for something other than its original intended use case (file streams, etc.): a traditional write eventually ends, whereas logging in a Node process can be constant. So if winston implements the sync override, there's not much downside from the logging perspective.

If you test these changes and confirm locally, then I don't mind writing up a PR to fix this once and for all (or you can just do it directly, I don't mind that either; tbh it might be faster). I very much do not want to rip out a logging library I just implemented in prod.

winston is used for complicated logging with multiple transports with nary a spare tick; constantly growing memory usage that results in production processes needing to be restarted takes precedence over maximum logging throughput IMO, since there are other libs out there that do not have this exact issue.

Regarding parameterizing it, I don't believe that should be a priority at this very moment. I'm sure one day someone will make an issue about it and then it can be added quickly. The fix is important. Let me know next steps. Thanks
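For anyone wanting to reproduce this kind of before/after peak comparison, here is a small harness of my own (the burst size, sampling interval, and logger setup are arbitrary choices, not taken from this thread) that records the peak heapUsed while a repro loop runs:

```js
const winston = require('winston');

const logger = winston.createLogger({
  transports: [new winston.transports.Console({ silent: true })]
});

let peak = 0;
const sampler = setInterval(() => {
  peak = Math.max(peak, process.memoryUsage().heapUsed);
}, 10);

const max = 1e6;
let i = 0;

(function pump() {
  // Log in bursts of 10k, then yield so the sampler and any pending
  // stream callbacks get a chance to run between bursts.
  for (let n = 0; n < 10000 && i < max; n++, i++) {
    logger.info(`message ${i} ${'x'.repeat(100)}`);
  }
  if (i < max) return setImmediate(pump);
  clearInterval(sampler);
  console.log(`peak heapUsed: ${(peak / 1048576).toFixed(1)} MB`);
})();
```

Running the same harness with and without the proposed override should make the difference in peak memory visible without needing to watch the process externally.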
Feel free to open a PR! Thanks for hunting down the necessary places to make this change. I believe this is a non-breaking change so we could release this in the next release (3.6.0) |
I have made 2 PRs to resolve this issue: one here and one in winston-transport (winstonjs/winston-transport#81). Thanks
…callbacks (#2057)
* Memory leak fix: do not wait process.nextTick. Addresses the memory leak identified here: #1871 (comment)
* resolve linting failure
Hello community, what is the state of this issue? We are facing the same problem, but it looks like there has been no more work on it since it was created in 2020. Is it waiting for something?
Please tell us about your environment:
- winston version? winston@3 (v3.3.3)
- node -v outputs: v14.15.0
- Operating System? macOS and Linux (Docker)
- Language? ES6 CommonJS

What is the problem?
When logging a large amount of logs, the memory keeps increasing without ever decreasing at any time.
You can see an example script here: https://gist.github.com/TabestoJulienDuvignau/ec565b6ec2d788022a068be7f875ea87
Or run it here: https://codesandbox.io/s/solitary-cache-4r7v6?file=/src/index.js
You can increase the number of loop iterations performed (the max variable) to see the memory usage increase. (A rough sketch of this kind of loop is included at the end of this report.)

What do you expect to happen instead?
The memory used should be stable and decrease at the end of the processing.
Other information
The script is very aggressive, but it correctly highlights the issue we have been facing in our production application since we implemented Winston.
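The gist and sandbox linked above contain the reporter's actual script; as a rough reconstruction of what that kind of repro looks like (the message contents, levels, and max value here are my own guesses, not the reporter's code):

```js
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()]
});

const max = 1e6; // increase this to make the growth easier to see

for (let i = 0; i < max; i++) {
  // Per the comments above, even messages below the configured level
  // (logger.debug here) still make memory grow.
  logger.debug(`iteration ${i} ${'x'.repeat(200)}`);
}

const used = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`heapUsed after the loop: ${used.toFixed(1)} MB`);
```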