[6.x] Memory leak on Redis queue when using relation #30114
Comments
I don't have time at the moment to deep dive into this, so I'd appreciate it if anyone could help out here.
Most leaks have been traced back to bugs in the application. @paul-thebaud please:
I have the same problem running the queue with Redis.
@paul-thebaud I can confirm the leak; I followed your instructions from #30114 (comment). I pushed 1000 jobs each time I did a test. With your vanilla code:
When I remove the Eloquent code (no constructor args, not storing anything in the job), I got this:
So it seems the basic system itself is already leaking memory, but with an Eloquent model in the mix it's amplified. My takeaway from this is that the leak itself is connected to the job class. The "smallest" job class (doing basically nothing) already leaks, so I wonder if something is holding a reference to it or some other information about it (or its past). The empty Job class consumed 12 kB after 1000 runs: that would make …
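For reference, a minimal sketch of the kind of "empty" job described above; the class name is illustrative and not from the thread:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// No constructor arguments and no stored state; the job does nothing on purpose,
// so any memory growth per run points at the queue/worker machinery itself.
class EmptyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle()
    {
    }
}
```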
@mfn Thank you so much for this analysis! Yes, that's what I was thinking too; it seems the Job reference is kept somewhere, maybe in the event pipeline or something like that?
FTR, when I tested it my environment was:
However, to make double sure nothing is tainted, I made a from-scratch installation:
Outcome:
FTR, here's the complete output: php artisan queue:work log
As becomes evident, there are very long stretches where "After running" is identical.
I've then modified the Job. Outcome:
Complete log: queue:work with changed job
Turns out at least the method is useful to cross-reference other issues, one of which I even commented on 😅: see #26952. And this one also has a comment from me (the plot thickens!): #16783. The best line is this one from me:
🤔 Anyway, it seems that calling …
I've already tried to call …
There are two memory leaks:
Edit: turns out I just optimized it for faster GC.
Could not reproduce the error under PHP 7.3.10. Env:
After 10 MB there is a larger GC run which detects the closure reference and removes the objects.
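To illustrate the point, a minimal sketch (placement inside the worker loop is an assumption, not from the thread) showing how one could watch this behaviour: memory climbs until PHP's cycle collector runs, then drops again.

```php
<?php

// Hypothetical probe: print memory, force a cycle-collector run, print again.
// Without the forced run, PHP only triggers cycle collection once its internal
// root buffer fills up, which is why memory appears to "leak" for a while.
echo 'before GC: ' . memory_get_usage() . " bytes\n";
$collected = gc_collect_cycles();   // returns the number of collected cycles
echo 'after GC:  ' . memory_get_usage() . " bytes ({$collected} cycles collected)\n";
```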
@netpok Thanks for the PR and investigation. So the error might be due to PHP 7.3.9 and lower?
@paul-thebaud You should try it, but I don't think so. Try running your shared code. Also, this is probably amplified if you have more or nested eager loads.
@netpok I will try those on Monday and send you an update with my thoughts. I'm already using 7.3.10 on my server; this was my local environment. Thanks for the help!
@netpok I tried on PHP 7.3.10 (and PHP 7.2.22) and Laravel 6.2, and yes, it seems to be clearing the memory at ~20 MB on my local env. But I do not think the issue should be closed, because the memory leak exists and is more pronounced when there are multiple Eloquent relations.
I wouldn't call it a memory leak; it's just not optimized. Is there a chance you are using Bugsnag in your jobs?
Yes, but on some systems with low memory it can make the worker crash unexpectedly. No, I do not use Bugsnag. Why?
I had a similar problem but it turned out the reason was that the …
Ah, OK. Thanks for the information.
What's the general consensus now here? Is this a bug or not?
@driesvints The problem was mainly due to an unoptimized relation feature in the Eloquent Builder, which was delaying garbage collection. Due to my low-memory server and the multiple processes run by Horizon, my queue worker dyno was crashing every 6 hours. So this is not a bug or a memory leak, but a lack of optimization which only affects a few systems (depending on jobs, memory, etc.). The problem should be fixed by @netpok's PR, so the issue should be closed after that PR is merged and released.
@paul-thebaud Did this fix your problem?
@netpok I don't know. But since we put each job on a different queue (high, default, low) and changed Horizon's job balancing to "auto", we haven't had the memory leak anymore. Thanks for your help!
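For anyone trying the same workaround, a sketch of what that Horizon setup could look like (supervisor name, process count, and tries are illustrative, not taken from the thread):

```php
<?php

// config/horizon.php (excerpt)
return [
    // ...
    'environments' => [
        'production' => [
            'supervisor-1' => [
                'connection' => 'redis',
                'queue'      => ['high', 'default', 'low'], // one queue per priority
                'balance'    => 'auto',                     // let Horizon spread workers across queues
                'processes'  => 10,
                'tries'      => 1,
            ],
        ],
    ],
];
```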
That is such a great thread! Thanks, everyone. @paul-thebaud - Can you explain why switching to different queues solves the problem?
Hi @amosmos! Oh, it's been a while ^^ I can't remember, sorry... But now we have a scheduled command which restarts Horizon, to avoid memory growth. Are you experiencing this issue?
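A sketch of such a scheduled restart, assuming the Laravel scheduler is used; the frequency is a placeholder since the thread doesn't state it:

```php
<?php

// app/Console/Kernel.php (excerpt)
use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule)
{
    // horizon:terminate asks Horizon to finish its current jobs and exit gracefully;
    // the process monitor (e.g. Supervisor) then starts a fresh worker with a clean heap.
    $schedule->command('horizon:terminate')->daily();
}
```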
Yes, we do too :-) So you ended up where you started from, with a scheduled Horizon termination? I just did the same thing on my app. I wonder if it's just better to stop using Horizon and instead use a regular worker, activated on Forge without a daemon (I assume that's actually using …
On our side, we are keeping Horizon with a proper termination (without killing it while it is currently running a job). So I think it is not a problem, but you need to correctly terminate Horizon. As there is a new memory leak inside PHP 7.4 (which we use), I think we'll keep this to avoid memory leaks...
I kind of stopped using Horizon on new projects, so for now I'll try to run the queue without a daemon and see how that works. THANKS!
Redis client: predis
Description:
It seems that the worker is leaking memory. I already searched for this issue; it seems to exist on multiple Laravel versions (I tested 5.6, 5.8 and 6.x). The leak seems to be more pronounced when the job to handle holds models with loaded relations.
Steps To Reproduce:
- Call `load` on the User to load the relationship with PasswordReset (a sketch of such a job follows this list).
- Log memory usage in `Illuminate\Queue\Worker`'s `process` method, just after the `try` and just before the `catch` (you can echo the result of `memory_get_usage`, for example).
- Run the `queue:work` command.
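A hypothetical job matching the steps above; the class name and the `passwordResets` relation name are assumptions, only the User and PasswordReset models come from the issue text:

```php
<?php

namespace App\Jobs;

use App\User;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class HandleUserJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $user;

    public function __construct(User $user)
    {
        // The User arrives with its PasswordReset relation already loaded by the dispatcher.
        $this->user = $user;
    }

    public function handle()
    {
        // Intentionally empty: the memory growth is observed in Worker::process around this call.
    }
}
```

Dispatching would then look something like `HandleUserJob::dispatch($user->load('passwordResets'))`.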
References and why am I posting this issue
I know there have been multiple issues about queue memory leaks, such as #25999.
They say "PHP is meant to die" and "just use queue:listen".
But we are using Horizon, and I think we cannot configure the command it runs in its child processes, am I right?
Because of Horizon with 10 child processes, and a queue which runs a lot of jobs, our queue worker dyno is crashing every 12 hours. For the moment, I configured a `horizon:terminate` with a daemon auto restart every hour, but that's not a real solution. Can you help? :)