[5.3] Queue worker memory leak #16783
Comments
Does this also happen when using a different driver? Beanstalkd, Redis, database?
I think I know what it is, but want to wait a few hours watching the metrics before reporting back with 100% certainty.
Ok, I've solved the problem, which was caused by my own stupidity, but it still should not have led to a memory leak. As I said, I haven't sent out any jobs yet. I was setting up a server for an upcoming project I'm working on and wanted to get infrastructure like supervisor in place. Although I had set the queue driver to 'sqs', I had not filled in the credentials. After setting the keys and other bits for 'sqs', there was no more memory leak. I still think this shouldn't happen even if the 'sqs' details are incorrect, so I leave it up to you to decide whether the issue should be closed or not.
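For anyone else who lands here with the same misconfiguration: the "keys and other bits for 'sqs'" refers to a connection block roughly like the one below in config/queue.php. This is approximately what Laravel 5.3 ships with; the values are placeholders, not taken from this thread.

```php
// config/queue.php -- illustrative 'sqs' connection block, values are placeholders
'sqs' => [
    'driver' => 'sqs',
    'key'    => 'your-public-key',
    'secret' => 'your-secret-key',
    'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-account-id',
    'queue'  => 'your-queue-name',
    'region' => 'us-east-1',
],
```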
You should lower the memory flag that tells Laravel when to restart the queue worker. That flag exists by design, because long-running workers will leak memory.
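For reference, a minimal sketch of what that limit amounts to, assuming the worker compares PHP's own usage counter against the value passed via --memory and exits so the process manager can respawn it. The function below is illustrative, not the framework source; exact details vary by version.

```php
<?php
// Illustrative sketch of the --memory check, not the framework's actual code.

$memoryLimitMb = 128; // default value of the --memory option

function memoryExceeded(int $limitMb): bool
{
    // memory_get_usage() only counts PHP's own allocations, which is why this
    // number can sit well below the RSS that top/htop report for the process.
    return (memory_get_usage() / 1024 / 1024) >= $limitMb;
}

// Checked between jobs: when it trips, the worker exits and supervisor is
// expected to start a fresh process.
if (memoryExceeded($memoryLimitMb)) {
    exit(0);
}
```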
I think it was going way above 128MB, otherwise my server would not have run out of memory...
@GrahamCampbell After I configured my SQS correctly it didn't go up for half a day, but then it slowly started creeping up again. I wanted to see if the workers would restart when reaching the default 128MB, but as you can see in the screenshot I took today, it's not happening. The workers are now at 16.1%, 15.6%, and 15.4% (I have four running). The server has a total of 1GB, so 16% is over 164MB. Why do they not restart? This is my config, and below is a screenshot showing the memory on my server.
Try lowering the limit. What PHP measures internally for the worker is not the same as the memory usage you see from outside the process.
@GrahamCampbell I don't understand. As you can see, I lowered it to 64MB. Let's see if they will restart.
@GrahamCampbell But isn't 164MB way over where a restart is due, even if there is a difference in what PHP reports? I still don't get what you are trying to tell me. I understand that how the worker measures its memory usage might be different from what I see in the console with top. But what is the solution? Because I'm running out of RAM. Why is there such a big discrepancy? I got a couple of alarms today from AWS that my server is running low on memory.
I realize this issue is closed but I am experiencing this problem too. I am running PHP 7.1.2 on Ubuntu 16.04 and CentOS (CentOS Linux release 7.3.1611 (Core)) systems, and I see a drastic difference between what top/htop/ps aux all report versus what PHP's own memory_get_usage() reports (way less). Thus, my system runs out of memory while the processes themselves think they're well under the 128MB limit. I am not sure if this is a PHP internals issue or what. I will say my current workaround is a scheduled hourly soft restart with this in Console/Kernel.php:

```php
// Used to combat memory creep issues that I can't solve otherwise at this moment.
$schedule->command('queue:restart')->hourly();
```
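For context, a minimal sketch of where that line lives, assuming a stock app/Console/Kernel.php (only the schedule method is shown):

```php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Soft restart: workers finish their current job, exit, and are brought
        // back up by supervisor, releasing whatever memory had crept up.
        $schedule->command('queue:restart')->hourly();
    }
}
```

Remember that the scheduler itself needs the usual `* * * * * php artisan schedule:run` cron entry for this to fire.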
This is most definitely an unresolved bug.
+1 unresolved bug!
I've got the same issue.
5.3 is not supported. Raise an issue against a fresh Laravel version.
+1 unresolved bug.
Also hitting this memory leak issue with Laravel 5.6 and PHP 7.2 on CentOS 7.
Also having the same issue. +1 unresolved bug!
+1
+1 ... as soon as I call a page that uses the database, memory usage increases by 200MB/s. It also happens with
This issue still persists in 5.5. Even after using memory_get_usage(true), the PHP process is at 197.266 MB while I am using the default value Laravel configured (128MB).
I'm having the same problem with AWS and workers. The Laravel server just uses more and more memory until it fails. I have no idea what the issue is. All of the cron jobs are queued up via HTTP requests from the AWS worker environment, so they should all be closing and clearing memory after each one completes.
Running into memory issues after upgrading PHP from 7.0 to 7.2 (Laravel 5.4). Perhaps a GC issue? I assume this is related to the memory flag. It would be nice if the documentation was updated with a description of how this flag works. Does it kill the process mid-way, or does it just stop it between jobs?
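To the last question: as far as I can tell the limit does not kill a job mid-way; the memory check runs after a job completes, so the worker stops between jobs and supervisor restarts it. A rough sketch of the loop shape below; getNextJob() is a hypothetical stand-in and this is not the framework's actual daemon code.

```php
<?php
// Simplified, illustrative loop shape of a daemon queue worker.

while (true) {
    $job = getNextJob(); // hypothetical helper: pop the next job off the queue

    if ($job !== null) {
        $job->fire(); // the current job always runs to completion
    }

    // The memory check only happens here, between jobs, so exceeding the limit
    // never interrupts a job mid-way; the worker exits and supervisor restarts it.
    if ((memory_get_usage(true) / 1024 / 1024) >= 128) {
        exit(0);
    }
}
```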
I just noticed the same thing for the Redis driver and Laravel 5.7. Memory usage goes up about 1MB every 10-25 seconds. Thankfully
The problem here is that the calculated memory usage always returns the wrong value. What I am doing is running artisan queue:restart every 20 minutes, as we have some memory-intensive jobs, and it's working well so far.
See also laravel/horizon#375. So far I've seen no one who experiences this actually sit down and properly debug it.
This is usually the case when there's a leak in some extension/third-party code which does not count towards PHP's own memory, likely because it uses its own allocator. But that's just a guess; you didn't really provide much information (and maybe this issue isn't the best place to debug your code).
This sounds like it increases with every processed job? It's just a guess, but it sounds like that. Only you are able to diagnose it, as you wrote the job/code.
An interesting solution was posted fairly recently at laravel/ideas#1380 (comment); you can add it yourself for the time being if it solves your issue. Just trying to help here, not dissing your problems 😄
How about this solution: https://medium.com/@orobogenius/laravel-queue-processing-on-shared-hosting-dedd82d0267a
Also experiencing this issue. @mfn's suggestion above to use [...] helps. I have another workaround that is helpful for anyone using [...]. In [...]
I run a single queue worker on my production server (EC2, Amazon Linux, nginx, PHP70) with supervisor.
The supervisor config is:
The PHP process then slowly starts eating up memory, and after 3-4 days the server runs out of memory and becomes unresponsive.
I'm not even running jobs yet! It's just idle. I'm tracking the memory usage now and can see that it slowly and steadily goes up.