redis script cache gets bloated after update to bull 2.0 #426
can you check what entries are there in the cache? There must be a spurious entry that, for some reason, is not being reused but is created anew every time it is used.
@manast I've added some console logs to the script caching function. Do you have a preferred way to analyse the Redis Lua script cache?
@manast can it be that …
eval is not caching the script, so I don't think that is the reason for the problem. I am not aware of any other command in Redis besides SCRIPT LOAD that caches scripts. So if you have time, you could put a console.log just before calling load, and you should be able to see which script is being cached continuously: https://redis.io/commands/script-load
You're right. I added a console.log in my dev environment. Thanks for helping
The major new thing in 2.0 is the redlock algorithm; I would bet that is the root of the problem. @doublerebel any ideas?
I think you're right @manast @doublerebel
when the old locking system was replaced by redlock, many other changes were performed, so it is not easy to replace it. I will investigate why the script cache is increasing when I have some time; there could be an issue in the redlock dependency as well: https://github.com/mike-marcacci/node-redlock
I have analysed the code, and I cannot see anything in redlock or in bull that would currently cache a lot of scripts. Redlock actually runs eval directly (which is not as performant; I would like to improve this in the future), and in bull, the lock-related code does not use any scripts itself; it is delegated to redlock. If you put a console.log here https://github.com/OptimalBits/bull/blob/master/lib/scripts.js#L46 outputting the hashes, you should only see a few of them (correct behaviour); otherwise you will see a lot of them.
From the Redis documentation https://redis.io/commands/eval: "The EVAL command forces you to send the script body again and again. Redis does not need to recompile the script every time as it uses an internal caching mechanism, however paying the cost of the additional bandwidth may not be optimal in many contexts."
@manast well, my Redis DB holds around 1GB of data but reached 15GB of memory; that's a pretty big overhead in my opinion, and it is a risk in a production environment since there is no clear limit on it. I think I found the issue: Line 328 in f1ea81c
Here you call redlock passing a locking Lua script that varies per job. This is probably what's bloating the Redis script cache.
without checking redlock, I guess that line is there because it is not currently possible to specify extra keys to be used by redlock. That would certainly need a change in redlock as well, to be merged and released before we can fix it in bull... |
Sounds good. At least it seems to explain what I'm seeing. I'll see if I can prepare something for graylog. Thanks for this awesome lib btw, I truly like it.
Even if the 15GB is scary, it may not be a problem if Redis just reuses that memory when needed. I checked the redis.conf file, but unfortunately there is no setting to limit the maximum amount of Lua cache. Maybe @antirez could give some explanation of this behaviour.
I agree that it would not be scary if the memory were recycled. I checked for that parameter as well. I have the feeling that changing the script each time is against Redis's design.
We are seeing this as well, our …
…ts#426 NOTES: this will still violate the Redis Cluster rule that each script explicitly define the keys it writes to in the KEYS array; however, the current implementation also does this, and this should fix the growing lua_memory_cache issue.
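For reference, the cluster-friendly call shape the commit note alludes to might look like this (a sketch with hypothetical names, not the PR's actual code): the lock key is declared in KEYS so Redis Cluster can route the command, and the per-job token and TTL travel in ARGV so the script source, and therefore its cache entry, never changes:

```javascript
// Hypothetical sketch (not the PR's actual code). The lock key is declared
// in KEYS, so Redis Cluster can route the command, and the per-job token
// and TTL go in ARGV, so the script source stays constant across jobs.
// The "bull:lock:" key prefix is an assumption for illustration.
const lockScript =
  'if redis.call("SET", KEYS[1], ARGV[1], "PX", ARGV[2], "NX") then return 1 else return 0 end';

function buildEvalArgs(jobId, token, ttlMs) {
  // EVAL <script> <numkeys> <key ...> <arg ...>
  return ['EVAL', lockScript, '1', `bull:lock:${jobId}`, token, String(ttlMs)];
}

console.log(buildEvalArgs('42', 'some-token', 5000));
```

Because `lockScript` is a fixed string, every call maps to the same cached script on the server, regardless of how many distinct job IDs pass through.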
I think we have fixed this in our pull request. The issue is on Line 328 in f1ea81c
We experience this issue as well, with about 600k jobs a day. We use Redis 3.0.7 on Ubuntu 14.04. After 7 days of use, Redis memory is around 8GB; the data is less than 1GB, and the rest is reported as Lua script cache.
Any update on this issue? |
we have a PR that solves this issue but there is another issue with that PR that needs to be resolved before it can be merged. |
That's what the memory on my Redis instance (that backs Bull) looks like. We are running about 1 million messages a day through the Bull queues. The server has 16GB of memory and loses about 3GB a day to the Lua cache.
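The memory numbers reported in this thread can be read straight out of Redis's INFO output, which exposes the Lua cache size in the `used_memory_lua` field of the memory section. A small parsing sketch (the sample INFO string here is made up for illustration):

```javascript
// Sketch: pull the Lua script cache size out of the raw text returned by
// Redis's INFO command. The `used_memory_lua` field (memory section)
// reports the bytes consumed by cached scripts.
function luaCacheBytes(infoOutput) {
  const match = infoOutput.match(/^used_memory_lua:(\d+)/m);
  return match ? Number(match[1]) : null;
}

// Made-up sample of INFO output, for illustration only.
const sampleInfo =
  'used_memory:1073741824\r\n' +
  'used_memory_lua:16106127360\r\n';

console.log(luaCacheBytes(sampleInfo)); // → 16106127360
```

Polling this value over a day or two makes the growth rate (and whether a fix has worked) easy to verify.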
@jeltok a fix for this issue is the next thing on my list.
Fix Lock Redis script to not be a different script for every Job ID #426
@jeltok can you please verify that the issue is resolved with bull 2.2.0 so that we can close it? |
hmm, ok, I will need to dig more into it. |
Thanks! Just switched from kue, and other than this it's been working great! Let me know if you need anything 😄 |
@jeltok I would say nothing at all :). I will try to do a deeper analysis. |
Ok, I have created a custom bull-redlock package and fixed one last issue. The queue works pretty well now without leaks. I will make a new release in the following hours. |
Sorry for the noise, just wanted to send some gratitude to @manast for fixing this issue. |
Hello,
I'm experiencing a growing Lua script cache after updating to bull 2.0.
After 20 days of uptime, my Redis Lua cache has grown to 15GB.
Is anyone experiencing the same issue?
Thanks