(6.0.7) `uniquejobs:{digest}:AVAILABLE` keys never expire #359

Comments
+1. I just came here to report the same thing. Our Redis on one project ran out of space, and literally 98.6% of the keys are "uniquejobs:…" :)
We're also seeing this: our total Redis key count has been steadily rising since upgrading to v6.0.7. Is this a reasonable way to clean up/mitigate until we can get a patched version out? Pinging @mhenrixon since this is a pretty big issue and can cause Redis instability.

```ruby
# Run from a Rails console (5.seconds needs ActiveSupport).
num = "0"
loop do
  num, keys = Redis.current.scan(num, match: '*uniquejobs*:AVAILABLE', count: 1000)
  puts "Found #{keys.size} keys with cursor #{num}..."
  keys.each do |key|
    next unless key.match?('uniquejobs')
    Redis.current.expire(key, 5.seconds)
  end
  break if num == "0" # SCAN signals a complete iteration by returning cursor "0"
end
```
@jwg2s Deleting the keys immediately might mean the plugin doesn't do what it should, i.e. doesn't ensure jobs are unique. Though I don't know if these specific keys are used for that. What we did was something similar, looping over all those keys, but instead setting an expiration on the keys.
@henrik good point - I've updated my script in my comment to use that strategy instead
@jwg2s great workaround! It would be good to know why these keys get stuck in Redis, though.
The AVAILABLE key becoming available is crucial for new jobs to be able to be scheduled. Them living on forever is most likely a terrible side-effect: fix one bug (jobs can be scheduled properly) and cause another (Redis grows out of proportion). They could be created/added with an expiry. The problem when using Lua is that it might be that the …
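The "created with an expiry" idea could look roughly like this. This is a hypothetical helper written with plain redis-rb calls, not the gem's actual implementation (which does this work inside server-side Lua scripts), and the `make_available` name and list-based layout are assumptions for illustration:

```ruby
# Sketch only: attach a TTL at the moment the AVAILABLE key is written,
# so an orphaned key ages out even if the normal unlock path never runs.
# Hypothetical helper; the real gem performs this server-side via Lua.
def make_available(redis, digest, token, ttl:)
  key = "uniquejobs:#{digest}:AVAILABLE"
  redis.rpush(key, token)  # signal that the lock can be taken again
  redis.expire(key, ttl)   # safety net: the key cannot live forever
  key
end
```

The trade-off mhenrixon hints at is real: if the TTL is shorter than the window in which a waiting job needs to observe the key, the expiry itself can break scheduling.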
@mhenrixon what do you think: is it possible to fix this, or should we live with the workaround that @jwg2s proposed?
I'll try something tomorrow morning @vitalinfo. Will keep you guys posted |
Released with v6.0.8 @vitalinfo @jwg2s @henrik @zvkemp |
@mhenrixon Thanks so much ❤️ Would you recommend we do anything to expire existing keys, or will those automatically be handled when the gem is updated? |
Unfortunately, those keys need to be dropped manually @henrik. I wish I had a better way for you guys, I am working on something but there is so little time to sit down with it and now I have my second daughter in February... |
@mhenrixon No worries, thanks so much for fixing this and for clarifying. I'll write a command for that and share it here. Open source maintenance is so much more work than one might think, and I'm super grateful you decided to share this code – it's helped us a lot :) And congratulations on the second daughter!

EDIT: @jwg2s updated their code snippet above to use `expire`. I had to change … for our setup. Please note that if you rely on very long-lived locks, you may want to tweak the expiration time accordingly. And note that if your Redis is configured to drop keys with an expiration when memory gets full, then you could lose these keys prematurely.
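A cleanup command along these lines might do the job. This is a sketch, not the snippet henrik eventually shared: the `expire_available_keys` helper is a hypothetical name, and it assumes a redis-rb client, whose `scan_each` handles SCAN cursor iteration for you. The `FakeRedis` stub exists only so the loop can be exercised without a live server:

```ruby
# Hypothetical cleanup helper: expire matching keys rather than deleting
# them, so any in-flight lock is not broken immediately. Tune ttl upward
# if you rely on long-lived locks.
def expire_available_keys(redis, pattern: "uniquejobs:*:AVAILABLE", ttl: 5)
  count = 0
  redis.scan_each(match: pattern) do |key|
    redis.expire(key, ttl)
    count += 1
  end
  count
end

# Tiny in-memory stand-in for a Redis client, so the loop above can be
# tried without connecting to a real server.
class FakeRedis
  attr_reader :expired

  def initialize(keys)
    @keys = keys
    @expired = []
  end

  def scan_each(match:, &block)
    # Translate the glob pattern into a regexp ('*' matches anything).
    regex = Regexp.new("\\A" + Regexp.escape(match).gsub('\*', '.*') + "\\z")
    @keys.grep(regex).each(&block)
  end

  def expire(key, _ttl)
    @expired << key
  end
end
```

Against a real deployment you would pass `Redis.new(url: ...)` (or `Redis.current` on older redis-rb) instead of the stub.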
@henrik I updated the script to use `5.seconds` as well, so it should match the new behavior found in v6.0.8.
Hm, that should be fine if locks aren’t needed for longer than 5 seconds. If one has much longer-running jobs then I guess there’s a risk that would stop locking those prematurely – assuming this key is important to keeping things locked. I think each person running it will need to figure out what expiry makes sense for them :)
I assume the lib only sets a 5 sec expiry after a job is done and the lock is no longer needed but I haven’t digested the code fully.
Unfortunately that doesn't seem to have fixed it. One week after updating the gem, there are 200k uniquejobs keys in our Redis.
@blarralde do you restart your workers a lot? If you, for instance, have a pretty big team and do continuous delivery then the restarts might mess things up for you. |
Yes, workers do restart a lot as we scale them up and down. They're only scaled down after jobs finish, though, so they should have time to clean up the queue?
Describe the bug

`uniquejobs:{digest}:AVAILABLE` keys never expire. Given an unbounded set of unique arguments, this will fill up Redis.

Expected behavior

In version 6.0.6, these keys don't outlive the execution of their jobs.
Current behavior

`AVAILABLE` keys are not cleaned up (introduced in #354). If there's a purpose to persisting them, it's not evident (at least in our use case).

Worker class