Problem with releasing uniquejobs locks after timeout expires #169
Comments
Thanks for reporting! I never thought that part of the code would be an issue. I'll make sure both calls are always made.
Fixed by c22a5a3
Thank you very much! I was waiting for this.
+1
This is unfortunately not solving our problem...
Bah, I know why; sorry about that, I forgot about the check for the key's existence.
The fix doesn't actually solve my problem either. Should this issue be reopened?
Any idea how to fix this, @mhenrixon?
Original issue description:
The redis/aquire_lock.lua script sets two keys in Redis for each unique job; the first has an expiration time:
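In sketch form, this is the pattern being described (a reconstruction from the description above, not the gem's verbatim script; the key names, arguments, and exact commands are assumptions):

```lua
-- Hypothetical sketch of the acquire pattern described above.
local unique_key = KEYS[1]          -- per-job lock key
local job_id     = ARGV[1]
local expires    = tonumber(ARGV[2])

if redis.pcall('GET', unique_key) == false then
  -- Key 1: the lock itself, stored with an expiration time.
  redis.pcall('SETEX', unique_key, expires, job_id)
  -- Key 2: a bookkeeping entry in the uniquejobs hash, with no TTL.
  redis.pcall('HSET', 'uniquejobs', job_id, unique_key)
  return 1
end
return 0
```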
The redis/release_lock.lua script contains logic to delete the same two keys:
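Again as a hedged sketch, using the same assumed names (the real script may differ in detail):

```lua
-- Hypothetical sketch of the release pattern described above.
local unique_key = KEYS[1]
local job_id     = ARGV[1]

-- If unique_key has already expired, the first pcall (GET) returns
-- false, the guard fails, and the HDEL that would remove the hash
-- entry never runs.
if redis.pcall('GET', unique_key) == job_id then
  redis.pcall('DEL', unique_key)            -- key 1: the expiring lock
  redis.pcall('HDEL', 'uniquejobs', job_id) -- key 2: the hash entry
end
```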
However, if the job has already expired by the time this release_lock script is called, then the first redis.pcall will return false and the second one never gets executed. This causes the uniquejobs hash to keep some entries forever, so it just gets bigger and bigger; in my case it was consuming all available memory and causing Sidekiq to reject all additional jobs even though the underlying queues were apparently empty.

One simple fix may be to always remove the key from the uniquejobs hash before testing whether the timed-out key can also be deleted, but I'll leave it to someone who understands the locking mechanism better to decide whether it's sound:
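In terms of the sketch above, that reordering might look like this (again an illustration, not a tested patch):

```lua
-- Hypothetical sketch of the suggested fix: remove the hash entry
-- unconditionally, so an already-expired lock key can no longer
-- leave an orphaned entry behind.
local unique_key = KEYS[1]
local job_id     = ARGV[1]

redis.pcall('HDEL', 'uniquejobs', job_id)   -- always clean up key 2
if redis.pcall('GET', unique_key) == job_id then
  redis.pcall('DEL', unique_key)            -- key 1, only if still held
end
```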
My workaround (to avoid having to change the library) is to add a configuration setting to increase the value of the default timeout:
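For example, something along these lines in a worker (the option names here are illustrative assumptions and may not match the gem's actual API for your version):

```ruby
# Hypothetical sketch: raise the lock timeout well past the longest
# expected job duration, so release_lock normally runs before the
# lock key expires. The option names are assumed, not verified.
class MyWorker
  include Sidekiq::Worker
  sidekiq_options unique: true,
                  unique_job_expiration: 2 * 60 * 60 # seconds (assumed option)

  def perform(*args)
    # ...
  end
end
```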
P.S. The correct spelling of 'aquire' is 'acquire'.