:while_executing appears to be broken #159
Comments
Can you replicate it with a test?
Another question then @dzader. Are you in a distributed setup when this happens?
Having something similar. So far switched to a different lock type.
@Fredar I mean a distributed redis setup. The gem won't work with that and it has been on my todo list to solve that properly, but I haven't gotten around to it yet. Thank you for the added insight. Perhaps removing the :while_executing lock is an option. The original idea was to expose the locks so that custom ones could be created in any project or application using the gem.
Yeah, I had figured out you meant redis, but that still doesn't tell me anything, sorry 😉 I wouldn't remove :while_executing.
Thank you for the feedback on keeping the lock, @Fredar. Sounds like I should just invest the time to fix it then. Regarding distributed locking, see http://redis.io/topics/distlock. I've had a look at the source code of https://github.com/dv/redis-semaphore, which seems to implement this pattern well. I haven't taken the time to try it out yet, though.
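The single-node primitive that distributed-lock libraries like redis-semaphore build on can be sketched in plain Ruby. This is an illustrative simulation only (an in-memory Hash stands in for Redis, and the class and method names are invented for this sketch, not taken from the gem):

```ruby
# Sketch of the locking primitive behind Redis-based locks:
# atomic acquire-if-absent with a TTL (Redis: SET key token NX PX ttl),
# and release via compare-and-delete so only the owner can unlock.
# In-memory simulation; names are illustrative, not the gem's API.
require 'securerandom'

class SimpleLock
  def initialize(clock: -> { Time.now.to_f })
    @store = {}   # key => [token, expires_at]
    @clock = clock
  end

  # Returns an ownership token on success, nil if the lock is held
  # and has not yet expired.
  def acquire(key, ttl:)
    now = @clock.call
    entry = @store[key]
    return nil if entry && entry[1] > now  # held and unexpired
    token = SecureRandom.hex(8)
    @store[key] = [token, now + ttl]
    token
  end

  # Releases only if the caller still owns the lock (token matches),
  # so a stale owner cannot delete a lock someone else reacquired.
  def release(key, token)
    entry = @store[key]
    return false unless entry && entry[0] == token
    @store.delete(key)
    true
  end
end
```

The token check on release matters: without it, a job whose lock expired mid-run could delete the lock a second job legitimately acquired.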
This issue looks like it is because of the currently hard-coded 60s run lock expiration. So if you have jobs that run for more than 60s, the gem will drop the run lock and let another identical enqueued job grab it. This should be fixed by #164.
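The race described above can be shown with a tiny simulation: a job that runs longer than the fixed TTL silently loses its lock, so a duplicate can acquire it mid-run. This is an illustrative sketch with a simulated clock, not the gem's actual internals:

```ruby
# Illustrates the hard-coded 60s run-lock expiry race: the lock store
# maps a job digest to an expiry time, and anything past that time is
# treated as free even if the original job is still executing.
# Pure-Ruby simulation; the key format is invented for this sketch.
LOCK_TTL = 60

locks = {}  # digest => expiry timestamp (seconds on a simulated clock)

acquire = lambda do |key, now|
  return false if locks[key] && locks[key] > now  # still held
  locks[key] = now + LOCK_TTL
  true
end

key = 'WirelessRefreshWorker:args-digest'
first = acquire.call(key, 0)    # job starts at t=0, lock until t=60
racer = acquire.call(key, 30)   # duplicate at t=30: correctly blocked
late  = acquire.call(key, 90)   # t=90: first job still running, but the
                                # lock expired at t=60, so this succeeds
# first => true, racer => false, late => true
```

A fix along the lines of #164 would either extend the lock while the job is alive or size the TTL to the job's real runtime.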
Doesn't seem to work with the latest from trunk :( I'm able to add 2 identical tasks, but then it seems to lock only after the first 2 tasks. Very strange, any ideas?

time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> "7cc2cf58e242d89536c86ebc"
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> "80a8aa5094e883e0dee95291"
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> nil
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> nil
I can confirm that this issue is not fixed: there are always exactly 2 identical tasks before the lock actually does its work (and never more than 2). Did somebody find a workaround for now? This issue seems to be a duplicate of #181.
I can't replicate this problem against the current master. Having two jobs executing at the same time is quite possible. The job is only unique until it is picked off the queue, so if the first job goes immediately to processing, the second one would be allowed to be scheduled as well.
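That "unique only until picked off the queue" behavior can be sketched as a queue whose uniqueness digest is dropped at dequeue time, before the job even runs. This is an illustrative model of the semantics being described, not the gem's implementation:

```ruby
# Sketch of "unique until dequeued": a digest of worker + args guards
# enqueueing, but is released the moment the job is popped for
# processing, so an identical job may be enqueued while the first
# one is still executing. Names here are invented for this sketch.
require 'digest'

class UniqueQueue
  def initialize
    @jobs = []
    @digests = {}
  end

  # Returns the digest on success, nil when an identical job is
  # already waiting in the queue.
  def push(worker, args)
    digest = Digest::MD5.hexdigest("#{worker}:#{args.inspect}")
    return nil if @digests[digest]   # duplicate still queued: rejected
    @digests[digest] = true
    @jobs << [worker, args, digest]
    digest
  end

  # Dequeues the oldest job and releases its uniqueness digest
  # immediately, i.e. before the job has finished (or even started).
  def pop
    job = @jobs.shift
    @digests.delete(job[2]) if job
    job
  end
end
```

Under these semantics, preventing concurrent *execution* (as opposed to concurrent enqueueing) requires a separate run lock held for the job's duration, which is exactly what :while_executing is meant to provide.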
Using the above code I ran 10 workers; after 1 minute the second worker would start before the first one finished. The same was true for a worker using :until_and_while_executing.

redis 3.0.6
sidekiq 4.1.0
sidekiq-unique-jobs 4.0.13