
:while_executing appears to be broken #159

Closed
dzader opened this issue Feb 2, 2016 · 10 comments

Comments


dzader commented Feb 2, 2016

class Test
  include Sidekiq::Worker
  sidekiq_options unique: :while_executing

  def perform
    ap "WHILE EXECUTING"
    rv = sleep 120
    ap "DONE WHILE EXECUTING slept for #{rv}"
  end
end

Using the above code I ran 10 workers. After one minute the second worker would start before the first one finished. The same was true for a worker using :until_and_while_executing.

redis version 3.0.6
sidekiq 4.1.0
sidekiq-unique-jobs 4.0.13

@mhenrixon (Owner)

Can you replicate it with a test?

@mhenrixon (Owner)

Another question then @dzader. Are you in a distributed setup when this happens?


Fredar commented Feb 15, 2016

Having something similar.
:while_executing together with :until_and_while_executing does not stop the jobs from being started.
Also confirmed that the until part of the second lock type is working correctly.

So far I've switched to :until_executed - that one is working just fine.
Not sure what you mean by a distributed setup.

@mhenrixon (Owner)

@Fredar I mean a distributed redis setup. The gem won't work with that and it has been on my todo list to solve that properly but haven't gotten around to it yet.

Thank you for the added insight. Perhaps removing the :until_and_while_executing lock would be the best approach. I merely added it to showcase how easy it would be to create something custom; someone had some unusual questions and I added it as a suggestion from that discussion.

The original idea was to expose the locks so that custom ones could be created in any project or application using sidekiq-unique-jobs but I never got around to finishing that.


Fredar commented Feb 15, 2016

Yeah, I had figured out you meant Redis, but that still doesn't tell me anything, sorry 😉
Redis is not my strong side yet. I just tested this in development, where I think I run redis-server with defaults.
Can't say how it's run in production either (though it behaved the same there as in dev).

I wouldn't remove :until_and_while_executing; it is really nice logic.
Especially since :while_executing is the only part that is not working there
(and you want to keep the :while_executing logic anyway).

@mhenrixon (Owner)

Thank you for the feedback on keeping the lock @Fredar. Sounds like I should just invest the time to fix it then. Regarding distributed locking, see http://redis.io/topics/distlock. I've had a look at the source code of https://github.com/dv/redis-semaphore, which seems to implement this pattern well.

I haven't taken the time to try it out yet, though.
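For reference, the single-instance locking pattern that distlock page describes (acquire with SET NX/PX, then a compare-and-delete release) can be sketched in plain Ruby. This is a simulation with an in-memory hash standing in for Redis; all class and method names here are hypothetical, and this is not sidekiq-unique-jobs or redis-semaphore code:

```ruby
require 'securerandom'

# In-memory stand-in for a single Redis node running the distlock pattern.
class FakeRedisLock
  def initialize
    @store = {} # key => [owner_token, expires_at_ms]
  end

  # Mirrors `SET key token NX PX ttl_ms`: succeeds only if the key is
  # absent or its TTL has lapsed. Returns the owner token, or nil.
  def acquire(key, ttl_ms)
    now = monotonic_ms
    entry = @store[key]
    return nil if entry && entry[1] > now # lock held and not yet expired
    token = SecureRandom.hex(16)
    @store[key] = [token, now + ttl_ms]
    token
  end

  # Mirrors the Lua compare-and-delete: release only if we still own it.
  def release(key, token)
    entry = @store[key]
    return false unless entry && entry[0] == token
    @store.delete(key)
    true
  end

  private

  def monotonic_ms
    (Process.clock_gettime(Process::CLOCK_MONOTONIC) * 1000).to_i
  end
end
```

The token check on release matters: without it, a client whose lock expired mid-job could delete a lock that now belongs to someone else.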


Slania commented Feb 17, 2016

This issue looks like it is caused by the currently hard-coded 60s run-lock expiration. So if you have jobs that run for more than 60s, the gem will drop the run lock and let another identical enqueued job grab it.

This should be fixed by #164.
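That failure mode can be illustrated with a short sketch using simulated time (hypothetical names, not the gem's actual internals): a run lock with a fixed 60-second TTL is silently lost while a 120-second job is still running, so a duplicate acquires it.

```ruby
# Minimal model of a run lock with a hard-coded TTL. Time is passed in
# explicitly so the race is easy to see; this is a simulation only.
class RunLock
  TTL = 60 # the hard-coded expiration, in seconds

  def initialize
    @locks = {} # key => expiry time (seconds)
  end

  # Succeeds if the key is unlocked or its TTL has already lapsed.
  def try_acquire(key, now)
    return false if @locks.key?(key) && @locks[key] > now # still held
    @locks[key] = now + TTL
    true
  end
end

lock = RunLock.new
lock.try_acquire("WirelessRefreshWorker", 0)  # => true:  job 1 starts (runs 120s)
lock.try_acquire("WirelessRefreshWorker", 30) # => false: lock still valid
lock.try_acquire("WirelessRefreshWorker", 61) # => true:  TTL lapsed, job 2 starts
                                              #           while job 1 is still running
```

Extending the TTL (or refreshing it while the job runs) is what a fix like #164 would need to address.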


tetherit commented May 3, 2016

Doesn't seem to work with the latest from trunk :(

I'm able to enqueue 2 identical tasks, but then it seems to lock only after the first 2 tasks. Very strange, any ideas? -

time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> "7cc2cf58e242d89536c86ebc"
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> "80a8aa5094e883e0dee95291"
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> nil
time-agent(staging)> WirelessRefreshWorker.perform_async('00:03:E8:02:FE:BE', false)
=> nil


lacco commented Jul 22, 2016

I can confirm that this issue is not fixed: there are always exactly 2 identical tasks before the lock actually does its work (and never more than 2). Did somebody find a workaround for now?

This issue seems to be a duplicate of #181.

@mhenrixon (Owner)

I can't replicate this problem against the current master. Having two jobs executing at the same time is quite possible: the job is only unique until it is picked off the queue, so if the first job goes immediately to processing, the second one is allowed to be scheduled as well.
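That lifecycle can be sketched in a few lines (hypothetical names, not the gem's implementation): the queue lock is dropped as soon as the job is fetched, so a duplicate can be enqueued while the first job is still executing.

```ruby
# Simulation of a uniqueness lock that lives only while the job sits
# in the queue, matching the behaviour described above.
class QueueLock
  def initialize
    @locked = {}
  end

  # Enqueue: rejected (nil) if an identical job is already waiting.
  def push(key)
    return nil if @locked[key]
    @locked[key] = true
    key
  end

  # Fetch for processing: the lock is released at pickup time.
  def fetch(key)
    @locked.delete(key)
    key
  end
end

q = QueueLock.new
q.push("worker:args")  # job 1 enqueued
q.push("worker:args")  # => nil, duplicate rejected while job 1 waits
q.fetch("worker:args") # job 1 picked up; lock released
q.push("worker:args")  # job 2 accepted while job 1 is still running
```

Under these semantics, seeing two identical jobs run concurrently is expected; preventing that overlap is what a working :while_executing run lock would add.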


5 participants