Why is lock_timeout: nil VERY DANGEROUS? #313

Closed
tomafc330 opened this issue Aug 1, 2018 · 2 comments

Comments


tomafc330 commented Aug 1, 2018

I'm using a Sidekiq cron plugin that schedules a job every 5 minutes, but I want that job to be unique (i.e. only one instance ever runs at a time), so naturally I'm thinking of using while_executing. However, the README has this comment:

sidekiq_options lock_timeout: nil # lock indefinitely, this process won't continue until it gets a lock. VERY DANGEROUS!!

I also see that later in the README this configuration is used as an example:

sidekiq_options lock: :while_executing, retry: false, lock_timeout: nil
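
For reference, my worker looks roughly like this (the class name and job body are just placeholders; the sidekiq_options line is copied from the README example):

class SyncAccountsWorker
  include Sidekiq::Worker

  sidekiq_options lock: :while_executing, retry: false, lock_timeout: nil

  def perform
    # placeholder for long-running work that must never run concurrently
    Account.find_each(&:sync!)
  end
end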

It seems that multiple instances of the job are still running at the same time when I restart the Sidekiq server. Do I need to add the lock_timeout: nil config to this? Or should I be using until_and_while_executing instead? If the former, is it still very dangerous to use?

TIA

Tommy

mhenrixon (Owner) commented:

Good question @tommytcchan. WhileExecuting only locks in the server process, at the point where the server picks the job off the queue. This means that any other server process that attempts to run a job with the same unique digest will wait for however long you specified in lock_timeout. It therefore appears to allow multiple jobs to run simultaneously, but in fact the other processes are only waiting for the first job to finish.

Ask yourself whether you really want to push unlimited duplicates and only work them off one at a time, or whether you would rather use until_executed or until_and_while_executing instead, which limit these problems somewhat by restricting the number of jobs that can be enqueued.
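
To make the difference concrete, a rough sketch of the two configurations (exact behavior depends on your version of the gem):

# while_executing: any number of duplicates can be enqueued; server processes
# serialize execution by waiting on the runtime lock (up to lock_timeout).
sidekiq_options lock: :while_executing, retry: false, lock_timeout: nil

# until_and_while_executing: duplicates are rejected while one is already
# queued, and execution is also serialized once a job starts running.
sidekiq_options lock: :until_and_while_executing, retry: false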

tomafc330 (Author) commented:

Okay, got it @mhenrixon -- I'll try out until_and_while_executing!
