
until_and_while_executing + sidekiq retry mechanism #395

Closed
jchatel opened this issue May 18, 2019 · 5 comments

jchatel commented May 18, 2019

This is more of a question that I didn't see addressed in the doc.

By default, Sidekiq will retry a failed job with an incremental (exponential backoff) delay.

I'm using this config:

    sidekiq_options lock: :until_and_while_executing,
                    lock_expiration: 60 * 60,
                    unique_across_queues: true,
                    unique_args: ->(args) { [args.first] },
                    on_conflict: :reject

...to ensure:

at most one job is enqueued (to prevent accumulation, since I enqueue this job whenever an action happens)
and at most one job is executing at a time (I need to process a send queue one item at a time, with a delay between items).
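The deduplication in the config above comes from the unique_args lambda, which keys the lock on the first job argument only. A minimal pure-Ruby sketch of its effect (the argument values are made up for illustration):

```ruby
# The lambda from the sidekiq_options above: the lock key is derived
# from the first argument only, so two pushes for the same record with
# different payloads are treated as duplicates of one another.
unique_args = ->(args) { [args.first] }

unique_args.call([42, "payload-a"]) # => [42]
unique_args.call([42, "payload-b"]) # => [42]  (same lock key as above)
```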

How does until_and_while_executing work with Sidekiq's retry mechanism?

Should I assume that the retry set is just another Sidekiq queue?

If a job fails and goes to Sidekiq's retry set, does the lock still apply, so that I can't push another similar job to a normal queue? Does that hold for at most one hour because of the lock expiration? If so, I may have to wait up to an hour (lock_expiration) before a job is processed, since the retry delay (with exponential backoff) can end up being 24 hours.

If I remove the unique_across_queues: true argument, does that mean I could end up with plenty of similar jobs in Sidekiq's retry set, of which I assume only one will execute at a time (or does executing a retry bypass the lock)?

Are those assumptions correct?

Thanks

@mhenrixon
Owner

Actually... unique_across_queues is only relevant if you schedule the same job to multiple queues and want to enforce uniqueness across all of them.

retry and schedule are not regular queues, but the job should stay locked while it is there too.
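To make the unique_across_queues point concrete, here is a hedged sketch of the idea (not the gem's real digest implementation): the lock key normally includes the queue name, so the same job pushed to two queues gets two separate locks; setting unique_across_queues: true leaves the queue out of the key. SendQueueWorker and the key format are made up for illustration.

```ruby
require "digest"
require "json"

# Illustrative lock-key builder. With across_queues: false the queue name
# is part of the key, so "default" and "critical" jobs lock independently;
# with across_queues: true the queue is omitted and they share one lock.
def lock_digest(klass, args, queue: nil, across_queues: false)
  parts = { class: klass, args: args }
  parts[:queue] = queue unless across_queues
  Digest::MD5.hexdigest(parts.to_json)
end

a = lock_digest("SendQueueWorker", [42], queue: "default")
b = lock_digest("SendQueueWorker", [42], queue: "critical")
c = lock_digest("SendQueueWorker", [42], queue: "default",  across_queues: true)
d = lock_digest("SendQueueWorker", [42], queue: "critical", across_queues: true)

a == b # => false: per-queue locks differ
c == d # => true: one shared lock across queues
```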

@mhenrixon
Owner

This will be solved by #402

@mhenrixon mhenrixon self-assigned this Jun 15, 2019
@mhenrixon mhenrixon added this to the V7.0 milestone Jun 15, 2019
@abankspdx

How does lock_expiration interact with failures/retries? That is, if a job goes to the retry queue while holding a lock, but the lock_expiration has elapsed, does the lock get released anyway? Or does lock expiration only apply to successfully completed jobs?

@mhenrixon
Owner

@AlexanderBanks v6 set the expiration when the lock was done executing. In v7 (after a lot of requests) I reverted to the v5 behavior of setting the expiration at the time the lock is first created.

That way, the lock is removed either when the worker is done working or when the expiration kicks in, whichever comes first.
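An illustrative timeline comparing the two expiration strategies described above, in plain Ruby (the gem actually uses Redis key TTLs; the durations here are made up):

```ruby
LOCK_TTL = 60 * 60 # one hour, matching the issue's lock_expiration

created_at  = Time.now
finished_at = created_at + 120 # suppose the job runs for two minutes

v7_expiry = created_at  + LOCK_TTL # v5/v7: TTL counts from lock creation
v6_expiry = finished_at + LOCK_TTL # v6: TTL counts from end of execution

v6_expiry > v7_expiry # => true: v6 kept the lock around longer
```

Under the v7 behavior, a lock can therefore expire even while the job is still sitting in the retry set, releasing the slot for a fresh job.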

See #571 for information about how to upgrade. Closing this for now as I believe @jchatel has already upgraded to v7.

@jchatel
Author

jchatel commented Jan 22, 2021

yep
