"For jobs scheduled in the future it is possible to set for how long the job should be unique. The job will be unique for the number of seconds configured (default 30 minutes) or until the job has been completed. Thus, the job will be unique for the shorter of the two."
The SETEX doesn't care about the job finishing, and if the args stay the same, the same hash will be used to look up the lock. So if you set a job to be unique for 30 minutes but it finishes in one, how would it get enqueued again?
Trying this out a bit more now. If I have some jobs that quickly fail and exhaust their retries, the unique key is gone afterwards, before its timeout would indicate. So something DELs the key, I guess; it's just not obvious from an initial scan of the code or from the README.
Finding 'deletes uniqueness lock on delete' in the specs helps.
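To make the "shorter of the two" behavior concrete, here is a minimal sketch (not the gem's actual code) of a SETEX-style uniqueness lock that is also deleted when the job completes. An in-memory hash stands in for Redis, and the class and method names (`UniquenessLock`, `acquire`, `release`) are illustrative assumptions:

```ruby
require "digest"
require "json"

# Sketch of a SETEX-style uniqueness lock. A plain Hash of
# key => expiry-time simulates Redis keys with TTLs.
class UniquenessLock
  def initialize
    @store = {}
  end

  # SETEX equivalent: refuse if a live (unexpired) lock already exists,
  # otherwise store the key with an expiry `ttl` seconds from now.
  def acquire(klass, args, ttl)
    key = digest(klass, args)
    return false if @store[key] && @store[key] > Time.now
    @store[key] = Time.now + ttl
    true
  end

  # DEL equivalent: run when the job completes (or is deleted from the
  # queue). This is why the lock lasts for the *shorter* of the TTL and
  # the job's actual lifetime.
  def release(klass, args)
    @store.delete(digest(klass, args))
  end

  private

  # The same worker class + args always digest to the same key,
  # so a re-enqueue with identical args hits the same lock.
  def digest(klass, args)
    Digest::MD5.hexdigest([klass, args].to_json)
  end
end

lock = UniquenessLock.new
lock.acquire("MyWorker", [1], 1800) # => true, lock taken
lock.acquire("MyWorker", [1], 1800) # => false, still locked
lock.release("MyWorker", [1])       # job finished after one minute
lock.acquire("MyWorker", [1], 1800) # => true, can enqueue again
```

Without the `release` on completion, the second enqueue would indeed be blocked for the full 30 minutes, which is the confusion the question above raises.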