I am seeing weird behavior in production where sidekiq:sidekiq_unique keys are not always removed after a job completes.
I am running an hourly import job that queues over 1000 jobs to fetch and process data from an API. To prevent multiple workers from processing the same job, I am using sidekiq-unique-jobs with a unique_job_expiration of 1.day.
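For reference, a minimal sketch of the kind of worker configuration described above. The class name, queue name, and perform body are made up for illustration; the unique/unique_job_expiration options match the ones used later in this thread, and 1.day assumes ActiveSupport is loaded (as in a Rails app):

class ImportWorker
  include Sidekiq::Worker

  # unique: true enables sidekiq-unique-jobs for this worker;
  # unique_job_expiration sets how long the lock key lives in Redis (seconds).
  sidekiq_options queue: 'import',
                  unique: true,
                  unique_job_expiration: 1.day.to_i

  def perform(resource_id)
    # fetch and process a single resource from the external API
  end
end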
When I run this on my development machine (OS X), everything is fine. When running in production (Linux), the uniqueness keys are not always removed. This causes import jobs not to run for a whole day.
Normally (and this is what I see on my development machine) the number of sidekiq:sidekiq_unique keys is equal to the number of currently running jobs plus the queue size. When I run the same import in production, I see over 120 sidekiq:sidekiq_unique keys that never get unlocked.
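One way to see that discrepancy from a Rails console is sketched below. The key pattern and queue name are assumptions (the exact pattern depends on the sidekiq-unique-jobs version and on whether redis-namespace is in use, where the keys appear in redis-cli as "sidekiq:sidekiq_unique:*"):

require 'sidekiq/api'

# Compare lock keys against work actually in flight.
Sidekiq.redis do |conn|
  lock_keys = conn.keys('sidekiq_unique:*').size   # assumed key pattern
  queued    = Sidekiq::Queue.new('import').size    # 'import' queue name is assumed
  running   = Sidekiq::Workers.new.size
  puts "locks=#{lock_keys} queued=#{queued} running=#{running}"
end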
My first thought was that this is caused by some worker jobs queueing other worker jobs, but I could also reproduce it in production by performing the same worker multiple times.
At the moment I have no clue what causes this, but maybe someone has the same issue or can provide debugging instructions.
I see the same thing, but on my dev machine as well.
All I have is a simple test worker:
class CountWorker
  include Sidekiq::Worker

  sidekiq_options retry: 3, queue: 'counter'
  sidekiq_options unique: true, unique_job_expiration: 60

  sidekiq_retries_exhausted do |msg|
    # something wrong
  end

  def perform(id)
    sleep(10)
  end
end
and I have a loop that tries to schedule this worker with CountWorker.perform_at(10.seconds, 1) every second. Only the first one is scheduled. In theory, the second one should be scheduled after 10 seconds, since the first one will have finished (sleep 10). But instead, the second one is only queued after 60 seconds.
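A sketch of the loop described above (the exact code was not included in the comment). Sidekiq's perform_at expects a timestamp, so perform_in(10.seconds, 1) is assumed here in place of the perform_at(10.seconds, 1) written above:

# Enqueue the same job once per second for a minute; with unique: true and
# unique_job_expiration: 60, only the first enqueue goes through until the
# lock key expires.
60.times do
  CountWorker.perform_in(10.seconds, 1)  # same argument => same uniqueness key
  sleep 1
end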
An old Sidekiq process was still taking jobs from the queue and failing them. Since that process did not remove the unique keys from Redis, this resulted in the unexpected behavior.
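Once the stale process has been stopped, any orphaned lock keys can be cleared by hand. A minimal sketch, assuming the same key pattern as above (verify the pattern for your sidekiq-unique-jobs version before deleting anything):

# One-off cleanup of leftover uniqueness locks. The key pattern is an
# assumption; inspect conn.keys('sidekiq_unique:*') first.
Sidekiq.redis do |conn|
  conn.keys('sidekiq_unique:*').each { |key| conn.del(key) }
end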