duplicate jobId #1318
Since you have "removeOnComplete" set to true, the job is removed as soon as it completes, so a new job with the same jobId can be added to the queue afterwards. To avoid this problem, keep a sane number of completed jobs in the queue; you can also keep them based on time.
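The time- and count-based retention mentioned above can be expressed with BullMQ's object form of `removeOnComplete` instead of `true`. A minimal sketch (the queue name, job data, and retention numbers below are illustrative, not from the issue):

```javascript
const { Queue } = require('bullmq');

// Hypothetical queue name for illustration.
const queue = new Queue('resource-updates');

async function enqueue() {
  await queue.add('sync', { resourceId: 42 }, {
    jobId: 'resource-42',
    // Instead of `true`, keep completed jobs around for a while so that
    // re-adding the same jobId is still detected as a duplicate:
    removeOnComplete: {
      age: 3600,   // remove completed jobs older than 1 hour (seconds)
      count: 1000, // and keep at most 1000 completed jobs
    },
  });
}
```

With this configuration a completed job lingers in Redis for up to an hour, so a second webhook arriving shortly after completion cannot enqueue a duplicate.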
Hi @Alrz-a, I recommend you take a look at our new debounce logic, which is available since v5.11.0.
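As a rough sketch of that debounce option (names and values here are illustrative; newer BullMQ releases expose the same idea under the name "deduplication", so check the docs for your version):

```javascript
const { Queue } = require('bullmq');

// Hypothetical queue name for illustration.
const queue = new Queue('resource-updates');

async function enqueueDebounced() {
  // While a job with this debounce id exists (or until the ttl expires),
  // further adds with the same id are ignored rather than duplicated.
  await queue.add('sync', { resourceId: 42 }, {
    debounce: { id: 'resource-42', ttl: 5000 }, // ttl in milliseconds
  });
}
```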
Thank you very much. This feature makes things so much easier for us!
Hello,
In our app, we receive webhooks informing us about a change in some external resource. We then have to fetch some data and send a request to change the resource. The fetched data indicates whether we have already changed the resource or not.
The problem starts when we receive two webhooks simultaneously for the same resource (this happens sometimes). The app fetches the data twice, sees both times that the resource needs changing, and runs the side effect twice for the same resource.
To combat this behavior, we decided to let workers execute this job (we are already using bullmq in other parts of the project, so it seemed an easy solution). After receiving a webhook, we add a job to the queue with a custom jobId and removeOnComplete: true. The jobId is built from a mixture of resource attributes that ensures its uniqueness for each resource. This way, if two webhooks arrive at the same time, only one job is added to the queue.
ISSUE: We get errors saying that we tried to change the resource twice. In our logs we see that two jobs with the same id existed. Both jobs were picked up by the worker; whichever changed the resource first would complete, and the other would receive an error for trying to change the already-changed resource. I was unable to reproduce this problem elsewhere.
These are three lines from our logs. The jobId and a uid are logged on the first line of the worker (add-metafields worker). We log again at the end of the worker (add-metafields end). It takes 2 to 3 seconds for a job to complete. You can see that one of the jobs finished while the other threw an error, and that they both have the same jobId.
Is there a problem with what we provide as the jobId? Do you have any idea why this would happen?