Laravel Version
10.10.1
PHP Version
8.1.18
Database Driver & Version
Ver 8.0.33-0ubuntu0.22.04.2 for Linux on x86_64 ((Ubuntu))
Description
For a project that deals with an unstable and inconsistent external API, I'm running a lot of jobs to fetch and process the data. This results in many failed jobs; I learn from the error messages, fix the processing code to make it more robust, and requeue everything that failed. Until the next iteration...
Yesterday I suddenly started receiving:
SQLSTATE[23000]: Integrity constraint violation: 1062
Duplicate entry '2158e3e7-78ef-4bc8-aa80-3e40f616cf06' for key 'failed_jobs.failed_jobs_uuid_unique'
(Connection: mysql, SQL: insert into `failed_jobs`
(`uuid`, `connection`, `queue`, `payload`, `exception`, `failed_at`)
values (
2158e3e7-78ef-4bc8-aa80-3e40f616cf06, database,
process-json,
{"uuid":"2158e3e7-78ef-4bc8-aa80-3e40f616cf06","displayName":"App\\Jobs\\ProcessJson\\ProcessProposals","job":"Illuminate\\Queue\\CallQueuedHandler@call","maxTries":null,"maxExceptions":null,"failOnTimeout":false,"backoff":null,"timeout":null,"retryUntil":null,"data":{"commandName":"App\\Jobs\\ProcessJson\\ProcessProposals","command":"O:37:\"App\\Jobs\\ProcessJson\\ProcessProposals\":2:{s:12:\"subdirectory\";s:13:\"proposals\/527\";s:5:\"queue\";s:12:\"process-json\";}"}},
Illuminate\Queue\TimeoutExceededException: App\Jobs\ProcessJson\ProcessProposals has timed out. in /home/forge/reto.ncpflanders.be/vendor/laravel/framework/src/Illuminate/Queue/Worker.php:793 Stack trace: #0
...
I'm using the database queue driver and have 10 active queue workers for the process-json queue. There has already been a massive number of failed jobs, but I can't really assess whether any of this is causing the duplicate key issue.
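For context, this is roughly how the jobs end up on that queue; a simplified sketch rather than my exact code (the job class and constructor argument are taken from the serialized payload above, everything else is illustrative):

<?php

use App\Jobs\ProcessJson\ProcessProposals;

// One job per subdirectory, pushed onto the dedicated queue.
// Every dispatch builds a fresh payload, including its "uuid" field.
ProcessProposals::dispatch('proposals/527')->onQueue('process-json');

// Each of the 10 workers runs something along the lines of:
//   php artisan queue:work database --queue=process-json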
I'm not sure whether this issue is specific to my application (in the way the jobs are picked up by the workers?) or whether it is a framework issue (I would assume duplicate UUIDs should be impossible, so maybe something is wrong with the mechanism that generates them?). But I thought it would be good to report it anyhow, just in case this is indeed an issue.
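For what it's worth, a random collision from the UUID generator itself seems very unlikely to me: as far as I understand it, every dispatched job gets a freshly generated v4 UUID in its payload, conceptually something like this (illustrative, not the framework's actual code):

<?php

use Illuminate\Support\Str;

// Conceptually: the payload stored in the jobs table carries a fresh UUID,
// and that same UUID is what later ends up in failed_jobs when the job fails.
$payload = [
    'uuid'        => (string) Str::uuid(), // new random v4 UUID per dispatch
    'displayName' => \App\Jobs\ProcessJson\ProcessProposals::class,
    // ...
];

If that's accurate, a duplicate key would mean the same failed payload was recorded twice (for example a timed-out attempt plus a retry of that same payload), rather than two different jobs receiving the same UUID.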
Steps To Reproduce
I don't know; I just encountered it in production, sorry. But I'm more than willing to add extra logging or other code to my jobs if that would help debug this issue.
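As a starting point, this is the kind of logging I could add; a minimal sketch assuming a listener on the JobFailed event (the log message and context keys are just illustrative):

<?php

use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;

// e.g. in AppServiceProvider::boot(): record the UUID and queue of every
// failing job so duplicate inserts into failed_jobs can be correlated.
Queue::failing(function (JobFailed $event) {
    Log::warning('Job failed', [
        'uuid'       => $event->job->uuid(),
        'queue'      => $event->job->getQueue(),
        'connection' => $event->connectionName,
        'exception'  => $event->exception->getMessage(),
    ]);
});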
@driesvints I don't have time until next week to look at the issue in detail. However, I can already confirm that I'm in a similar situation with jobs that time out. And they were set to only 1 (re)try when the error occurred. So my quick guess is that it's indeed an identical issue.
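For completeness, the relevant configuration in my case looks roughly like this; a hypothetical job class, not my actual code, with the values from memory:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Hypothetical example class illustrating the tries/timeout combination.
class ExampleTimeoutJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // A single attempt: once the timeout below is exceeded, the worker
    // marks the job as failed instead of releasing it for another try.
    public $tries = 1;

    // Maximum runtime in seconds before TimeoutExceededException is raised.
    public $timeout = 60;

    public function handle(): void
    {
        // long-running work against the external API ...
    }
}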