Laravel Version
v9.50.1

PHP Version
8.0.23

Database Driver & Version
MariaDB

Description

This topic was previously discussed in #24636 and #35199.
I am using the database queue backend and ended up exceeding the size of the attempts column. I have $tries set on the job, but I am also using retryUntil().
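For context, the attempts column created by the stock queue:table migration is an unsigned tinyint in recent Laravel releases, so on MySQL/MariaDB it cannot hold values above 255. A sketch of the relevant fragment (a project's generated migration may differ slightly):

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::create('jobs', function (Blueprint $table) {
    $table->bigIncrements('id');
    $table->string('queue')->index();
    $table->longText('payload');
    $table->unsignedTinyInteger('attempts'); // overflows once attempts passes 255
    $table->unsignedInteger('reserved_at')->nullable();
    $table->unsignedInteger('available_at');
    $table->unsignedInteger('created_at');
});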
I would expect both rules to be applied (retryUntil() and tries), but I see that it was instead agreed to silently ignore tries. IMHO, if you are going to ignore a setting, you should at least throw an error.
However, and more importantly, why isn't the job moved to failed_jobs once the exception is triggered? I assume this is because jobs are only moved to failed_jobs when the job itself fails, not when the "management" of the job fails. But the current behavior essentially blocks the queue from processing jobs, which is IMHO the worst possible outcome.
I have not dug in deeply enough to confirm this, but it seems that retryUntil is persisted in the DB, so I cannot add logic in retryUntil() to check the tries myself.
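As far as I can tell, the deadline is captured into the job payload at dispatch time. A quick way to check on the database driver (a sketch only; the retryUntil payload key is what recent Laravel versions appear to use and may differ by version):

use Illuminate\Support\Facades\DB;

// Inspect the payload of a queued job on the database driver.
$payload = json_decode(DB::table('jobs')->value('payload'), true);

// Unix timestamp of the deadline, or null when retryUntil() is not defined.
dump($payload['retryUntil'] ?? null);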
So for now I guess my only option is to remove retryUntil, which is suboptimal: I am using a middleware to handle rate limiting (via Spatie's RateLimitedMiddleware) with exponential backoff times, and I do want to know if the job ends up taking too long. Because of the exponential backoff, it is tricky to set a low fixed number of tries that corresponds to that time period.
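For reference, the job in question is wired up roughly as follows. This is only a sketch: Laravel's built-in Illuminate\Queue\Middleware\RateLimited stands in here for the Spatie middleware, and the limiter name, backoff values, and deadline are illustrative.

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\RateLimited;
use Illuminate\Queue\SerializesModels;

class CallExternalApiJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Ignored by the worker as soon as retryUntil() is defined.
    public $tries = 50;

    public function middleware(): array
    {
        // 'external-api' is an illustrative limiter name; the real setup
        // uses the Spatie rate-limiting middleware instead.
        return [new RateLimited('external-api')];
    }

    // Exponential backoff between attempts, in seconds (example values).
    public function backoff(): array
    {
        return [60, 300, 900, 3600];
    }

    // Hard deadline after which the job should stop being retried.
    public function retryUntil(): \DateTime
    {
        return now()->addHours(12);
    }

    public function handle(): void
    {
        // Call the rate-limited external service here.
    }
}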
Steps To Reproduce
class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 50; // silently ignored because retryUntil() is also defined

    public function retryUntil(): \DateTime
    {
        return now()->addDays(7); // any deadline far enough away for attempts to pass 255
    }

    public function handle(): void
    {
        throw new \RuntimeException('fail so the job keeps being retried');
    }
}
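Dispatch the job and keep a worker running against the database connection. Because retryUntil() is far in the future and $tries is ignored, a handle() that keeps throwing (or a rate limiter that keeps releasing the job) pushes attempts past 255, at which point the database driver can no longer write the value and the worker errors out instead of moving the job to failed_jobs. Nothing project-specific is assumed below:

// dispatch once, e.g. from a route, a console command, or tinker
MyJob::dispatch();

Then run a worker until the attempts value overflows:

php artisan queue:work database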
We'll need more info and/or code to debug this further. Can you please create a repository with the command below, commit the code that reproduces the issue as one separate commit on the main/master branch and share the repository here? Please make sure that you have the latest version of the Laravel installer in order to run this command. Please also make sure you have both Git & the GitHub CLI tool properly set up.
laravel new bug-report --github="--public"
Please do not amend and create a separate commit with your custom changes. After you've posted the repository, we'll try to reproduce the issue.