
Potential Double Processing of SQS Jobs (and other drivers?) #19969

Closed
aarondfrancis opened this issue Jul 9, 2017 · 3 comments

Comments

@aarondfrancis
Contributor

  • Laravel Version: 5.4.20
  • PHP Version: 7.1.0

Description:

I think I've stumbled onto an issue where a queue can process the same job twice, due to this line in the queue worker: https://github.com/laravel/framework/blob/5.4/src/Illuminate/Queue/Worker.php#L144.

Specifically, it's because $options->sleep is added to the timeout there.
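The arithmetic behind the bug can be sketched like this (a Python illustration with made-up helper names, not Laravel's actual Worker code): with the extra sleep term, the worker's process-kill deadline becomes timeout + sleep, which can exceed the SQS visibility timeout even when timeout alone is safely below it.

```python
# Hypothetical sketch of the timeout arithmetic described above.
# These helpers are illustrative only; they are not part of Laravel.

def kill_deadline(timeout, sleep, include_sleep=True):
    """Seconds after job pickup at which the worker process is killed.

    With the "+ $options->sleep" behavior, the deadline is timeout + sleep;
    without it, the deadline is just timeout.
    """
    return timeout + sleep if include_sleep else timeout

def double_processed(visibility_timeout, timeout, sleep, include_sleep=True):
    """True if the job becomes visible again (and can be picked up by a
    second worker) before the first worker is killed."""
    return kill_deadline(timeout, sleep, include_sleep) > visibility_timeout

# The setup from the reproduction steps below: visibility 15s,
# job timeout 10s, worker --sleep=10.
print(double_processed(15, 10, 10))                       # with "+ sleep": True
print(double_processed(15, 10, 10, include_sleep=False))  # without it: False
```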

Steps To Reproduce:

  1. Create an SQS Queue with a default visibility timeout of 15s
  2. Create a TestJob with a $timeout of 10s, where handle is:
for ($i = 1; $i <= 17; $i++) {
    echo "Slept for $i\n";
    sleep(1);
}

echo "Done!\n";
  3. Add one TestJob to the queue.
  4. Start two queue workers with --sleep=10
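For reference, the steps above can be run with commands along these lines (assuming a stock Laravel 5.4 app with an sqs connection configured in config/queue.php; the job's $timeout property supplies the 10s timeout):

```shell
# Dispatch one TestJob, then start two workers in separate terminals.
php artisan queue:work sqs --sleep=10   # terminal 1
php artisan queue:work sqs --sleep=10   # terminal 2
```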

You'll see one worker chug through and end with:

Slept for 17
Done!

And you'll see that, around second 15, the second worker picked up the same job the first worker was still processing.

If you remove the + $options->sleep from the Worker, then the first worker gets killed (correctly, IMO) at 10s and the second picks the job up 5s later when Amazon makes it visible again.

The docs clearly and correctly state that

The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.

(Of course, SQS doesn't use retry_after, but rather the visibility timeout.)

In this case I followed the warning in the docs and made my timeout of 10s several seconds shorter than my retry_after (visibility timeout) of 15s, yet was still bitten by this. This isn't a theoretical problem: I actually had a few jobs processed twice because of it. It's not a catastrophic issue, but I do think fixing it could make Laravel even better.

At the very least, the docs should warn that the sleep value will be added to your timeout value and you should set your retry_after value several seconds longer than the sum of those two values.
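That suggested rule of thumb can be sketched as a simple check (again, a hypothetical Python helper for illustration, not anything in Laravel):

```python
def config_is_safe(retry_after, timeout, sleep, buffer=3):
    """With the current Worker behavior (kill deadline = timeout + sleep),
    retry_after (the SQS visibility timeout) must exceed timeout + sleep
    by a few seconds' buffer to rule out double processing."""
    return retry_after >= timeout + sleep + buffer

# The setup from this issue: 15s visibility, 10s timeout, 10s sleep.
print(config_is_safe(retry_after=15, timeout=10, sleep=10))  # False: at risk
# Raising visibility above timeout + sleep + buffer makes it safe.
print(config_is_safe(retry_after=25, timeout=10, sleep=10))  # True
```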

@themsaid
Member

Thanks, I've submitted a PR that removes the extra time so that people following the docs won't face this issue: #19978

@aarondfrancis
Contributor Author

Awesome, that was fast! Thanks @themsaid / @taylorotwell

@themsaid
Member

Glad it got resolved :)
