It's difficult to understand if a very long timeout in fetch could stall the extraction worker's progress #23
Comments
Debugging journal:
Found that there's potentially a problem with better-queue's maxTimeout: diamondio/better-queue#81
I did an experiment. I pushed 6 tasks to the queue, where the second task takes a very long time. I found that the second task did not stall the queue as long as the concurrency was greater than 1. This makes sense: with a concurrency of two we have two workers that can execute tasks in parallel, so if one worker gets blocked on a long task, the other can keep executing the remaining tasks.

Screen.Recording.2022-07-03.at.11.33.20.PM.mov
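The experiment above can be reproduced without better-queue itself. Below is a minimal, self-contained sketch of a concurrency-limited queue (`makeQueue` and the delay values are hypothetical, not better-queue's API): 6 tasks are pushed, task 2 is slow, and the completion order shows whether the slow task stalls the rest.

```javascript
// Minimal sketch of a concurrency-limited task queue (hypothetical,
// NOT better-queue's actual implementation).
function makeQueue(worker, concurrency) {
  const pending = [];
  const results = [];
  let active = 0;
  let resolveDone;
  const done = new Promise((r) => (resolveDone = r));

  function next() {
    // Start tasks while we have free worker slots.
    while (active < concurrency && pending.length) {
      const task = pending.shift();
      active++;
      worker(task).then((res) => {
        results.push(res);
        active--;
        if (!pending.length && active === 0) resolveDone(results);
        else next();
      });
    }
  }

  return {
    push(task) { pending.push(task); next(); },
    done, // resolves with completion order once the queue drains
  };
}

// Worker: task 2 is "slow" (200ms); every other task takes ~10ms.
const delays = { 2: 200 };
const worker = (id) =>
  new Promise((r) => setTimeout(() => r(id), delays[id] ?? 10));

async function run(concurrency) {
  const q = makeQueue(worker, concurrency);
  for (let i = 1; i <= 6; i++) q.push(i);
  return q.done;
}

run(1).then((order) => console.log("concurrency 1:", order)); // [1,2,3,4,5,6]
run(2).then((order) => console.log("concurrency 2:", order)); // [1,3,4,5,6,2]
```

With concurrency 1 everything waits behind task 2; with concurrency 2 the second worker drains tasks 3 through 6 while the first is blocked, and task 2 simply finishes last, matching the experiment.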
Yes, but I'm outlining the problem where we have a concurrency of e.g. 200 parallel workers, and over time, even while the non-problematic tasks aren't blocking the queue, more than 200 problematic tasks can clog it up. Think about it this way: we have 20000 tasks to execute, of which 200 take e.g. 5 minutes to clear. If those 200 bad tasks are spread over the 20000 good tasks, there is a good chance the queue ends up clogged and not running at full concurrency. Hence, allowing timeouts to be configured so that uneconomic tasks are ended more efficiently would be a good thing.
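One way to end such uneconomic tasks, independent of better-queue's own maxTimeout behaviour, is to race each task against a deadline so a stuck task rejects and frees its worker slot. A minimal sketch (the `withTimeout` helper and the delay values are hypothetical; note that for a real `fetch` you would additionally pass an `AbortController` signal, since losing the race does not cancel the underlying request):

```javascript
// Hypothetical helper: reject if the task outlives its deadline,
// so the worker slot is released instead of clogging the queue.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: a "bad" task that would run far too long is cut off early.
const slowTask = () => new Promise((r) => setTimeout(r, 300));
withTimeout(slowTask(), 50)
  .then(() => console.log("finished"))
  .catch((err) => console.log(err.message)); // prints "timed out after 50ms"
```

With 200 bad tasks spread over 20000 good ones, cutting each bad task off after a short deadline keeps all workers cycling through good tasks instead of sitting blocked for 5 minutes each.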
/cc @il3ven