[FR] Split large volume jobs into separate queue jobs #402
Comments
hmmmm. This seems like an environmental thing, perhaps? In theory, CLI-run PHP tasks should generally not have a max execution time involved?
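For reference, PHP's CLI SAPI defaults max_execution_time to 0 (no limit), so a quick sanity check of what a given environment actually reports can be run as a one-off script:

```php
<?php
// Quick environment check: under PHP's CLI SAPI, max_execution_time
// defaults to 0 (no limit). Note that a platform-level process watchdog
// (e.g. a 20-minute cap enforced outside PHP) would NOT show up here.
var_dump(PHP_SAPI, ini_get('max_execution_time'));
// Typical CLI output: string(3) "cli", string(1) "0"
```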
Definitely environmental. It’s on Fortrabbit so it’s not configurable, unfortunately. It seems to manage circa 500–600 entries and then fails, in my experience.
hmmmm. I'm not sure how I feel about this; it's an externally imposed limitation that is normally configurable. Even if we did break it into paginated queue jobs, we'd be guessing at what the page size should be to ensure that it won't time out (which will depend on a number of factors, such as image size, number of variants, etc.). Have you tried contacting them to see if they can increase or remove this limitation?
Fair feedback. I was actually thinking of the existing page size (i.e. 100). I’ll raise it with Fortrabbit and see what they say, but I can’t imagine them removing that constraint.
I wonder if this could help: Big job batching, craftcms/cms#12638. Looks like it would then only be for Craft 4.4+. What version of Craft are you using?
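For context, the batching feature referenced above lets a job extend craft\queue\BaseBatchedJob and declare how to load and process items in slices; each batch then runs as its own queue job. A minimal sketch, assuming Craft 4.4+; the class, property, and description names are illustrative, not ImageOptimize's actual implementation:

```php
<?php
// Minimal sketch of a batched job on Craft 4.4+ (the feature from
// craftcms/cms#12638). Names here are illustrative assumptions.

use craft\base\Batchable;
use craft\db\QueryBatcher;
use craft\elements\Asset;
use craft\queue\BaseBatchedJob;

class ResaveVolumeAssetsJob extends BaseBatchedJob
{
    public ?int $volumeId = null;

    // Each batch runs as its own queue job, so a timeout only fails
    // (and retries) one slice of the work instead of the whole volume.
    public int $batchSize = 100;

    protected function loadData(): Batchable
    {
        // QueryBatcher adapts an element query so the runner can take
        // counted slices without loading everything at once.
        return new QueryBatcher(
            Asset::find()->volumeId($this->volumeId)->kind('image')
        );
    }

    protected function processItem(mixed $item): void
    {
        // Re-save the asset so its image variants get regenerated.
        \Craft::$app->getElements()->saveElement($item);
    }

    protected function defaultDescription(): ?string
    {
        return 'Re-saving volume images in batches';
    }
}
```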
This site is currently on 4.8.7.
fortrabbit co-founder here. Sorry for being defensive: our 'externally imposed limitations' are at least designed with good intentions. We have a couple of them, and my experience is that they help prevent 'incorrect setups' in most cases. Is this limitation about deployment? Twenty minutes for a deployment is a long time; too long, we think. We also have a 20-minute limit on SSH connections for similar reasons. Usually a misconfiguration is the cause, which can be resolved together with the client in support. We will be in contact with Josh through our client support, see what we can do about this case, and of course share here if new ideas for ImageOptimize come to light.
@frank-laemmer Discussed via the support ticket. For posterity, in case anyone else comes across this: the time constraints are implemented to avoid abuse of the hosting platform. @khalwat I resolved the immediately affected website manually (i.e. ran the blocking job locally and uploaded the database). Am I right in thinking that swapping to the new batched jobs feature would address this?
Sure, it makes sense as a default, but a way to override or change it when the client has extraordinary needs might be helpful.
Great, let me know!
Well, it will also require a bump in the minimum version of Craft that can use the code (it would need to be Craft 4.4 or later).
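One hypothetical way to soften that requirement would be a runtime gate rather than a hard minimum-version bump. In this sketch, ResaveVolumeAssetsJob is the batched job sketched earlier, and LegacyResaveJob is a made-up stand-in for a plugin's existing pre-4.4 job:

```php
<?php
// Hypothetical runtime gate: push the batched job on Craft 4.4+, fall
// back to a single legacy job otherwise. Both class names are assumed.

function pushResaveJob(int $volumeId): void
{
    $job = version_compare(\Craft::$app->getVersion(), '4.4.0', '>=')
        ? new ResaveVolumeAssetsJob(['volumeId' => $volumeId])
        : new LegacyResaveJob(['volumeId' => $volumeId]);

    // PHP only autoloads the branch that actually runs, so referencing
    // a 4.4-only base class stays safe on older installs.
    \Craft::$app->getQueue()->push($job);
}
```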
Is your feature request related to a problem? Please describe.
For volumes with thousands of images, we often run into timeouts.
The queue job then fails, which means we have to restart the whole process, ending up in a loop.
Describe the solution you would like
Ideally, instead of chunked/paged queues within a single queue job, it’d be great for each batch/page to be split into its own queue job. That way it’s easy to restart just the one failed batch/page of 100.
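A rough sketch of that request for Craft versions before 4.4, where BaseBatchedJob is not available: a small dispatcher counts the assets and pushes one independent job per page of 100, so a timeout fails (and retries) only that page. All names here are hypothetical, not the plugin's actual code:

```php
<?php
// One self-contained queue job per page of assets (hypothetical names).

use craft\elements\Asset;
use craft\queue\BaseJob;

class ResaveAssetPageJob extends BaseJob
{
    public ?int $volumeId = null;
    public int $offset = 0;
    public int $limit = 100;

    public function execute($queue): void
    {
        $assets = Asset::find()
            ->volumeId($this->volumeId)
            ->kind('image')
            ->offset($this->offset)
            ->limit($this->limit)
            ->all();

        $total = count($assets);
        foreach ($assets as $i => $asset) {
            $this->setProgress($queue, $total ? $i / $total : 1);
            // Re-save the asset so its image variants get regenerated.
            \Craft::$app->getElements()->saveElement($asset);
        }
    }

    protected function defaultDescription(): ?string
    {
        return sprintf('Re-saving assets %d-%d', $this->offset, $this->offset + $this->limit);
    }
}

// Dispatcher: count once, then queue an independent job per page, so a
// timeout kills only that page instead of the whole volume.
function queueResavePages(int $volumeId, int $pageSize = 100): void
{
    $total = Asset::find()->volumeId($volumeId)->kind('image')->count();
    for ($offset = 0; $offset < $total; $offset += $pageSize) {
        \Craft::$app->getQueue()->push(new ResaveAssetPageJob([
            'volumeId' => $volumeId,
            'offset' => $offset,
            'limit' => $pageSize,
        ]));
    }
}
```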