Refactor Model Architecture #1055
Conversation
I'm taking a pause at 25% (26/104 files viewed) so that I can give you the feedback I collected so far.
OK, I'm up to 40/104 files viewed; I need to pause here as it's 11:30pm already...
I was hoping to finish today but I only got to 52/105 files viewed and it's past 11pm. Weekend coding sucks when one has young kids at home... 😞
Almost there (85/111 files viewed). I didn't want to push it as it's getting late and I need a fresh mind to look over app/Models/Photo.php and such...
    $this->extractReferenceFromRaw($original->full_path, $original->width, $original->height);
} elseif ($this->photo->isVideo()) {
    if (empty($this->photo->aperture)) {
        Logs::error(__METHOD__, __LINE__, 'Media file is reported to be a video, but aperture (aka duration) has not been extracted');
I've been wondering for a while if we aren't being overly restrictive about this. If the duration is unknown, wouldn't it be better to simply extract the first frame (position 0) rather than bailing out?
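Just to illustrate the idea, here is a sketch of the suggestion, not code from the PR; it reuses the `aperture` duration field and `extractReferenceFromVideo()` from the surrounding snippets:

```php
// Sketch only: instead of logging an error and bailing out when no duration
// has been extracted, fall back to the first frame (position 0).
$framePosition = 0.0;
if (!empty($this->photo->aperture)) {
    // Duration is known ("aperture" stores the video duration), so keep the
    // current behaviour and pick a frame from roughly the middle of the video.
    $framePosition = floatval($this->photo->aperture) / 2;
}
$this->extractReferenceFromVideo($this->photo->full_path, $framePosition);
```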
I guess we could do that. Shall we? I'm open to anything.
Interestingly, if `extractReferenceFromVideo(string $fullPath, float $framePosition)` is called with a frame position from the middle of the video and fails, then it tries to extract a frame at position 0 as a recovery strategy:
$video = $ffmpeg->open($fullPath);
try {
$this->extractFrame($video, $framePosition);
} catch (\RuntimeException $e) {
Logs::notice(__METHOD__, __LINE__, 'Fallback: Try to extract snapshot at position 0');
$this->extractFrame($video, 0);
}
No strong opinion in my case.
    throw new \RuntimeException($errMsg);
}
if (Configs::get_value('lossless_optimization')) {
    ImageOptimizer::optimize($this->referenceFullPath);
I'm assuming you inherited this code, but it strikes me as completely unnecessary to optimize a temporary image (it is deleted as soon as the thumb and small variants have been extracted). If anywhere, I would expect it in the raw handler, not in videos. Or am I missing something?
Yes, I inherited the code and had the very same thought. Moreover, I have wondered whether the optimization does any good at all or actually has the opposite effect. Normally, "optimization" means throwing away information (e.g. smoothing things out). But we use that temporary image as an interim image for further scaling and cropping. So maybe we should not optimize the image here, but only the final (scaled) result.
Anyway, I am not the original author of that code, so maybe @ildyria can shed some light on this. He is the author of the original code.
#659 :)
> we should not optimize the image here, but only the final (scaled) result
The image optimization is lossless and requires additional software.
I do agree that only the final scaled result should be optimized.
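For the record, a sketch of what "only optimize the final scaled result" could roughly look like. `$scaledVariantPaths` is a made-up variable standing for the files produced by the scaling step, while `Configs::get_value()` and `ImageOptimizer::optimize()` are the helpers from the snippet above:

```php
// Sketch: skip the temporary reference image entirely and only run the
// lossless optimizer on the final, scaled size variants once they exist.
if (Configs::get_value('lossless_optimization')) {
    foreach ($scaledVariantPaths as $path) {
        ImageOptimizer::optimize($path);
    }
}
```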
 * IMHO, there is an amazing number of places which somehow deal with
 * "mime type-ish" sort of values with subtle differences.
 *
 * TODO: Thoroughly refactor this.
➕
Yes, that is another PR. I fear it will never end. 🥺
public function save(array $options = []): bool
{
    $result = false;
    $retryCounter = 5;
This limit is a new thing, isn't it? On the one hand, I agree that looping forever was poor style. On the other hand, is 5 large enough? Keep in mind that for users with 32-bit IDs, the IDs have a full-second resolution, so they are very likely to conflict when uploading multiple photos at a time...
Yes, the limit is a new thing to avoid infinite loops. I expect five to be sufficiently large. The IDs are time-based, i.e. strictly monotonic and not random (so the birthday paradox does not apply).
Moreover, PHP is single-threaded within the same request. Hence, if a user uploads or imports a set of photos within a single request, then we have no problem even if the user has a high-speed connection and several photos are imported during the same second. If PHP sleeps for one second, then the whole sequence of photos is delayed. (In this case, a limit of 1 should actually be enough.)
We only run into problems if multiple users (or the same user) upload photos in parallel requests. Honestly, I have no gut feeling for what an appropriate limit would be in this case.
> Moreover, PHP is single-threaded within the same request. Hence, if a user uploads or imports a set of photos within a single request, then we have no problem even if the user has a high-speed connection and several photos are imported during the same second. If PHP sleeps for one second, then the whole sequence of photos is delayed. (In this case, a limit of 1 should actually be enough.)
Actually no, the JS front end sends multiple requests to the server at the same time for a single user; that is why you sometimes see 4 images being processed in parallel (and it is a good thing as long as you don't hit a memory limit).
Though I do expect the limit of 5 to be sufficiently large. :)
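For readers skimming this thread, a sketch of what such a bounded retry around a time-based ID can look like. This is an illustration only, not the code from the PR; `generateID()` is a hypothetical placeholder for whatever assigns the time-based ID:

```php
public function save(array $options = []): bool
{
    $retryCounter = 5;
    while (true) {
        try {
            // Hypothetical helper: assigns a fresh time-based ID before persisting.
            $this->generateID();
            return parent::save($options);
        } catch (\Illuminate\Database\QueryException $e) {
            // A duplicate-key violation means another request generated the
            // same time-based ID within the same second.
            if (--$retryCounter <= 0) {
                throw $e;
            }
            // Wait for the next second so that the generated ID changes.
            sleep(1);
        }
    }
}
```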
Done reviewing (finally)!
So where is the corresponding front end PR? I see the compiled changes here, and overall I must say that they seem surprisingly minor. I guess much of the work was done with the previous PR that introduced size variants to the front end? Unless that part is still WIP?
 * ensures that the string is stored correctly formatted at the DB right
 * from the beginning and then simply return the stored string instead of
 * re-format the string on every fetch.
 * TODO: Refactor this.
Well, there is an argument to be made that there is no such thing as "stored correctly formatted" because it will depend on localization. But the current code ignores this issue as well, of course...
That's inherited code. (I just improved the PHPDoc comment.) You are probably right about localization, but that's another topic. When I inherited the code, I just wondered why we do the conversion from an (arbitrary) rational fraction to a unit fraction every time the data is fetched from the DB. Why don't we do it once when we store the data and then just serve it directly from the database on fetch?
Indeed it would be nicer to optimize this conversion upon original upload instead of every time the data is fetched.
But for now I would leave it as a todo. :)
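As a note for that todo, one possible shape for the "convert once on store" idea is an Eloquent mutator. The attribute name `shutter` and the exact formatting rule are assumptions for illustration only:

```php
// Hypothetical mutator: normalize e.g. "10/1250" to the unit fraction "1/125"
// once when the value is stored, so that fetching can return the string as-is.
public function setShutterAttribute(?string $value): void
{
    if ($value !== null && preg_match('/^(\d+)\/(\d+)$/', $value, $matches)) {
        $numerator = (int) $matches[1];
        $denominator = (int) $matches[2];
        if ($numerator > 1 && $denominator > $numerator) {
            $value = '1/' . (int) round($denominator / $numerator);
        }
    }
    $this->attributes['shutter'] = $value;
}
```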
LGTM.
@nagmat84 please check if you are happy with the few very minor changes I made.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
We have getEffectiveSorting[Column|Order] instead.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
Updated Summary Nov 28th 2021
This is the merger of the previous PR "Refactor photo model" and PR #1101 "Refactor album model". See PR #1101 for what has changed there.
Old Summary (of "Refactor photo model" only)
As previously discussed on Gitter, this PR intends to rectify two aspects:

- […] `size_variants` and are independent of each other, i.e. each has its own file path, its own size attributes and so on. The previously awkward mutual dependencies are gone.
- […] `big` or that certain files have `@2` in their filename.

Unfortunately, the amount of things which needed to be reworked became much larger than expected. In detail, the following aspects have been changed:

- […] `SizeVariantNamingStrategy`. At the moment the only implementing class is `SizeVariantLegacyNamingStrategy`, which mimics the old behaviour (a rough sketch of this strategy split follows after this list).
- […] `SizeVariantFactory`. At the moment the only implementing class is `SizeVariantDefaultFactory`, which mimics the old behaviour.
- […] `SizeVariantDefaultFactory`. Previously, the same code (with minor deviations) existed several times (for uploading/creating new photos, for re-creating missing size variants by the console command, etc.).
- […] `StrategyPhoto` and `StrategyDuplicate`. The code to handle the upload of live photos had been scattered in between. As this part needed to be completely reworked anyway (due to the changes to how the size variants are created), there are now four strategies which cleanly encapsulate the upload of live photos, too.
- […] `predelete` on these entities in order to properly clean up the actual files, followed by `delete`. Now it is safe to directly call `delete` and the classes will take care of proper deletion.
- […] `save` method instead of `myEntity->save` when an entity had to be persisted for the first time. The former created a time-based ID and took care of possible duplicates; the latter would have created incrementing IDs, as is the default for Laravel. The ID handling has been moved to a trait `HasTimeBasedID` which is used by entities with time-based IDs. Now it is safe to call the usual `myEntity->save`.
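To make the strategy split above a bit more tangible, here is a condensed sketch. The interface style, method names (`createShortPath`, `createSizeVariants`) and signatures are illustrative assumptions, not necessarily what the PR actually implements:

```php
// Illustrative sketch only: the real abstractions in the PR may differ in
// naming and signatures. The idea is that both the naming and the creation
// of size variants become pluggable strategies instead of being hard-coded
// throughout the code base.
interface SizeVariantNamingStrategy
{
    /** Returns the relative file path for one size variant of the given photo. */
    public function createShortPath(Photo $photo, int $sizeVariantType): string;
}

interface SizeVariantFactory
{
    /** Creates all missing size variants (thumb, small, medium, ...) for the given photo. */
    public function createSizeVariants(Photo $photo): void;
}
```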