
Option to disable image processing #41

Open
KaMyKaSii opened this issue Sep 27, 2024 · 10 comments
Labels
enhancement New feature or request

Comments

@KaMyKaSii

It is useful for those like me who do image processing with external tools like Image Magick and want to use your excellent tool just to package the final ebook file.

@celogeek
Owner

hey,

I have 2 options:

-noresize
-nofilter

Together they disable all processing.

I need to recheck whether I keep the original untouched or not.

In any case, if you exceed the size of the SR profile (1200x1920) and convert to mobi or upload via AWS Send to My Kindle, Amazon will apply its own resize processing to the image.

If your images are simply smaller than that profile and you don't want to apply any filter, then -nofilter is the way to go.

If needed, I may add a "raw" mode that keeps the original file. I think I currently read/decode/encode into the format you prefer (could be JPG at 100% quality or less), so compression is still applied.

Will check that.

@celogeek celogeek added the enhancement New feature or request label Oct 4, 2024
@KaMyKaSii
Author


I tested both parameters, but options like format and quality are still applied, as you said. I would like a parameter that avoids all types of image processing and keeps the files intact. Could you add it?

About the maximum resolution supported by Send to Kindle: I generated some test images (pure noise) in ImageMagick at different resolutions, one smaller than the limit in both dimensions, one at exactly the maximum resolution, one larger only in width, one larger only in height, and one larger in both dimensions. I converted them to epub with the command:

go-comic-converter -titlepage 0 -input "/home/linux/teste-conversao-amazon/6ª" -limitmb 200 -strip -nofilter -noresize -format png -quality 100

In the azw3 file delivered by Amazon to my Paperwhite 4, I noticed that every image except the one smaller than the limit in both dimensions was resized (keeping the aspect ratio) to fit the 1072x1448 screen, yet no image ended up with a width of exactly 1072 or a height of exactly 1448. So I imagine Amazon produces more than one conversion and the Kindle only downloads the version with the resolution closest to its native one. What do you think?
files.zip

@celogeek
Owner

celogeek commented Oct 9, 2024

The epub-to-mobi conversion you can run locally doesn't take the device as a parameter. To me, that means the maximum allowed resolution is the one from the SR profile.
Maybe if you use the website, they reduce it further because of the target device you choose?
What if you pick the cloud without any device, then download it from your device?

@celogeek celogeek added the in progress I'm working on it label Oct 12, 2024
@celogeek
Owner

I've checked how to get the original image directly into the final epub.
The way the system works right now doesn't allow that easily.

It works like a pipeline:

  • read dir/cbr/cbz/pdf
  • stream decoded images
  • apply filters
  • encode the image into the zip file based on the format param: png, jpeg

So it can read a lot of different sources and then encode to the format we choose, at the quality set for jpg. PNG is a lossless format, so the quality setting doesn't apply to it.

I need to reorganize the project into more functional pieces that can be used independently.

For most formats I can extract the raw bytes directly and decode them later if needed.
The issue is the PDF module: it only exposes the decoded version of the image, not the raw one.

I may create a dedicated streamer for the raw format.

Well, I need a reorg, so it may take some time.

@celogeek celogeek removed the in progress I'm working on it label Oct 27, 2024
@ssbroad

ssbroad commented Jan 3, 2025

Using the original images without any processing (e.g. quality compression, image cropping) will significantly reduce the program's processing time. We just need to pack the original image and then write the CSS according to the image size.

Let the image compression processing task be handed over to the Amazon cloud.

I'm looking forward to the release of the original version!

@celogeek
Owner

celogeek commented Jan 3, 2025

This requires heavy changes in the stack.
I need to split the code into small autonomous parts to allow this change.
In any case, I need to read the dimensions of the image along with the color grade, which may require decoding the image anyway.
Only the encoding could be skipped.
This may take a while.
I already started to simplify.
I need to make part of the code public so it can be used in another program with a different UI.

@ssbroad

ssbroad commented Jan 5, 2025

There is one case where you should avoid using the original file.
When the original image is PNG, it will be converted to GIF after sending it to Amazon.
This seriously degrades the image quality, to an unacceptable level. I am not kidding, you can try it.
In this case, it is mentioned

@celogeek
Owner

celogeek commented Jan 5, 2025


Also, the Amazon cloud takes forever to re-encode your images if they're not in a compatible format.
You can try it with the KS profile (its resolution exceeds the size allowed by Amazon).

Locally, on an M1 Pro Mac with 8 cores, it takes 10s to encode 230 pages of a manga with the SR profile (the one compatible with Amazon). If I then upload it to Send to Kindle, it takes 30s to save it to the cloud.

If I send images that exceed the Amazon limit, it will re-encode them, and that can take 15 min, sometimes 30 min or more, for the same thing.

So I don't really see the benefit of letting Amazon resize the files itself.

Also, the CSS is computed based on the final dimensions, which is no longer true after Amazon's compression. I guess they reduce the image size keeping the aspect ratio but leave the CSS in place, enlarging the now-smaller image.

The only case that may be interesting is when we don't resize, we don't apply filters, the file is in JPG format, and the source is anything but PDF (which doesn't return the raw data).

I'm trying to figure out a clean way to do this, but the current implementation really doesn't allow this kind of change easily.
I wonder if I should add another command line for that case, or a special option that takes a dedicated branch skipping all the processing.

@ssbroad

ssbroad commented Jan 5, 2025

After careful consideration, I think the main purpose of introducing the raw option should be to minimize the loss caused by re-encoding low-resolution images.

Currently, the original image is encoded twice before it is displayed on the Kindle.

We cannot avoid the re-encoding process of converting epub to mobi.

But we can avoid the first step of re-encoding the original image to epub.

If the original image is not of good quality, encoding it into jpg will cause some quality loss.

If it is exported to png, as mentioned above, there is a problem with mobi's compatibility with png, and png will be converted to gif.

@ssbroad

ssbroad commented Jan 5, 2025

For some low-resolution images, I would be willing to give up the convenience of image processing such as cropping and contrast enhancement in exchange for retaining the quality of the original image. In that case, the raw option may be worth a try.
