Does nunif/waifu2x support pair training (x to y mapping) like the previous version does? #250
Currently not supported.
I didn't have much time to understand your codebase, but from what I can see, it seems the […]

Regarding my use case, I want to train a model that can remove the background from anime illustrations. My targets are very specific illustrations with a solid white background, in a specific artist's art style. Of course it is not meant to be a plug-and-play solution; it is more of an intermediate processing step before I come in and manually remove the background to fit my personal standard.

This picture is the result of a model I trained based on TensorFlow's pix2pix tutorial. The left is the input and the right is the output, meant to be an alpha mask. The main focus is the edges, as the empty black areas inside the character can easily be filled in by a human. I want the model to perform better and more accurately. The model is also quite heavy, with a 600 MB checkpoint file and 15 GB of VRAM usage, which is why I want to use your model instead, as it achieves brilliant quality for both upscaling and denoising with a small model.
For character (person) segmentation, […]. Also, background removal is a popular task; you can see many (pre-trained) models in […].

For custom x,y input images: currently, x and y are generated from a single image (nunif/waifu2x/training/dataset.py, line 393 at commit 31c0137).
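To make that concrete, the single-image design is roughly the following (a conceptual sketch, not the actual code at line 393; the helper names are made up):

```python
# Conceptual sketch of the current single-image pipeline: both tensors of a
# training pair are derived from ONE source image -- y is a clean random crop,
# and x is that same crop with the degradation synthesized on the fly.
def make_pair_from_single_image(image, random_crop, degrade):
    y = random_crop(image)  # target: clean patch
    x = degrade(y)          # input: e.g. noise/downscale applied to the patch
    return x, y
```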
To use two different images for x and y, it should be possible to generate them from two source images instead, along the lines of the sketch below.
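Here is a minimal sketch of that idea in plain PyTorch (not nunif's API; the class name, the parallel x/y directory convention, and the matching-filename assumption are all mine). The key point is that both images get the same random crop so the pair stays pixel-aligned:

```python
import os
import random

from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF


class PairedImageDataset(Dataset):
    """Loads (x, y) pairs from two parallel directories with matching filenames."""

    def __init__(self, x_dir, y_dir, crop_size=64):
        self.x_dir = x_dir
        self.y_dir = y_dir
        self.crop_size = crop_size
        # Assumes every file in x_dir has a same-named counterpart in y_dir,
        # and that both images have identical dimensions >= crop_size.
        self.names = sorted(os.listdir(x_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        x = Image.open(os.path.join(self.x_dir, self.names[i])).convert("RGBA")
        y = Image.open(os.path.join(self.y_dir, self.names[i])).convert("L")
        # Sample ONE crop window and apply it to both images so the
        # input/target pair stays aligned.
        top = random.randint(0, x.height - self.crop_size)
        left = random.randint(0, x.width - self.crop_size)
        x = TF.crop(x, top, left, self.crop_size, self.crop_size)
        y = TF.crop(y, top, left, self.crop_size, self.crop_size)
        return TF.to_tensor(x), TF.to_tensor(y)
```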
Hi. Thanks for the help! I decided to continue with waifu2x before trying out the other solutions you suggested. Here is my model's current accuracy: […] I trained it with the following command: […]
As you can see, the edges are a little jagged. I am unsure if this is because of the low number of epochs trained or because the dataset is not big enough (my dataset has 10 images in eval and 18 images in train, all in 4K or above, which when split with […]). I am currently training 30 more epochs (with […]). For reference, here is my fork with my modifications: nunif fork.

Edit: After 30 more epochs (60 epochs total) the model doesn't improve a bit. (No […])
It looks like overfitting to the white color, so you may want to use a random background color or composite with a random background image. As for the training commands, […]. The fundamental problem, as I wrote above, is that it is difficult to segment the characters from the background in 64x64 small areas.
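For the random-background suggestion, a minimal sketch (assuming the training source is an RGBA foreground whose alpha channel serves as the mask target; the helper name and the 50/50 color-vs-image split are arbitrary choices of mine):

```python
import random

from PIL import Image


def composite_random_background(fg_rgba, bg_images=None):
    """Paste an RGBA foreground over a random background.

    The target mask (fg_rgba's alpha channel) is left untouched; only the
    network input changes, so the model can no longer rely on the
    background always being white.
    """
    if bg_images and random.random() < 0.5:
        # Composite over a randomly chosen background image.
        bg = random.choice(bg_images).resize(fg_rgba.size).convert("RGBA")
    else:
        # Composite over a random solid color.
        color = tuple(random.randint(0, 255) for _ in range(3))
        bg = Image.new("RGBA", fg_rgba.size, color + (255,))
    return Image.alpha_composite(bg, fg_rgba).convert("RGB")
```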
Hi. I am quite impressed with the performance of the models and methods used in waifu2x, and I want to train my own model based on them. It will be a 1x mapping from an image x (RGBA) to an image y (RGB or L), similar to the noise models. However, the changes can't be generated on the fly like noise. It seems the old code supported this functionality: nagadomi/waifu2x#193. Is it possible on this new repo too? If so, how do I arrange my dataset files accordingly?
Thank you for reading.
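A hypothetical arrangement for such pairs (my own convention, not something nunif defines or documents): parallel directories keyed by filename, e.g.

```
dataset/
├── train/
│   ├── x/   # inputs:  0001.png, 0002.png, ... (RGBA)
│   └── y/   # targets: 0001.png, 0002.png, ... (RGB or L)
└── eval/
    ├── x/
    └── y/
```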