[Feature request] Internal Resolution and txt2img/img2img on selection options #223
I support the request. It would be good to have a selection of what the internal resolution is, plus some button to force generation at double the detail resolution, something like this. At the moment the add-on already works this way: when you select a very small region, let's say 300x300 px, and hit generate, it internally upscales it to at least 768x768, then after generation shrinks it back down to the original selection resolution, which makes details ultra-sharp. Sometimes that is helpful, sometimes not, but it is out of manual control, which doesn't always feel comfortable.
There is a concept of "native" resolution for a base model (i.e. 512~768 for SD1.5, 1024 for SDXL). If your region is smaller, the image will be generated at the minimum resolution and downscaled to fit. If your region is larger, a 2-pass pipeline is used. The intention here is to avoid generating garbage. Maybe this "native" resolution needs to be configurable, because it's unfortunately hard to auto-detect in all cases, but that would be buried in advanced settings somewhere and not really fit what you ask for.

So for the case where you work at a low resolution and inpaint small regions where there are simply too few pixels to get good results, it already works as requested. Introducing a choice here doesn't make much sense, because using a lower resolution doesn't produce anything useful. If you don't want the disjoint between more and less detailed parts of the image, simply increase your canvas resolution.

As for the case where your selection area has sufficient pixels to get good results and you want to use more to get additional detail: I'm not really convinced this warrants extra complexity. For generally improved quality, upscaling the entire image is an option. If you consider that too expensive, do a simple resize to a higher resolution and then inpaint. You can scale back down to the target resolution at some later point. I think this is already a common workflow for e.g. painting and rendering; it works for SD too and does not require introducing separate fine-granular controls. I use Ctrl+Alt+I a lot.
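To make the two cases above concrete, here is a minimal sketch of that kind of resolution planning, assuming a native resolution of 1024; the function name, return values, and rounding are illustrative assumptions, not the plugin's actual code.

```python
# Hypothetical sketch of the resolution handling described above; names, thresholds,
# and return values are illustrative assumptions, not the plugin's actual code.

def plan_generation(sel_w: int, sel_h: int, native: int = 1024):
    """Decide how to generate a selected region relative to a model's native resolution."""
    longest = max(sel_w, sel_h)
    if longest < native:
        # Too few pixels: generate at the native resolution, then downscale to the selection.
        scale = native / longest
        return {"mode": "generate-large-then-downscale",
                "generate_at": (round(sel_w * scale), round(sel_h * scale))}
    if longest > native:
        # More pixels than the model handles in one pass: first pass at native resolution,
        # then a second refinement pass at the full selection size (2-pass pipeline).
        scale = native / longest
        return {"mode": "two-pass",
                "first_pass": (round(sel_w * scale), round(sel_h * scale)),
                "second_pass": (sel_w, sel_h)}
    return {"mode": "direct", "generate_at": (sel_w, sel_h)}


# A 300x300 selection is generated at 1024x1024 and scaled back down afterwards.
print(plan_generation(300, 300))
```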
Yes. What's the problem with that?
There is an idea floating around to adapt the Generate button to the workflow that would run, and maybe allow choosing a workflow explicitly. So far there isn't a really strong reason to do it, but comic panels are a nice use case. What makes you want to use a selection, rather than generating the entire image (where you wouldn't have inpaint problems)?
This is exactly what I miss the most from an old, no-longer-updated Krita plugin. It's cool to know that there's a similar feature built in already, but having the control would be great, even if it were an advanced option with a checkbox to turn on in the panel, like the negative keywords are. It would help avoid some of the blurry generations I've been getting that I used to avoid with this method. Scaling up and then scaling down sounds like a good workaround, but it also sounds more cumbersome than the old method of just changing the minimum generation resolution.
Hi, great job - I love your plugin. :) This solution of rendering at a bigger resolution and downscaling to the original selection was the best; it improved quality and detail a lot. The Interpause Krita plugin is also missing this option. :(
@Rogal80 @lambschopping Technically you can select a global resolution multiplier now in 1.12.0 and use it to auto-downscale. I think this is a bad idea and will degrade quality - a high-resolution canvas will generate just as much detail, but avoids information loss due to destructive downscales. Better to avoid downscaling until you export. See #294 for more discussion. I'm keeping this open for the initial request for masked-but-not-inpaint generation.
@Acly There's some kind of error, I believe, in the 1.12 code for internal scaling that makes me think the 'auto-adaptive' resolution is still applied on top of what you set there. This is a 430x650 selection area in a large canvas. (It's SDXL, 40 steps, DDIM, RTX 3090.)
Overriding the internal checkpoint resolution works much better - I see the expected slow-down as I crank it up to 1280 or 1440. I suppose the scaling happens before the checkpoint resolution check, which is why it doesn't produce the expected generation time and clarity.
Yes, it happens before. But there is no need to abuse the checkpoint resolution. If you generate 1024 SDXL at a 1.5 multiplier, you will get a generation resolution of 1536. (Not that I recommend it; it's better to generate 1536 directly.) Also, if you generate 1536 with a 0.7 multiplier or lower, the image will be generated at 1024 and upscaled for improved performance. It doesn't go lower than the checkpoint resolution (~1024 for SDXL) to avoid generating garbage. Similarly, 2-pass generation to avoid repetition/tiling artifacts still applies as usual after scaling is applied.
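For illustration, here is a minimal sketch of the multiplier arithmetic described above, assuming the multiplier is applied first and the result is then clamped to the checkpoint resolution; the exact rounding and snapping in the plugin may differ.

```python
# Illustrative sketch of the multiplier-plus-clamp arithmetic described above;
# names and exact rounding are assumptions, not the plugin's actual implementation.

def diffusion_resolution(target: int, multiplier: float, checkpoint_res: int = 1024) -> int:
    """Resolution used for diffusion; the result is later scaled to `target` if needed."""
    scaled = round(target * multiplier)
    # Never drop below the checkpoint's native resolution to avoid generating garbage.
    return max(scaled, checkpoint_res)


print(diffusion_resolution(1024, 1.5))  # 1536 - generated directly at the higher resolution
print(diffusion_resolution(1536, 0.5))  # 1024 - generated at checkpoint size, then upscaled to 1536
```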
From the original post:
Since v1.14.0 there is a "Generate (Custom)" option which allows you to disable inpainting even with a selection. It works for <100% strength (Refine) too.
Hi @Acly, your project is amazing, and it's a must for Krita going forward. I think it would be amazing to have an internal resolution option, perhaps with the default resolutions like 512, 768, 1024, so we could:
1 - Start with a high-resolution canvas and gradually work on the image by doing small-resolution generations first (since they can get better compositions) and then start to add details with bigger-resolution generations. The way it works now, we need to start with a small canvas and increase it along the way.
2 - Work on a normal image size and have the option to inpaint at a bigger resolution to give it more details. Here's a comparison:
Plus, it would be great to have the option to txt2img/img2img a selection of the image instead of just inpainting. It would be great for comics, as inpaint is aware of its surroundings and txt2img/img2img isn't. Here's an example: the left is inpaint using a ControlNet LineArt layer, and the right is txt2img on the selection using Auto1111 with ControlNet LineArt, then bringing it back in and just erasing the text part.
It would even help to add some details to images, as I find that img2img at low denoise strength using this copy-and-paste-back method (and erasing the corners of the result image) sometimes gives better results than inpainting.
I hope you take all this into consideration. Thanks!