
[Question] How to actually use ControlNet? #42

Open
ke1ne opened this issue Jul 11, 2023 · 4 comments


ke1ne commented Jul 11, 2023

Hi, I downloaded a canny model. Could you please give some guidance on how to use it in the real world? In the UI I got this:
[screenshot]
Thanks!

axodox (Owner) commented Jul 11, 2023

Ah, OK, there was a mistake in the code that made the UI look weird when only one ControlNet mode was available. I pushed a fix for it.

Otherwise it works like this:

  • You will need a condition image. This could already be a canny edge image, or the app can convert one for you. You can either generate such an image, or drag and drop it from another app. I think there is an exception if you drop an unsupported file format; I will fix that next weekend.
  • Normally the last image is used as the input image, but you can also open the input image panel (with the image button on the left) to use another one.
  • Once you have the image, enable the ControlNet and select your mode. If your image is not in the required input format, enable auto-conversion as well.
  • You can adjust the conditioning scale. It currently defaults to 0.5, but I will probably change the default to 0.8, as 0.5 does not conform to the condition closely enough.
  • You can adjust the denoising strength: a denoising strength < 1 makes this an img2img operation, where the image is used both as the one to modify and as the source from which the ControlNet condition features are extracted. If the denoising strength is 1, the image is generated from scratch using the condition only.
  • If you use a mask, it becomes ControlNet-based inpainting.
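The way the last two bullets combine can be sketched in Python. This is a hypothetical helper for illustration only, not the app's actual code; the function and enum names are made up.

```python
from enum import Enum, auto

class Operation(Enum):
    TXT2IMG_WITH_CONDITION = auto()  # strength == 1: generate from scratch, condition only
    IMG2IMG = auto()                 # strength < 1: modify the input image, which also supplies the condition
    INPAINTING = auto()              # mask present: ControlNet-based inpainting

def classify_operation(denoising_strength: float, has_mask: bool) -> Operation:
    """Map the settings described above onto the resulting operation."""
    if not 0.0 < denoising_strength <= 1.0:
        raise ValueError("denoising strength must be in (0, 1]")
    if has_mask:
        return Operation.INPAINTING
    if denoising_strength < 1.0:
        return Operation.IMG2IMG
    return Operation.TXT2IMG_WITH_CONDITION
```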

axodox (Owner) commented Jul 11, 2023

The fixed version is released. Currently only canny, depth, HED and OpenPose have feature extractors, so all other modes need an input image that is already in the target format.
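To illustrate what a feature extractor produces, here is a toy gradient-threshold edge detector using only NumPy. It is a rough stand-in for a real canny extractor (no smoothing or hysteresis), with an arbitrary threshold; it only shows the shape of the output: a black image with white edges, which is the condition-image format the canny mode expects.

```python
import numpy as np

def toy_edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Mark pixels whose local gradient magnitude exceeds a threshold as white edges."""
    gray = gray.astype(np.float32)
    gy, gx = np.gradient(gray)          # central differences along each axis
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()    # normalise to [0, 1]
    # Condition images are black with white edges, like canny output.
    return np.where(magnitude > threshold, 255, 0).astype(np.uint8)

# A synthetic image with a bright square yields edges along the square's border.
image = np.zeros((64, 64), dtype=np.uint8)
image[16:48, 16:48] = 255
edges = toy_edge_map(image)
```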

axodox (Owner) commented Jul 11, 2023

I plan to do more bug fixing and documentation on the weekend, and to write guides for ControlNet-ready model conversion.

ke1ne (Author) commented Jul 11, 2023

Thank you for the update. Also, that guide would be useful in the Wiki. Cheers!
