Replies: 29 comments 73 replies
-
My SD inpaint tab stopped working after the latest update. Please fix it, thank you so much.
-
I already fixed it. It's not this extension's fault; it's the latest A1111 update.
-
I updated just now, but depth_hand_refiner is not in the list...? 🤔
-
It seems that if the Hand Refiner-specific depth model is selected as the ControlNet model in ADetailer, you're unable to select depth_hand_refiner as the ControlNet module. I believe this should be added to improve quality.
-
But why, though? You're already using ADetailer to inpaint the hands; why force another pass through the ControlNet module? If you wanted to do that anyway, just run the output through img2img at 0 denoising.
-
Mine doesn't work. ControlNet keeps asking about the upscaler with img2img inpaint, and all hand inpainting is still buggy.
-
Fails to process correctly for me. I also tried deleting the venv.
*** Error running process: E:\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\controlnet.py
-
I still don't know how to use it. Even with the latest ADetailer update, I still can't get any hand to come out properly. ><
-
How did you use such a high denoising strength, without supplying a separate prompt for the hands, and still end up with a hand? I end up with a copy of the main image in place of the hand: the high denoising strength tries to reproduce the whole prompt in the space where the hands used to be. Right now this is destroying hands, not making them better. Am I doing something wrong?
-
After updating to 427, I have this error on startup:
-
Be aware that for this to work, the interplay between denoise and steps is important, and this is mostly not a "one round" fix. EDIT: And yes, do only one hand at a time, re-importing the best result and continuing.
-
If I use this after the hires fix, should I use Pixel Perfect in ControlNet? Or always without it, at 512 resolution?
-
After testing for a while, I found that the hand detection of this new preprocessor is pretty weak compared with other models. It can fix a deformed hand, but it often fails to recognize hands in the image: it only detects hands that have very good contrast with the background, or in bright images (such as a morning scene); otherwise it can't find them. If someone could combine the detection model of ADetailer, for example, with this new preprocessor, it would be much better. So it's not just about messing around with denoise and steps to find a suitable hand; first it needs to detect the correct hand region in the image and create the depth map.
-
The other possibility (which seems more correct) is to download the three models and place them manually in the webui folders:
1/ stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\hand_refiner\hr16\ControlNet-HandRefiner-pruned\graphormer_hand_state_dict.bin
Restart the console and the webui. Now I can use the ControlNet preview and see the depth map: a simple layer test for correspondence. From here we can use the original ControlNet depth map or not. The problem is to find the correct parameters :) Thanks.
Update: in another test it missed one hand detection.
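For anyone scripting that download, here is a minimal sketch using huggingface_hub, assuming the weights are hosted under a Hugging Face repo matching the hr16/ControlNet-HandRefiner-pruned folder name above; the destination path mirrors the annotator cache layout quoted there.

```python
from pathlib import Path

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Assumption: the weights live on Hugging Face under the repo id that matches
# the folder name above. Adjust the webui root to your own install.
dest = Path("stable-diffusion-webui/extensions/sd-webui-controlnet"
            "/annotator/downloads/hand_refiner/hr16/ControlNet-HandRefiner-pruned")
dest.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="hr16/ControlNet-HandRefiner-pruned",
    filename="graphormer_hand_state_dict.bin",
    local_dir=dest,  # places the file where the annotator cache expects it
)
```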
-
So, really fun fact: you can change the model this uses for hand detection. Currently it uses MediaPipe, which as I understand it is bad at recognizing gloved hands. You can swap in ControlNet's very own openpose_hand preprocessor, or even ADetailer's hands.pt. hands.pt provides the depth map almost instantly too, being only 20 MB; I'm guessing the current 500 MB model takes a while to load.
To replace with ControlNet's openpose_hand:
To replace with ADetailer's hands.pt:
I haven't done testing on which is more reliable. Maybe someone who can automate that can provide feedback on this.
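For context on what the MediaPipe stage boils down to, here is a minimal, illustrative sketch (not the extension's actual code): a replacement detector would just need to return equivalent hand regions for the depth stage to refine. The function name and box format here are assumptions.

```python
import cv2                    # pip install opencv-python
import mediapipe as mp        # pip install mediapipe

def detect_hand_boxes(image_bgr):
    """Illustrative only: pixel-space hand bounding boxes via MediaPipe."""
    h, w = image_bgr.shape[:2]
    boxes = []
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        # MediaPipe expects RGB input and returns normalized landmarks.
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        for hand in result.multi_hand_landmarks or []:
            xs = [lm.x * w for lm in hand.landmark]
            ys = [lm.y * h for lm in hand.landmark]
            boxes.append((int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys))))
    return boxes

# A detector like ADetailer's hands.pt (YOLO-based) would be swapped in here,
# returning regions in the same (x0, y0, x1, y1) format for the depth stage.
```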
-
Thanks, I will try. PS: After trying to replace the code as you said, I found processor.py line 670 already had it, so I'm not sure what I can do to wrap the call.
-
@zcatharisis, I'm just giving you some results from changing the detection model; the images are still the same. Just trying to contribute some tests.
-
No, your solution isn't working for anything; I already reinstalled A1111 and tested. And you're telling me that changing the model doesn't affect detection and doesn't fix the output, so what is the point of changing it? As for VRAM savings and speed, I don't feel any change at all, and I'm using a 3090, by the way. Maybe it helps someone on a lower-VRAM card (no one has shown results yet), but for me nothing changed at all. I also installed ComfyUI and tested, using this guy's workflow: https://www.youtube.com/watch?v=Tt-Fyn1RA6c&t=273s. When I change the prompt to something new, the result is worse than in the video. This preprocessor may help a little, but the effect is not as big as people thought.
-
I can’t get it to work on my Mac (M1 Max chip):
-
Hi, where is the link to download the hand refiner model?
-
I am using an MBP M1 Pro with 16 GB. There was an error running the preprocessor:
/Users/user/pinokio/api/automatic1111.git/app/venv/lib/python3.10/site-packages/scipy/sparse/_index.py:102: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
CPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
How do I fix it? Thank you.
-
Hi, is it compatible with SDXL? Thanks.
-
Hello, |
-
When using Flux's ControlNet depth together with depth_hand_refiner, it does not take effect when combined with ADetailer.
-
Thanks to Fannovel16 for his hard work extracting the hand refiner dependencies in https://github.com/Fannovel16/comfyui_controlnet_aux/.
Previously, users needed extensions like https://github.com/jexom/sd-webui-depth-lib to pick a matching hand gesture and move it to the correct location. The depth_hand_refiner preprocessor now does this job automatically for you.

How to use
Now you can manually draw an inpaint mask on the hands and use a depth ControlNet unit to fix them, with the following steps (a scripted equivalent via the web API is sketched after the list):
Step 1: Generate an image with bad hands.
Step 2: Switch to img2img inpaint. Draw an inpaint mask on the hands.
Step 3: Enable a ControlNet unit and select the depth_hand_refiner preprocessor.
Step 4: Generate.
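For anyone driving this from a script, a hedged sketch of the same steps through the A1111 web API (webui launched with --api). The payload shape follows the ControlNet extension's alwayson_scripts convention; the image/mask file names and the ControlNet model name are placeholders to adjust for your install.

```python
import base64

import requests

URL = "http://127.0.0.1:7860"  # default A1111 address, launched with --api

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("bad_hands.png")],  # placeholder: the image from Step 1
    "mask": b64("hand_mask.png"),           # placeholder: white over the hands (Step 2)
    "prompt": "a photo of a person, detailed hands",
    "denoising_strength": 0.75,             # high enough for the depth map to steer
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "module": "depth_hand_refiner",
                # placeholder: any matching depth ControlNet model you have installed
                "model": "control_sd15_inpaint_depth_hand_fp16",
            }]
        }
    },
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```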
ADetailer usage example (Bing-su/adetailer#460):
You need to wait for the ADetailer author to merge that PR, or check out the PR manually. This section is independent of the previous img2img inpaint example; here the generation is txt2img.
Image generated without ADetailer
Image generated with ADetailer
ADetailer setting
Make sure you adjust the denoising strength so that the depth map can take control of hand rendering; a scripted version of this setting is sketched below.
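As a rough illustration, here is a hedged sketch of the equivalent ADetailer block for a txt2img API payload, using the ad_* keys from ADetailer's API; the exact argument layout and model filenames may differ by ADetailer version and install.

```python
# Hedged sketch: ADetailer settings for a txt2img API call, mirroring the UI
# setup above. Keys follow ADetailer's ad_* API convention; model names are
# placeholders, and the args layout may differ between ADetailer versions.
adetailer_block = {
    "ADetailer": {
        "args": [
            True,  # enable ADetailer
            {
                "ad_model": "hand_yolov8n.pt",                # hand detector
                "ad_denoising_strength": 0.6,                 # let the depth map take control
                "ad_controlnet_module": "depth_hand_refiner",
                "ad_controlnet_model": "control_sd15_inpaint_depth_hand_fp16",  # placeholder
            },
        ]
    }
}
# Merged into the request: payload["alwayson_scripts"].update(adetailer_block)
```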
Known issues
Hand Refiner cannot handle complex hand gestures such as crossed fingers. Example: only one hand is detected when the fingers are crossed: