improve: add descriptions for clip loaders #5576
Conversation
Thanks, good idea!
There is another problem with the clip type combinations: it doesn't always work with SDXL. SDXL works only with the dual clip loader, not the triple one.
Another difference: using the GGUF dual clip loader versus the regular dual clip loader with the same clip files gives different results. Is there a conversion happening? Hands are better with the regular dual clip loader.
Currently, TripleClipLoader is only used for SD3.x. Since GGUF is not part of ComfyUI Core, it's completely separate.
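Since this PR is about adding descriptions to the clip loaders, here is a minimal sketch of how per-clip_type descriptions could be organized. The dict name, wording, and helper function are illustrative assumptions, not ComfyUI's actual code; it only captures the loader/model pairings discussed above.

```python
# Hypothetical mapping of clip_type -> description; names and wording
# are assumptions for illustration, not ComfyUI Core source.
CLIP_TYPE_DESCRIPTIONS = {
    "sdxl": "DualCLIPLoader: clip-l + clip-g (SDXL uses the dual loader, not the triple one)",
    "sd3": "TripleCLIPLoader: clip-l + clip-g + t5xxl (SD3.x only)",
}

def describe_clip_type(clip_type: str) -> str:
    """Return a human-readable description for a clip_type, with a fallback."""
    return CLIP_TYPE_DESCRIPTIONS.get(clip_type.lower(), "No description available")
```

A lookup table like this keeps the descriptions in one place, so the node tooltips and any documentation stay consistent.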
How about when using https://huggingface.co/BeichenZhang/LongCLIP-L? (Those who claim it works for them generate really good outputs.) I notice I get very bad generations with it, so I simply stick to the normal clip-l and clip-g.
Edit: I tried again and recreated a workflow using LongCLIP and clip-g; it's now producing decent outputs.
0: 640x640 1 face, 13.2ms
Speed: 2.2ms preprocess, 13.2ms inference, 1.5ms postprocess per image at shape (1, 3, 640, 640)
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
Warning: torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1561.05322265625 True
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 4897.0483474731445 True
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.39it/s]
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True
Prompt executed in 19.56 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.86it/s]
Prompt executed in 3.83 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.52it/s]
Prompt executed in 4.05 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.60it/s]
Prompt executed in 4.01 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.74it/s]
Prompt executed in 3.92 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.28it/s]
Prompt executed in 4.24 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00, 3.43it/s]
Prompt executed in 4.32 seconds
got prompt
60%|████████████████████████████████████████████████████████████████▏ |