improve: add descriptions for clip loaders #5576

Merged: 1 commit merged into comfyanonymous:master on Nov 11, 2024

Conversation

@ltdrdata (Collaborator)

@Creative-comfyUI

Thanks, good idea!

@Creative-comfyUI

There is another problem with the combination of CLIP types: it doesn't always work with SDXL. Some combinations work, others don't, and we get an error.

SDXL works only in the dual CLIP loader, not the triple one.

@Creative-comfyUI

Another difference: using the GGUF dual CLIP loader versus the regular DualCLIPLoader with the same CLIP models gives different results. Is there a conversion happening? Hands come out better with the regular DualCLIPLoader.

@ltdrdata (Collaborator, Author)

> There is another problem with the combination of CLIP types: it doesn't always work with SDXL. Some combinations work, others don't, and we get an error.
>
> SDXL works only in the dual CLIP loader, not the triple one.

Currently, TripleClipLoader is only being used in SD3.x.
That's why there's only SD3 in the hint.
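
For context, here is a minimal sketch of how such hints can be attached to a loader node, assuming ComfyUI's pattern of an optional "tooltip" entry in the INPUT_TYPES option dict and a DESCRIPTION class attribute; the class name, file lists, and strings below are illustrative, not the code merged in this PR.

# Sketch only: shows per-input hints for a dual CLIP loader, not the merged implementation.
class DualCLIPLoaderSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # File lists are hardcoded placeholders for illustration.
                "clip_name1": (["clip_l.safetensors"], {"tooltip": "First text encoder, e.g. CLIP-L."}),
                "clip_name2": (["clip_g.safetensors"], {"tooltip": "Second text encoder, e.g. CLIP-G or T5."}),
                "type": (["sdxl", "sd3", "flux"],
                         {"tooltip": "sdxl: clip-l + clip-g / sd3: clip-l + clip-g, clip-l + t5, or clip-g + t5 / flux: clip-l + t5"}),
            }
        }

    RETURN_TYPES = ("CLIP",)
    FUNCTION = "load_clip"
    CATEGORY = "advanced/loaders"
    DESCRIPTION = "Loads two text encoders for models that need a dual CLIP; TripleCLIPLoader is only for SD3.x."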

Since GGUF is not part of ComfyUI Core, it's completely separate.
And since quantization is a method that reduces file size while accepting information loss, it naturally produces different results.
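
As a rough illustration of that information loss, here is a plain 8-bit rounding round-trip (not the GGUF format itself):

import torch

# Round-trip some weights through an 8-bit representation and measure the damage.
w = torch.randn(4096)                                     # stand-in for text-encoder weights
scale = w.abs().max() / 127.0                             # symmetric 8-bit scale factor
w_q = torch.clamp((w / scale).round(), -127, 127)         # map values onto 255 integer levels
w_deq = w_q * scale                                       # dequantize back to float
print("max abs error:", (w - w_deq).abs().max().item())   # nonzero: rounding discarded information

Each individual error is small, but accumulated across a whole text encoder it is enough to shift the generated image, which is consistent with the hand differences noted above.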

@comfyanonymous merged commit 2d28b0b into comfyanonymous:master on Nov 11, 2024
5 checks passed
@kairin commented Nov 11, 2024

What about using https://huggingface.co/BeichenZhang/LongCLIP-L? (Those who claim it works for them generate really good outputs...)

I notice I get very bad generations with it.

So I simply stick to the normal clip-l and clip-g.

Edit:

I tried again and recreated a workflow using LongCLIP and clip-g; it's now producing decent outputs:

0: 640x640 1 face, 13.2ms
Speed: 2.2ms preprocess, 13.2ms inference, 1.5ms postprocess per image at shape (1, 3, 640, 640)
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1561.05322265625 True
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 4897.0483474731445 True
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.39it/s]
Requested to load AutoencoderKL
Loading 1 new model
loaded completely 0.0 159.55708122253418 True
Prompt executed in 19.56 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.86it/s]
Prompt executed in 3.83 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.52it/s]
Prompt executed in 4.05 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.60it/s]
Prompt executed in 4.01 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.74it/s]
Prompt executed in 3.92 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00,  5.28it/s]
Prompt executed in 4.24 seconds
got prompt
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.43it/s]
Prompt executed in 4.32 seconds
got prompt
 60%|████████████████████████████████████████████████████████████████▏       

@ltdrdata deleted the improve/clip-recipes branch on November 11, 2024 at 13:16