ZLUDA support #2810
Ok, so I got Comfy working. I haven't tested much besides the basic node layout, but here is what I did. This is the code I added to comfy/model_management.py:

```diff
@@ -194,11 +194,10 @@ if args.fp16_vae:
 elif args.bf16_vae:
     VAE_DTYPE = torch.bfloat16
 elif args.fp32_vae:
     VAE_DTYPE = torch.float32
-
 if ENABLE_PYTORCH_ATTENTION:
     torch.backends.cuda.enable_math_sdp(True)
     torch.backends.cuda.enable_flash_sdp(True)
     torch.backends.cuda.enable_mem_efficient_sdp(True)
@@ -222,11 +221,10 @@ if args.force_fp16:
 if lowvram_available:
     if set_vram_to in (VRAMState.LOW_VRAM, VRAMState.NO_VRAM):
         vram_state = set_vram_to
-
 if cpu_state != CPUState.GPU:
     vram_state = VRAMState.DISABLED
 if cpu_state == CPUState.MPS:
     vram_state = VRAMState.SHARED
@@ -252,11 +250,28 @@ def get_torch_device_name(device):
         return "{} {}".format(device, torch.xpu.get_device_name(device))
     else:
         return "CUDA {}: {}".format(device, torch.cuda.get_device_name(device))
 
 try:
-    print("Device:", get_torch_device_name(get_torch_device()))
+    torch_device_name = get_torch_device_name(get_torch_device())
+
+    if "[ZLUDA]" in torch_device_name:
+        print("Detected ZLUDA, this is experimental and may not work properly.")
+
+        if torch.backends.cudnn.enabled:
+            torch.backends.cudnn.enabled = False
+            print("cuDNN is disabled because ZLUDA does currently not support it.")
+
+        torch.backends.cuda.enable_flash_sdp(True)
+        torch.backends.cuda.enable_math_sdp(False)
+        torch.backends.cuda.enable_mem_efficient_sdp(False)
+
+        if ENABLE_PYTORCH_ATTENTION:
+            print("Disabling pytorch cross attention because it's not supported by ZLUDA.")
+            ENABLE_PYTORCH_ATTENTION = False
+
+    print("Device:", torch_device_name)
 except:
     print("Could not pick default device.")
 
 print("VAE dtype:", VAE_DTYPE)
```

Note that it is still required to run Comfy with the following change to cuda_malloc.py as well:

```diff
@@ -48,11 +48,13 @@ def cuda_malloc_supported():
     try:
         names = get_gpu_names()
     except:
         names = set()
     for x in names:
-        if "NVIDIA" in x:
+        if "AMD" in x:
+            return False
+        elif "NVIDIA" in x:
             for b in blacklist:
                 if b in x:
                     return False
     return True
```
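For anyone applying this by hand, here is a minimal standalone sketch of what the startup part of the patch boils down to. This is my own summary, not code from the thread, and it assumes a ZLUDA-backed torch build, which reports the CUDA device name with a "[ZLUDA]" suffix:

```python
import torch

def apply_zluda_workarounds() -> None:
    # ZLUDA exposes the AMD GPU through the CUDA API and tags the
    # reported device name with "[ZLUDA]"; that substring is the only
    # detection signal the patch uses.
    if "[ZLUDA]" not in torch.cuda.get_device_name(0):
        return
    # ZLUDA has no cuDNN implementation, so convolutions must fall
    # back to PyTorch's native kernels.
    torch.backends.cudnn.enabled = False
    # Keep only the flash scaled-dot-product-attention backend; the
    # math and mem-efficient backends are turned off under ZLUDA.
    torch.backends.cuda.enable_flash_sdp(True)
    torch.backends.cuda.enable_math_sdp(False)
    torch.backends.cuda.enable_mem_efficient_sdp(False)
```
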
Going to try this out on my 7900 XTX; I'll report back after installing.

Think I did everything, but I'm getting this error when launching:

@Andyholm did you find the files they were talking about? if so where are they or how get them |
@CraftMaster163 Yup! Not in the link provided, but here: https://github.com/lshqqytiger/ZLUDA/releases/tag/v3.2-win |
The reason for my error was because I didn't install HIP sdk. I only did what OP said in the original post, but it has successfully launched now. :)) |
Got this now when trying to generate an image:

Vlad's ZLUDA wiki entry has been linked before the changes as well; given how it works, I was expecting that the bare minimum was already set up, but that's why I kept only the link to Vlad's setup now 😉

I thought you were just stating your sources, lol

Anyone know why I get error 215 from the HIP SDK installer on a 7800 XT, and how to fix it?

Anyone know why I get this error? I did what the guide said and edited the file to do the same thing as in this issue.

```
Prompt executed in 3.26 seconds
```

@CraftMaster163 Did you install the HIP SDK?

I did. I think the issue was that I had the wrong CUDA-compiled torch; it seems to be working now.

Now when it loads a model and tries to use it, it just crashes my system.

Loading went fine for me, it just took a really long time the first time.

Before my system freezes I see it say that torch was not compiled with flash attention; could that be why it freezes?

Could you guys keep that stuff out of this issue? Improper installation isn't something that needs to be discussed here; it just spams everyone's inbox for no reason.

My installation isn't improper. I followed https://github.com/vladmandic/automatic/wiki/ZLUDA like you said, and made the edits you did, but I still get an error. Since the error is relevant to this issue, I've posted it here.

@CraftMaster163

@Andyholm Any luck fixing the sampling issue? I'm having a similar issue where cuDNN hits an internal error.

@Andyholm If you're getting the cuDNN error, add `torch.backends.cudnn.enabled = False` to the sampling file, or somewhere torch is defined.

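A minimal sketch of that workaround, assuming it runs before any sampling code touches cuDNN; the exact file to put it in is not confirmed in this thread:

```python
import torch

# ZLUDA ships no cuDNN implementation, so any cuDNN-backed op (e.g. the
# convolutions in the VAE) fails with an internal error; disabling the
# backend makes PyTorch fall back to its native kernels instead.
torch.backends.cudnn.enabled = False
```
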
@LeagueRaINi I imagine just cloning your fork is an easy way of getting started? ;-) Got it running, but oddly enough DirectML is faster than ZLUDA in ComfyUI... perhaps because it's not using dynamic BMM but the sub-quadratic cross-attention method.

@CraftMaster163 Nope, but I tested ZLUDA on vladmandic/automatic and didn't notice any speed improvements over DirectML. I might be doing something wrong though, idk.

Very good, it works well on my 6900 XT. Now I can enjoy ComfyUI.

Can this be integrated into main already? Otherwise we have to keep editing the files all the time.

Kind of, but not really. I have extra patches in my fork that inject code into custom nodes to keep them from re-enabling cuDNN, so that needs to be fixed somehow; but I'm working on something else and don't have time right now, nor does there seem to be any interest in implementing this on the main branch.

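I don't know how that fork's injection actually works, but for illustration, one way to keep third-party code from flipping cuDNN back on is to pin the setting at the module level. A sketch under that assumption, not the fork's real code:

```python
import torch

cudnn = torch.backends.cudnn
cudnn.enabled = False

class _PinnedCudnn(type(cudnn)):
    """Module class that silently drops attempts to re-enable cuDNN."""

    def __setattr__(self, name, value):
        if name == "enabled" and value:
            return  # a custom node tried to turn cuDNN back on; ignore it
        super().__setattr__(name, value)

# A module's __class__ may be reassigned to a compatible subclass, so
# every later `torch.backends.cudnn.enabled = True` becomes a no-op.
cudnn.__class__ = _PinnedCudnn
```
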
I forked it myself (not a dev, I don't know how to code, just a curious fellow); only two files need to be changed now (as far as I know). I modified requirements.txt so it gets installed correctly the first time. I also wrote detailed instructions; they are up to date as of today, and I am going to try to keep them that way. So feel free to try it.

Currently broken after merge: https://github.com/patientx/ComfyUI-Zluda

In what way? Updating? The solution is on the [github page](https://github.com/patientx/ComfyUI-Zluda#-whats-new-); it is working after that update fix. Besides that, I also tried installing from zero, and it worked without a hitch.

Updating works fine. It's just that when I press start, it auto-closes so fast I can't see the error. Also, patch zluda does not work.

Delete everything and try from the start; I don't know what's happening. Others successfully updated in the last few hours / days.

I did just that; ./zluda is not created on install, so I copied mine over. I've fixed it manually.

Checked the batch files and the zluda address; everything is all right, dunno what happened. Check your bat files, maybe they don't have the zluda lines somehow...

It's fixed. So no worries!

Maybe you can add "pause" at the end of the bat file, so the window stays open long enough to read the error.

It's working now!

Any chance we will be seeing ZLUDA support for Comfy? Automatic runs fine for the most part, but it's not as nice as Comfy to work with.

So far, after forking the repo and applying the same steps as for Automatic (https://github.com/vladmandic/automatic/wiki/ZLUDA): running it with `--disable-cuda-malloc` crashes the driver; running it with `--disable-cuda-malloc --use-quad-cross-attention` gets further but errors out when sampling.