Expected Behavior
Work perfectly, as it did before a git pull or OS update (I'm really not sure what could have caused it).
Actual Behavior
Does not work; generation crashes with a segmentation fault.
Steps to Reproduce
Try to generate anything.
Debug Logs
[miles@archlinux ComfyUI_backup]$ source ./venv/bin/activate
(venv) [miles@archlinux ComfyUI_backup]$ HSA_OVERRIDE_GFX_VERSION=11.0.0 python3.12 main.py
amdgpu.ids: No such file or directory
Total VRAM 8176 MB, total RAM 32015 MB
pytorch version: 2.4.1+rocm6.1
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon Graphics : native
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[PromptServer] web root: /home/miles/Загрузки/ComfyUI_backup/web
Import times for custom nodes:
0.0 seconds: /home/miles/Загрузки/ComfyUI_backup/custom_nodes/websocket_image_save.py
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Segmentation fault (core dumped)
(venv) [miles@archlinux ComfyUI_backup]$
Other
GPU: RX 7600, Arch Linux, ROCm 6.1, Python 3.12.7
I also tried pyenv with Python 3.12.6, same issue.
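To narrow this down, a bare PyTorch test in the same venv (with the same HSA override exported) shows whether the segfault comes from the PyTorch/ROCm stack or from ComfyUI itself. This is only a sketch; the script and tensor sizes are arbitrary:

```python
# Sketch: bare fp16 matmul on the GPU, run in the same venv with the same
# HSA_OVERRIDE_GFX_VERSION=11.0.0 export, to check whether the segfault
# reproduces outside ComfyUI.
import os
import torch

print("HSA_OVERRIDE_GFX_VERSION =", os.environ.get("HSA_OVERRIDE_GFX_VERSION"))
print("torch", torch.__version__, "| hip", torch.version.hip)
print("cuda available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0))

a = torch.randn(1024, 1024, dtype=torch.float16, device="cuda")
b = torch.randn(1024, 1024, dtype=torch.float16, device="cuda")
c = a @ b
torch.cuda.synchronize()
print("fp16 matmul ok:", tuple(c.shape))
```

If this also dies with a segmentation fault, the regression is in the ROCm/PyTorch install rather than in anything the git pull changed.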
I am having this issue too and am also using an RX 7600 GPU on Arch Linux with Python 3.12.7. I am using ROCm 6.2 instead of 6.1, but the resulting problem is exactly the same as here.
This is a bit off-topic, but I believe ROCm 6.2 never worked with the RX 7600, even with HSA_OVERRIDE_GFX_VERSION=11.0.0. Can you confirm that it actually worked before, and how did you get it working? What PyTorch version?
Is that so? I couldn't find anything about 6.2 not working with the 7600, but maybe I'm missing something. I was simply following the README for installation, which has you install ROCm 6.2, so I just assumed it would work. The PyTorch version is 2.5.1.
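One way to see what the override and the PyTorch build actually report is a short check like the sketch below (gcnArchName is only exposed on newer PyTorch ROCm builds, hence the getattr guard; rocminfo ships with the ROCm install):

```python
# Sketch: print what the ROCm PyTorch build and the driver report for this card.
import subprocess
import torch

print("torch", torch.__version__, "| hip", torch.version.hip)
props = torch.cuda.get_device_properties(0)
print("device:", props.name, "| arch:", getattr(props, "gcnArchName", "not exposed"))

# rocminfo lists the real ISA of each agent, e.g. gfx1102 for the RX 7600.
out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
print(sorted({line.strip() for line in out.splitlines() if "gfx" in line}))
```

The RX 7600 is gfx1102; with HSA_OVERRIDE_GFX_VERSION=11.0.0 the runtime treats it as gfx1100, so whether a given ROCm/PyTorch combination works likely depends on which of those ISAs the wheel was built for.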