
Segmentation fault error after git pull or system packages update #5756

Open
Serjyon opened this issue Nov 24, 2024 · 3 comments
Labels
Potential Bug: User is reporting a bug. This should be tested.

Comments


Serjyon commented Nov 24, 2024

Expected Behavior

It should work as it did before the git pull / OS packages update (I'm really not sure which of the two caused it).

Actual Behavior

It does not work: every generation attempt ends in a segmentation fault (see the log below).

Steps to Reproduce

Try to generate anything; the process crashes while loading the model.

Debug Logs

[miles@archlinux ComfyUI_backup]$ source ./venv/bin/activate
(venv) [miles@archlinux ComfyUI_backup]$ HSA_OVERRIDE_GFX_VERSION=11.0.0 python3.12 main.py
amdgpu.ids: No such file or directory
Total VRAM 8176 MB, total RAM 32015 MB
pytorch version: 2.4.1+rocm6.1
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon Graphics : native
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Prompt Server] web root: /home/miles/Загрузки/ComfyUI_backup/web

Import times for custom nodes:
   0.0 seconds: /home/miles/Загрузки/ComfyUI_backup/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
Segmentation fault (core dumped)
(venv) [miles@archlinux ComfyUI_backup]$

Other

GPU: RX 7600, Arch Linux, ROCm 6.1, Python 3.12.7.
I also tried Python 3.12.6 via pyenv; same issue.
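Not part of the original report, but for anyone triaging logs like the one above, the fields that matter (PyTorch build, ROCm suffix, VRAM, and whether the run segfaulted) can be pulled out with a throwaway parser. This is a hypothetical helper for illustration, not part of ComfyUI:

```python
import re

def summarize_debug_log(log: str) -> dict:
    """Extract ROCm-segfault triage facts from a ComfyUI startup log."""
    summary = {}
    m = re.search(r"pytorch version:\s*(\S+)", log)
    if m:
        summary["pytorch"] = m.group(1)                # e.g. "2.4.1+rocm6.1"
        rocm = re.search(r"\+rocm([\d.]+)", m.group(1))
        summary["rocm"] = rocm.group(1) if rocm else None
    m = re.search(r"Total VRAM (\d+) MB", log)
    if m:
        summary["vram_mb"] = int(m.group(1))
    summary["segfault"] = "Segmentation fault" in log
    return summary

# Lines taken verbatim from the debug log in this issue:
log = """pytorch version: 2.4.1+rocm6.1
Total VRAM 8176 MB, total RAM 32015 MB
Segmentation fault (core dumped)"""
print(summarize_debug_log(log))
# → {'pytorch': '2.4.1+rocm6.1', 'rocm': '6.1', 'vram_mb': 8176, 'segfault': True}
```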

Serjyon added the Potential Bug label on Nov 24, 2024

Yunoxa commented Nov 28, 2024

I am having this issue too, also with an RX 7600 GPU on Arch Linux and Python 3.12.7. I am on ROCm 6.2 instead of 6.1, but the resulting problem is exactly the same as here.


Serjyon commented Nov 28, 2024

> I am having this issue too, also with an RX 7600 GPU on Arch Linux and Python 3.12.7. I am on ROCm 6.2 instead of 6.1, but the resulting problem is exactly the same as here.

That's a bit off-topic, but I believe ROCm 6.2 never worked with the RX 7600, even with HSA_OVERRIDE_GFX_VERSION=11.0.0. Can you confirm that it actually worked before, and how you set it up? Which PyTorch version?


Yunoxa commented Nov 28, 2024

> That's a bit off-topic, but I believe ROCm 6.2 never worked with the RX 7600, even with HSA_OVERRIDE_GFX_VERSION=11.0.0. Can you confirm that it actually worked before, and how you set it up? Which PyTorch version?

Is that so? I couldn't find anything about 6.2 not working with the 7600, but maybe I'm missing something. I simply followed the README install instructions, which have you install ROCm 6.2, so I assumed it would work. The PyTorch version is 2.5.1.
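As an editorial aside: diffing the two setups reported in this thread shows which factors they share. A throwaway sketch using only values stated in the comments above (the first report's PyTorch version comes from the debug log):

```python
# Configurations as reported in this thread; nothing here is new data.
reports = {
    "Serjyon": {"gpu": "RX 7600", "os": "Arch Linux", "rocm": "6.1",
                "python": "3.12.7", "pytorch": "2.4.1"},
    "Yunoxa":  {"gpu": "RX 7600", "os": "Arch Linux", "rocm": "6.2",
                "python": "3.12.7", "pytorch": "2.5.1"},
}

# Fields identical across all reports are the candidate common factors.
keys = reports["Serjyon"].keys()
common = {k for k in keys if len({r[k] for r in reports.values()}) == 1}
print(sorted(common))  # → ['gpu', 'os', 'python']
```

Since ROCm and PyTorch versions differ between the two reports, the shared GPU model, OS, and Python version look like the more likely common factors.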
