So while trying to install llama.cpp (and failing), I tried the instructions for the HIP section (https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md#hip) of build.md. Doing so seemingly did nothing to enable llama.cpp, but that's a matter I've given up on.

What I'm trying to troubleshoot now is that my memory is apparently tanked by the addition of HIP to my system. I get the following message when trying to gen reasonable images:

20:33:00-759347 ERROR VAE decode: HIP out of memory. Tried to allocate 8.86 GiB. GPU 0 has a total capacity of 15.98 GiB of which 5.58 GiB is free. Of the allocated memory 9.44 GiB is allocated by PyTorch, and 597.74 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

NGL, I don't even know why HIP is detected by my app (SD Next) and I'm not sure where to begin with fixing this. I'm actually contemplating doing a fresh install of my Arch Linux setup.
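On the "why is HIP detected at all" question: apps like SD Next pick the HIP/ROCm backend when the installed PyTorch wheel is itself a ROCm build. A small sketch (not SD Next's actual detection code, just the same signal it relies on) that reports which build of torch is installed, guarded so it also runs when torch is absent:

```python
import importlib.util

def torch_backend_info():
    """Report whether the installed torch wheel is a ROCm (HIP) build.

    On ROCm builds of PyTorch, torch.version.hip is a version string;
    on CUDA/CPU builds it is None. A ROCm wheel on the path is enough
    for an app to auto-select the HIP backend.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    hip = getattr(torch.version, "hip", None)
    return f"torch {torch.__version__}, HIP: {hip}"

print(torch_backend_info())
```

If this shows a `+rocm` version and a HIP string, the HIP detection is coming from the torch wheel itself, not from the llama.cpp build attempt.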
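For the OOM itself, the error message already suggests one mitigation. A minimal sketch of applying it, assuming SD Next is launched from a shell (the launcher path is an assumption, adjust to your install):

```shell
# Suggested by the error message: opt the ROCm caching allocator into
# expandable segments to reduce fragmentation. Must be exported in the
# same shell that launches SD Next, before the process starts, because
# the allocator reads it once at initialization.
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True

# Then launch SD Next as usual, e.g. (hypothetical path):
# ./webui.sh
```

This doesn't add VRAM, it only reduces waste from fragmentation, so it may or may not be enough for an 8.86 GiB VAE decode on a 16 GiB card.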