First, I love this project. I have seen what others are making with it, and it seems really powerful for fine-tuning exactly the prompt you want.
The only way I have been able to get this to run locally on my 3080 10GB is by lowering the resolution in the prompt to 352 with a batch size of 1. It looks like others have gotten it to work, but is everyone just using Colab? I'd like to run it locally. I have two 3080s, so in theory I should have enough VRAM between them, but it looks like this isn't set up to train on both. I tried to add that myself using torch's DataParallel, but unfortunately I have no idea what I'm doing yet when it comes to coding ML. Any chance of getting multi-GPU to work, or any advice on lowering VRAM usage? A sketch of what I attempted is below.
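For reference, this is roughly the pattern I tried, based on the standard `torch.nn.DataParallel` wrapper. The model here is just a placeholder; the project's actual model and training loop may be structured differently:

```python
import torch
import torch.nn as nn

# Placeholder model; the project's real model class would go here.
model = nn.Linear(512, 512)

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on each GPU and splits each
    # batch across them, gathering outputs on the default device.
    model = nn.DataParallel(model)

model = model.to("cuda")

# A forward pass now runs chunks of the batch on each GPU.
x = torch.randn(8, 512, device="cuda")
out = model(x)
```

My understanding is that DataParallel only splits the batch, so with a batch size of 1 it wouldn't actually pool VRAM across the two cards, which may be why this didn't help me.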