Has anyone managed to get this running on a Mac on any device other than `cpu`? I would like to try the `mps` device, for example using accelerate: https://github.com/huggingface/accelerate

Unfortunately, when I try to use the `mps` device and then re-run setup.py, any program using `mps` segfaults. I'm on an MBP M1 Max.
@mattdesl it's not that simple (the source code is not purely Python). It turns out it uses CUDA directly, for instance in the `scene.cpp` file.
[Upd] Agree, it should be possible to speed up PyTorch via `mps`. I tried, but with this patch I got segfaults, bus errors, etc.:
```diff
 import torch
+MPS_OR_CPU_BACKEND = 'mps' if torch.backends.mps.is_available() else 'cpu'
+
 use_gpu = torch.cuda.is_available()
-device = torch.device('cuda') if use_gpu else torch.device('cpu')
+device = torch.device('cuda') if use_gpu else torch.device(MPS_OR_CPU_BACKEND)
+
 def set_use_gpu(v):
     global use_gpu
     global device
     use_gpu = v
     if not use_gpu:
-        device = torch.device('cpu')
+        device = torch.device(MPS_OR_CPU_BACKEND)

 def get_use_gpu():
     global use_gpu
```
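The patch boils down to a simple fallback order: prefer `cuda`, then `mps`, then `cpu`. A minimal sketch of that selection logic (the `pick_device` helper is hypothetical, not part of the project; the two boolean flags stand in for `torch.cuda.is_available()` and `torch.backends.mps.is_available()` so the logic can be shown without importing torch):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Mirror the diff's fallback order: cuda, then mps, then cpu.

    In real code the two flags would come from
    torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        return 'cuda'
    return 'mps' if mps_available else 'cpu'

# On an M1 Mac: no CUDA, MPS present -> 'mps'
print(pick_device(False, True))
```

Note that even when this picks `mps`, the CUDA kernels compiled by setup.py are untouched, which is consistent with the segfaults described above.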