GPU out of memory #6
Comments
If you want to use a V100 with 16 GB of GPU memory, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256×256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the ensemble model number as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15. However, we didn't test that.
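The kinds of config edits described above can be sketched as follows. This is only an illustration: the field names `dim`, `batch_size`, and `model_num` are taken from the comments in this thread and the linked `experiments/datasetgan_car.json`; the real file contains many more fields, and the exact shape of each value may differ.

```json
{
  "dim": 64,
  "batch_size": 1,
  "model_num": 1
}
```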
So would this imply that the existing checkpoints are unusable? Also, can multiple GPUs be used to avoid this problem? Regardless of the fact that this is mostly research-driven, one ought not expect people to have a 12K GPU lying around ;)
I have changed dim=64, batch_size=1, model_num=1, and run `python run_app.py`, but it is still out of memory, and the "tried to allocate 5.88 GB" message has not changed. Should I retrain the model?
You can use PyTorch half-precision inference, which saves a lot of GPU memory.
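A minimal sketch of what half-precision inference looks like in PyTorch. The small `Sequential` model here is a hypothetical stand-in for the DatasetGAN pixel classifier, not the repo's actual model; the point is only that `.half()` converts parameters to `float16`, halving their memory, and that inputs cast the same way halve activation memory too.

```python
import torch

# Hypothetical stand-in for the DatasetGAN pixel classifier (not the repo's model).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 34),
)

# Convert parameters to fp16 for inference; this halves parameter memory.
model = model.half().eval()
assert all(p.dtype == torch.float16 for p in model.parameters())

# Activations shrink the same way: cast inputs before the forward pass.
feat = torch.randn(1, 512)
print(feat.element_size(), feat.half().element_size())  # 4 bytes vs 2 bytes per element
# On GPU you would run, e.g.:
#   with torch.no_grad():
#       out = model(feat.half().cuda())
```

Note that fp16 can slightly change outputs, so it is worth spot-checking segmentation quality against the fp32 results.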
Could you tell me where in the scripts you made the changes?
Hi,
I only have a V100 with 16 GB, not 32 GB. How can I test on faces, and how should I change xxx.json for my 16 GB GPU?
Thank you very much.