
gpu out of memory #6

Open
sybilWsybil opened this issue Apr 6, 2022 · 5 comments

Comments

@sybilWsybil

Hi,
I only have a V100 with 16 GB, not 32 GB. How can I run face testing, and how should I change xxx.json for my 16 GB GPU?
Thank you very much.

@arieling
Collaborator

arieling commented Apr 7, 2022

If you want to use a V100 with 16 GB of GPU memory, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256×256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the number of ensemble models as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15.

However, we didn't test that.
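
As a rough illustration, such a change might look like the sketch below. The key names "dim" and "model_num" are assumptions based on later comments in this thread and on the linked lines of the config; check the actual file before relying on them.

```python
# A minimal sketch of shrinking the DatasetGAN config for a 16 GB GPU.
# Key names "dim" and "model_num" are assumptions, not verified schema.
import json

path = "experiments/datasetgan_car.json"
with open(path) as f:
    cfg = json.load(f)

cfg["dim"] = 64        # smaller input feature size (the line linked as #L6)
cfg["model_num"] = 1   # fewer ensemble classifiers (the line linked as #L15)

with open(path, "w") as f:
    json.dump(cfg, f, indent=4)
```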

@TSTsankov

> If you want to use a V100 with 16 GB of GPU memory, there are several tricks to reduce memory usage. For example, you can reduce the StyleGAN model size to 256×256, use fewer StyleGAN features and reduce the DatasetGAN input feature size as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L6, or reduce the number of ensemble models as indicated in https://github.com/nv-tlabs/editGAN_release/blob/release_final/experiments/datasetgan_car.json#L15.
>
> However, we didn't test that.

So would this imply that the existing checkpoints are unusable? Also, can multiple GPUs be used to avoid this problem?

Regardless of the fact that this is mostly research-driven, one ought not to expect people to have a $12K GPU lying around ;)

@sybilWsybil
Author

I have changed dim=64, batch_size=1, and model_num=1, and run 'python run_app.py', but it still runs out of memory, and the "tried to allocate 5.88 GB" message has not changed. Should I retrain the model?
Thanks

@silaopi

silaopi commented Jun 8, 2022

You can use PyTorch half-precision inference, which saves a lot of GPU memory.
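
A minimal sketch of what that can look like (not the repo's actual code; the toy generator below just stands in for the loaded StyleGAN model):

```python
import torch
import torch.nn as nn

# Toy stand-in for the StyleGAN generator; the real model is loaded elsewhere.
generator = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 3)
).cuda().eval()
latent = torch.randn(1, 512, device="cuda")

# Option 1: autocast keeps weights in fp32 but runs ops in fp16 where safe.
with torch.no_grad(), torch.cuda.amp.autocast():
    out = generator(latent)

# Option 2: cast the whole model and its inputs to fp16 explicitly.
generator_fp16 = generator.half()
with torch.no_grad():
    out = generator_fp16(latent.half())
```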

@MrTornado24

> You can use PyTorch half-precision inference, which saves a lot of GPU memory.

Could you tell me where in the scripts you made the changes?
