
Hardware requirements #54

Closed
johannes-graeter opened this issue Nov 28, 2018 · 3 comments

@johannes-graeter
Contributor

Hi Simon,

I tried to run the full, finetuned net on my GeForce GTX 980 with 4 GB of memory and immediately ran out of memory. What are the minimum hardware requirements for the net presented in the paper (CSS-ft, I guess :) )? I couldn't find them in there...

Best,

Johannes

@simonmeister
Owner

We used a Titan X with 12 GB, and if I remember correctly we also trained the non-stacked models on an 8 GB GPU; however, I no longer recall whether the stacked ones fit on those as well. You could also take a look at https://github.com/openai/gradient-checkpointing.
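
For future readers, here is a minimal sketch of how that library is typically wired into a TF1 graph-mode training setup, based on its README. The optimizer and loss shown are placeholders, not UnFlow's actual training code:

```python
import tensorflow as tf
import memory_saving_gradients  # from openai/gradient-checkpointing

# Monkey-patch tf.gradients so downstream gradient calls (including the one
# inside Optimizer.minimize) use the memory-saving variant, which recomputes
# intermediate activations during the backward pass instead of storing them all.
tf.__dict__["gradients"] = memory_saving_gradients.gradients_memory

# Placeholder training code -- the existing graph and optimizer stay unchanged:
# optimizer = tf.train.AdamOptimizer(1e-4)
# train_op = optimizer.minimize(loss)
```

The trade-off is extra forward-pass recomputation in exchange for a much smaller activation memory footprint, which is why training slows down somewhat.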

@johannes-graeter
Contributor Author

Worked like a charm (close to black magic ;) ), thanks for the hint!
I can now train the full CSS net on a 4 GB graphics card, and the slowdown is acceptable.

Have you thought about implementing LiteFlowNet (https://github.com/twhui/LiteFlowNet) with the unsupervised UnFlow architecture? In terms of runtime it seems to be great! What are the limits?

@simonmeister
Owner

Sorry, I am not familiar with the details of the paper. I skimmed it and it might be a good fit, but I don't have time to try this.
