
How to run the network in parallel on multiple cards #89

Open
activate-an opened this issue Dec 4, 2020 · 2 comments

Comments

@activate-an

When I try to increase the size of the output image, I get an out-of-memory error. How can I run the network on multiple cards in parallel to achieve a larger output?

@fyw1999

fyw1999 commented Jul 9, 2021

I have run the code on multiple GPUs successfully! You need to wrap the models with DataParallel first. Some changes to the code are still required, because when you use multiple cards a batch is split and distributed across the GPUs; after each part of the batch passes through the models, the parts are merged back into one batch, which causes a lot of bugs in the code. For example, in the RNN model, when different parts of a batch pass through the model, the max value of the cap_lens argument will differ between GPUs.
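A minimal sketch of the wrapping step described above, assuming PyTorch. The `ToyEncoder` module and its dimensions are hypothetical stand-ins for the project's actual models; `nn.DataParallel` splits the input batch along dimension 0 across the visible GPUs and merges the outputs back, which is exactly the behavior that makes per-replica values like `max(cap_lens)` diverge.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one of the project's models (names are assumptions).
class ToyEncoder(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):
        return self.fc(x)

model = ToyEncoder()

if torch.cuda.device_count() > 1:
    # Replicates the module on each GPU; each replica sees only its slice
    # of the batch, so any logic that depends on the whole batch (e.g.
    # max(cap_lens) inside an RNN forward) must be made per-slice safe.
    model = nn.DataParallel(model)

if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(8, 16)
if torch.cuda.is_available():
    x = x.cuda()

out = model(x)
print(tuple(out.shape))  # batch dimension is reassembled: (8, 16)
```

On a single-GPU or CPU-only machine this runs unchanged, since `DataParallel` is only applied when more than one device is present.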

@AudityGhosh

@fyw1999, could you please enlighten us with the specific changes?
