
Can multiple GPUs be utilized with the PyTorch DataParallel class? #1

Open
Tpool1 opened this issue Jan 7, 2023 · 1 comment
Tpool1 commented Jan 7, 2023

Hello,

I am looking to use the 3D_GMIC model with a batch size greater than 1 by using the PyTorch DataParallel class and multiple GPUs. I noticed there are assertions at the beginning of the forward function that expect a batch size of only 1. If anyone has successfully run the model with a larger batch size, any assistance would be greatly appreciated!
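For background on why those assertions interact with DataParallel the way they do: DataParallel scatters the input along dim 0 across the replicas, so each GPU's forward only ever sees roughly batch_size / n_gpus samples. Below is a minimal pure-Python sketch of that scatter/forward/gather flow (the helper names are illustrative, not PyTorch internals, and the batch-size-1 assertion stands in for the one in 3D_GMIC's forward):

```python
# Sketch (no GPUs or torch needed) of how torch.nn.DataParallel handles
# a batch: scatter the input along dim 0 across replicas, run each
# replica's forward on its chunk, then gather the outputs in order.

def scatter(batch, n_devices):
    """Split a batch (list of samples) into near-equal chunks, one per device."""
    chunk = -(-len(batch) // n_devices)  # ceiling division
    return [batch[i:i + chunk] for i in range(0, len(batch), chunk)]

def replica_forward(chunk):
    """Stand-in for the model's forward, mirroring a batch-size-1 assertion."""
    assert len(chunk) == 1, "forward expects a per-replica batch of 1"
    return [x * 2 for x in chunk]  # dummy computation

def data_parallel(batch, n_devices):
    """Scatter, run every replica's forward, gather the results back."""
    outputs = [replica_forward(c) for c in scatter(batch, n_devices)]
    return [y for out in outputs for y in out]

# With 2 "devices" and a total batch of 2, each replica sees a batch of 1,
# so the per-replica assertion still holds.
print(data_parallel([10, 20], n_devices=2))  # → [20, 40]
```

The upshot: with N GPUs, a total batch of N can pass a per-replica batch-size-1 assertion unchanged, while anything larger trips it, so the assertions would need relaxing (or removing) before a genuinely larger per-GPU batch could work.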


aisosalo commented May 8, 2024

Tips from pytorch/pytorch#8637 might come in handy when adjusting the method.
