
Fix Trainer when model is loaded on a different GPU #23792

Merged: 1 commit into main on May 31, 2023

Conversation

sgugger (Collaborator) commented May 26, 2023

What does this PR do?

When a small model is loaded with device_map="auto", it may end up entirely on GPU 1. In that case is_model_parallel is currently set to False (because the model sits on a single device), and later on the Trainer moves the model to GPU 0, which breaks the execution of all the Accelerate hooks.

This PR fixes this by making sure is_model_parallel is set to True when the model is on a single device that is not GPU 0.
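For reference, a minimal sketch of the check described above, assuming the model carries the hf_device_map attribute that Accelerate attaches when loading with device_map="auto"; the helper name and exact logic are illustrative, not the actual Trainer code:

```python
import torch

def infer_is_model_parallel(model, args):
    """Sketch of the device check this PR describes (hypothetical helper)."""
    device_map = getattr(model, "hf_device_map", None)
    if device_map is None:
        return False
    # Ignore weights offloaded to CPU/disk; keep only accelerator devices.
    devices = {d for d in device_map.values() if d not in ("cpu", "disk")}
    if len(devices) > 1:
        # Model is split across several GPUs: genuine model parallelism.
        return True
    if len(devices) == 1:
        # Single device, but possibly not the Trainer's default (GPU 0):
        # report "parallel" so the Trainer does not move the model.
        return torch.device(devices.pop()) != args.device
    return False
```

With a check like this, a model that device_map="auto" placed entirely on cuda:1 is left where the Accelerate hooks expect it instead of being moved to cuda:0.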

@sgugger sgugger requested a review from younesbelkada May 26, 2023 13:33
HuggingFaceDocBuilderDev commented May 26, 2023

The documentation is not available anymore as the PR was closed or merged.

younesbelkada (Contributor) left a comment


Thanks so much for this! LGTM

@sgugger sgugger merged commit 68d53bc into main May 31, 2023
@sgugger sgugger deleted the trainer_mp_devices branch May 31, 2023 11:54
sheonhan pushed a commit to sheonhan/transformers that referenced this pull request Jun 1, 2023
gojiteji pushed a commit to gojiteji/transformers that referenced this pull request Jun 5, 2023
novice03 pushed a commit to novice03/transformers that referenced this pull request Jun 23, 2023
Labels: none yet
Projects: none yet
3 participants