
[fix] Change the condition of ValueError in "convert_checkpoint_from_transformers_to_megatron" #24769

Merged

6 commits merged into huggingface:main on Jul 13, 2023

Conversation

SeongBeomLEE
Contributor

The "target_tensor_model_parallel_size" is related to "num_attention_heads", and the "target_pipeline_model_parallel_size" is related to "num_hidden_layers".

However, the old code had "target_tensor_model_parallel_size" related to "num_hidden_layers".

So we modified the code and added the part about "target_tensor_model_parallel_size".
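A minimal sketch of the corrected checks (the "config"/"args" names here follow the description above and are illustrative; the actual conversion script in transformers may structure this differently):

```python
# Sketch of the corrected validation in convert_checkpoint_from_transformers_to_megatron.
# Tensor parallelism splits attention heads; pipeline parallelism splits layers.
if config.num_attention_heads % args.target_tensor_model_parallel_size != 0:
    raise ValueError(
        f"num_attention_heads ({config.num_attention_heads}) must be divisible by "
        f"target_tensor_model_parallel_size ({args.target_tensor_model_parallel_size})"
    )
if config.num_hidden_layers % args.target_pipeline_model_parallel_size != 0:
    raise ValueError(
        f"num_hidden_layers ({config.num_hidden_layers}) must be divisible by "
        f"target_pipeline_model_parallel_size ({args.target_pipeline_model_parallel_size})"
    )
```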

Thanks!

Commits in this PR:

* fix: half inference error: norm_factor is still torch.float32 after calling model.half(), so it was changed to a registered buffer so that model.half() can cast it to torch.float16 (sketched below)
* [fix] Change the condition of ValueError in convert_checkpoint_from_transformers_to_megatron
* [fix] error wording: layers -> attention heads
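A minimal, self-contained sketch of the norm_factor change from the first commit (the module here is illustrative, not the actual attention implementation in transformers):

```python
import torch
import torch.nn as nn


class ToyAttention(nn.Module):
    """Illustrative module showing why norm_factor is registered as a buffer."""

    def __init__(self, head_dim: int):
        super().__init__()
        # A plain tensor attribute is left untouched by model.half();
        # a registered buffer is cast along with the parameters.
        # persistent=False keeps norm_factor out of the state_dict.
        self.register_buffer(
            "norm_factor",
            torch.sqrt(torch.tensor(head_dim, dtype=torch.float32)),
            persistent=False,
        )


model = ToyAttention(head_dim=64)
print(model.norm_factor.dtype)  # torch.float32
model.half()
print(model.norm_factor.dtype)  # torch.float16 after half()
```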
@amyeroberts
Collaborator

cc @pacman100

Contributor

@pacman100 left a comment


Nice catch! Thank you for the fix.

Collaborator

@amyeroberts left a comment


Thanks for fixing!


HuggingFaceDocBuilderDev commented Jul 13, 2023

The documentation is not available anymore as the PR was closed or merged.

@amyeroberts merged commit 21946a8 into huggingface:main on Jul 13, 2023
Lorenzobattistela pushed a commit to Lorenzobattistela/transformers that referenced this pull request Jul 13, 2023
…transformers_to_megatron" (huggingface#24769)

* fix: half inference error

norm_factor is still torch.float32 after using model.half

So I changed it to register_buffer so I can change it to torch.float16 after using model.half

* fix: Added a variable "persistent=False"

* run make style

* [fix] Change the condition of ValueError
convert_checkpoint_from_transformers_to_megatron

* [fix] error wording
layers -> attention heads
blbadger pushed a commit to blbadger/transformers that referenced this pull request Nov 8, 2023
…transformers_to_megatron" (huggingface#24769)
