
Try to fix training Loss inconsistent after resume from old checkpoint #25872

Merged: 10 commits into huggingface:main on Sep 7, 2023

Conversation

@dumpmemory (Contributor) commented Aug 30, 2023

What does this PR do?

Fixes #25340 (issue)

From my side, the issue seems related to the RandomSampler; I simply re-copied the resume logic from v4.29.2.
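
As a side note, here is a minimal, purely illustrative sketch of the behaviour this fix is about (it is not the actual Trainer code): when training resumes mid-epoch, the RandomSampler must draw from the same RNG stream as the uninterrupted run, and the already-consumed batches must be skipped rather than re-shuffled. The seed value and the batches_already_seen variable below are made up for the example.

import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Illustrative resume logic: reproduce the epoch's shuffle with the checkpointed seed,
# then skip the batches that were already trained on, so the remaining batch order
# (and therefore the loss curve) matches an uninterrupted run.
dataset = TensorDataset(torch.arange(100).float())
generator = torch.Generator().manual_seed(42)  # seed restored from the checkpoint (illustrative)
sampler = RandomSampler(dataset, generator=generator)
dataloader = DataLoader(dataset, sampler=sampler, batch_size=8)

batches_already_seen = 5  # would come from the saved trainer state (illustrative)
for step, batch in enumerate(dataloader):
    if step < batches_already_seen:
        continue  # consume the batch to advance the RNG/iterator, but do not train on it
    # training resumes here with the same sample order as the original run
    break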

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@dumpmemory changed the title from "Try to fix #25340" to "Try to fix https://github.com/huggingface/transformers/issues/25340" on Aug 30, 2023
@dumpmemory changed the title from "Try to fix https://github.com/huggingface/transformers/issues/25340" to "Try to fix training Loss inconsistent after resume from old checkpoint" on Aug 30, 2023
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@amyeroberts (Collaborator)

cc @muellerzr

@muellerzr (Contributor)

Hi @dumpmemory, thanks! Can you do pip install -U -e ".[quality]" and run make style; make quality again? This should fix that failing test.

@dumpmemory (Contributor, Author)

I will check it again.

@dumpmemory (Contributor, Author)

Hi @dumpmemory, thanks! Can you do pip install -U -e ".[quality]" and run make style; make quality again? This should fix that failing test.

Done

@muellerzr (Contributor)

@dumpmemory what does the following show:

pip show black isort ruff

@dumpmemory (Contributor, Author)

pip show black isort ruff

@dumpmemory ➜ /workspaces/transformers (patch-1) $ pip show black isort ruff
Name: black
Version: 23.7.0
Summary: The uncompromising code formatter.
Home-page: 
Author: 
Author-email: Łukasz Langa <[email protected]>
License: MIT
Location: /usr/local/python/3.10.8/lib/python3.10/site-packages
Requires: click, mypy-extensions, packaging, pathspec, platformdirs, tomli
Required-by: 
---
Name: isort
Version: 5.12.0
Summary: A Python utility / library to sort Python imports.
Home-page: https://pycqa.github.io/isort/
Author: Timothy Crosley
Author-email: [email protected]
License: MIT
Location: /usr/local/python/3.10.8/lib/python3.10/site-packages
Requires: 
Required-by: 
---
Name: ruff
Version: 0.0.259
Summary: An extremely fast Python linter, written in Rust.
Home-page: https://github.com/charliermarsh/ruff
Author: Charlie Marsh <[email protected]>
Author-email: Charlie Marsh <[email protected]>
License: MIT
Location: /usr/local/python/3.10.8/lib/python3.10/site-packages
Requires: 
Required-by: 

@muellerzr (Contributor) left a comment

Thanks! Looks good to me, and my tests all pass locally.

@muellerzr (Contributor)

@amyeroberts feel free to merge if it looks good with you

@dumpmemory (Contributor, Author)

@amyeroberts feel free to merge if it looks good with you

I am OK with this PR 😁. Thanks for your support.

@amyeroberts (Collaborator) left a comment

Thanks for working on fixing this! Overall, the change looks OK; however, the logic should be simplified.

@muellerzr what was the reason for removing this logic originally?

src/transformers/trainer.py: 3 review threads (outdated, resolved)
@muellerzr (Contributor)

Originally we had thought Accelerate handled this, but it turns out it does not

@dumpmemory (Contributor, Author) commented Aug 31, 2023

@amyeroberts, please help me check the current version.

@dumpmemory (Contributor, Author) commented Aug 31, 2023

@amyeroberts, can the current version be merged? If there is anything else I need to change, please just tell me.

@muellerzr (Contributor)

@dumpmemory please have a bit of patience; our team works across multiple timezones and has many other PRs and responsibilities to get to besides this one. We'll get to it when we can, so please don't spam :) Thanks

@amyeroberts (Collaborator) left a comment

Thanks for iterating - the code is looking good!

Just a comment on the utility function - we want functions to be as atomic as possible. Once updated we'll be good to merge.

Comment on lines 58 to 63
def check_dataloader_randomsampler(dataloader):
if hasattr(dataloader, "sampler") and isinstance(dataloader.sampler, RandomSampler):
return dataloader.sampler, True
if hasattr(dataloader, "batch_sampler"):
return check_dataloader_randomsampler(dataloader.batch_sampler)
return dataloader.sampler, False
@amyeroberts (Collaborator)

This should just return the sampler, and then the user can choose what they do with the output e.g. check if it's a random sampler. This ensures the function is as versatile as possible and can be used / extended without issue.

Suggested change
def check_dataloader_randomsampler(dataloader):
if hasattr(dataloader, "sampler") and isinstance(dataloader.sampler, RandomSampler):
return dataloader.sampler, True
if hasattr(dataloader, "batch_sampler"):
return check_dataloader_randomsampler(dataloader.batch_sampler)
return dataloader.sampler, False
def get_dataloader_sampler(dataloader):
if hasattr(dataloader, "sampler"):
return dataloader.sampler
if hasattr(dataloader, "batch_sampler"):
return get_dataloader_sampler(dataloader.batch_sampler)

@dumpmemory (Contributor, Author) commented Sep 1, 2023

As I found in #25862, checking hasattr(dataloader, "sampler") might not be enough: after the accelerate.prepare function, dataloader.sampler changes from a RandomSampler to torch.utils.data.sampler.SequentialSampler. I will modify the code to just return the sampler.
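
To make the approach discussed in this thread concrete, below is a rough sketch of a sampler-retrieval helper along these lines: recurse into batch_sampler first (so the accelerate.prepare-wrapped case, where the top-level sampler becomes a SequentialSampler, is still handled), return whatever sampler is found, and let the caller decide what to do with it. This is an approximation based on the comments and commit messages in this PR, not necessarily the exact code that was merged.

import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset


def get_dataloader_sampler(dataloader):
    # Recurse into batch_sampler first: a dataloader wrapped by accelerate.prepare() can
    # report a SequentialSampler at the top level while the real (random) sampler sits
    # inside the batch sampler. Simply return whatever sampler is found and let the
    # caller decide what to do with it (e.g. isinstance(sampler, RandomSampler)).
    if hasattr(dataloader, "batch_sampler") and dataloader.batch_sampler is not None:
        return get_dataloader_sampler(dataloader.batch_sampler)
    if hasattr(dataloader, "sampler"):
        return dataloader.sampler


# Quick sanity check on a plain DataLoader: the RandomSampler is recovered through the
# intermediate BatchSampler.
dataset = TensorDataset(torch.arange(10).float())
loader = DataLoader(dataset, sampler=RandomSampler(dataset), batch_size=2)
assert isinstance(get_dataloader_sampler(loader), RandomSampler)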

@amyeroberts (Collaborator) left a comment

Thanks for iterating on this. Just one last comment on the structure of get_dataloader_sampler.

src/transformers/trainer_pt_utils.py: review thread (outdated, resolved)
@dumpmemory (Contributor, Author)

@amyeroberts How about the current version? I have checked the sampler in the final if statement.

@amyeroberts (Collaborator) left a comment

@dumpmemory Could you explain in some more detail why the suggested implementation of get_dataloader_sampler isn't the one being used? For the current diff, it's not clear why some of the additional logic, e.g. checking isinstance, is added.

src/transformers/trainer_pt_utils.py: review thread (outdated, resolved)
@dumpmemory (Contributor, Author)

Thanks for your reviews. I think it is ready now. Thanks for your kind help.

@amyeroberts (Collaborator) left a comment

Thanks for iterating!

@muellerzr (Contributor) left a comment

Great work!

@amyeroberts (Collaborator)

@dumpmemory There's a currently failing test (which I believe is unrelated to your PR). Could you rebase on main to include any recent updates on this branch and trigger a re-run of the CI?

@dumpmemory (Contributor, Author)

@dumpmemory There's a currently failing test (which I believe is unrelated to your PR). Could you rebase on main to include any recent updates on this branch and trigger a re-run of the CI?

OK, I will do that.

@amyeroberts merged commit fb7d246 into huggingface:main on Sep 7, 2023
parambharat pushed a commit to parambharat/transformers that referenced this pull request Sep 26, 2023
Try to fix training Loss inconsistent after resume from old checkpoint (huggingface#25872)

* fix loss inconsistent after resume  huggingface#25340

* fix typo

* clean code

* reformatted code

* adjust code according to comments

* adjust check_dataloader_randomsampler location

* return sampler only

* handle sampler is None

* Update src/transformers/trainer_pt_utils.py

thanks @amyeroberts

Co-authored-by: amyeroberts <[email protected]>

---------

Co-authored-by: amyeroberts <[email protected]>
blbadger pushed a commit to blbadger/transformers that referenced this pull request Nov 8, 2023
EduardoPach pushed a commit to EduardoPach/transformers that referenced this pull request Nov 18, 2023
Linked issue: Training Loss inconsistent after resume from old checkpoint (#25340)
4 participants