[Fix] fix bug in load_model_state_dict of BaseStrategy #1447

Conversation

SCZwangxiao
Contributor

Motivation

This PR fixes a bug in the load_model_state_dict() method of BaseStrategy, caused by arguments being passed positionally instead of by keyword.

Specifically, in load_model_state_dict() the call

    _load_checkpoint_to_model(model, state_dict, strict, revise_keys)

passes revise_keys positionally, so it binds to the logger parameter of _load_checkpoint_to_model() instead of revise_keys:

    def _load_checkpoint_to_model(model,
                                  checkpoint,
                                  strict=False,
                                  logger=None,
                                  revise_keys=[(r'^module\.', '')]):
        # get state_dict from checkpoint
        if 'state_dict' in checkpoint:
            ...
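Because Python binds positional arguments in order, the value intended for revise_keys fills the logger slot and revise_keys silently keeps its default. A minimal standalone sketch of the failure mode (toy function and regexes for illustration only, not MMEngine code):

```python
# Toy function with the same parameter order as _load_checkpoint_to_model.
def load(model, checkpoint, strict=False, logger=None,
         revise_keys=[(r'^module\.', '')]):
    return strict, logger, revise_keys

custom_keys = [(r'^backbone\.', '')]  # hypothetical revise_keys value

# Buggy positional call: custom_keys binds to `logger`,
# and `revise_keys` falls back to its default.
print(load('model', {}, True, custom_keys))
# -> (True, [('^backbone\\.', '')], [('^module\\.', '')])
```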

Modification

Pass strict and revise_keys to _load_checkpoint_to_model() as keyword arguments so that each value reaches the intended parameter; see the sketch below.
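A sketch of the resulting call in load_model_state_dict() (the actual diff may format it differently):

```python
_load_checkpoint_to_model(
    model, state_dict, strict=strict, revise_keys=revise_keys)
```

With keyword arguments, the values stay bound to the intended parameters even if the optional parameters of _load_checkpoint_to_model() are reordered later.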

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDetection or MMPretrain.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

@CLAassistant

CLAassistant commented Nov 30, 2023

CLA assistant check
All committers have signed the CLA.

@SCZwangxiao force-pushed the SCZwangxiao/strategy_load_model_state_dict branch from b6a4f00 to 39703cc on November 30, 2023 06:47
@HAOCHENYE
Collaborator

LGTM 😆 !!!

@zhouzaida merged commit efcd364 into open-mmlab:main on Dec 23, 2023 (10 of 13 checks passed).