[BUG] finetuning property fitting with multi-dimensional data causes error #4108
Comments
The model …
I am using the newest version of the devel branch, which includes the new …
Which commit do you use? 46632f9 does not contain …
It should be #3867 …
It looks like a bug in finetune, but not related to the property fitting.
This bug appears when the finetune task's label is multi-dimensional. A minimal reproduction of the mismatch is sketched below.
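The mismatch is easy to reproduce with plain PyTorch, independent of DeePMD-kit. A minimal sketch, with the shapes taken from the error quoted in the fix below and hypothetical `ParameterDict` stand-ins for the real atomic models:

```python
import torch

# Shapes from the error log: a pretrained energy model stores out_bias
# as [1, 118, 1], while a property head with numb_dos = 250 expects
# [1, 118, 250]. These modules are stand-ins for the real atomic models.
pretrained = torch.nn.ParameterDict(
    {"out_bias": torch.nn.Parameter(torch.zeros(1, 118, 1))}
)
finetune = torch.nn.ParameterDict(
    {"out_bias": torch.nn.Parameter(torch.zeros(1, 118, 250))}
)

try:
    # Shape mismatches are reported even with strict=False, so simply
    # loading the old checkpoint into the new model cannot work.
    finetune.load_state_dict(pretrained.state_dict(), strict=False)
except RuntimeError as err:
    print(err)  # size mismatch for out_bias: [1, 118, 1] vs [1, 118, 250]
```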
…nsional data causes error (#4145)

Fixes issue #4108. A pretrained model labeled with energy has a one-dimensional `out_bias`. If we finetune a dos/polar/dipole/property model from this pretrained model, the finetuning model's `out_bias` is multi-dimensional (example: numb_dos = 250), and an error occurs:
`RuntimeError: Error(s) in loading state_dict for ModelWrapper:`
`size mismatch for model.Default.atomic_model.out_bias: copying a param with shape torch.Size([1, 118, 1]) from checkpoint, the shape in current model is torch.Size([1, 118, 250]).`
`size mismatch for model.Default.atomic_model.out_std: copying a param with shape torch.Size([1, 118, 1]) from checkpoint, the shape in current model is torch.Size([1, 118, 250]).`
When using a new fitting net, the old `out_bias` is useless because the new bias is recomputed later in the code, so the old `out_bias` does not need to be loaded when finetuning with a new fitting.

Summary by CodeRabbit
- **New Features**
  - Enhanced parameter collection for fine-tuning, refining the criteria for parameter retention.
  - Introduced a model checkpoint file for saving and resuming training states, facilitating iterative development.
- **Tests**
  - Added a new test class to validate the training and fine-tuning processes, ensuring consistent model performance across configurations.

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
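The fix described above amounts to dropping the stale statistics from the checkpoint before loading, and then loading non-strictly so the removed keys are tolerated. A minimal sketch of the idea, not the actual DeePMD-kit patch; the function name, the `new_fitting` flag, and the key suffixes are assumptions for illustration:

```python
import torch

def filter_pretrained_state(state_dict, new_fitting):
    """Drop stale bias/std statistics when finetuning with a new fitting head.

    The function name, the `new_fitting` flag, and the key suffixes are
    illustrative assumptions; the real change lives in DeePMD-kit's
    finetune parameter collection (#4145).
    """
    if not new_fitting:
        return state_dict
    stale = ("out_bias", "out_std")  # recomputed later from the new data
    return {k: v for k, v in state_dict.items() if not k.endswith(stale)}

# With the stale [1, 118, 1] statistics filtered out, strict=False loading
# tolerates the now-missing keys and the new model keeps its own shapes.
pretrained_state = {
    "out_bias": torch.zeros(1, 118, 1),
    "out_std": torch.ones(1, 118, 1),
}
finetune = torch.nn.ParameterDict(
    {"out_bias": torch.nn.Parameter(torch.zeros(1, 118, 250))}
)
finetune.load_state_dict(
    filter_pretrained_state(pretrained_state, new_fitting=True), strict=False
)
```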
Bug summary
I have tested the new property fitting model in the fine-tuning procedure with the pre-trained OpenLAM_2.2.0_27heads_beta3.pt.
The dataset I used is in the examples folder and has a property dimension of 3. This raised errors about a tensor size mismatch; see the Error Log below.
DeePMD-kit Version
DeePMD-kit v3.0.0a1.dev320+g46632f90
Backend and its version
torch 2.4.1+cu121
How did you download the software?
Built from source
Input Files, Running Commands, Error Log, etc.
Commands:
dp --pt train input_finetune.json --finetune OpenLAM_2.2.0_27heads_beta3.pt
Input File:
input_finetune.json
The data files I used are in
examples/property/data
Error Log: (collapsed in this export; see the size-mismatch `RuntimeError` quoted in the fix commit above)
Steps to Reproduce
Run the above command with the datasets and the input file.
Further Information, Files, and Links
No response