[Feature Request] Consistent user experience for finetuning and init-model #3747
Comments
github-merge-queue bot pushed a commit that referenced this issue on Jun 13, 2024:
Fix #3747. Fix #3455.

- Consistent fine-tuning with init-model: in PT, fine-tuning now includes three steps: 1. change model params (for multitask fine-tuning, random fitting, and type-related params), 2. init-model, 3. change bias.
- By default, the user input is used during fine-tuning instead of being overwritten by the input stored in the pre-trained model. By adding `--use-pretrain-script`, the user can instead use the input stored in the pre-trained model.
- `type_map` now uses the one in the user input instead of being overwritten by the one in the pre-trained model.

Note:
1. After discussion with @wanghan-iapcm, **the behavior of fine-tuning in TF is kept as before**. If needed in the future, it can be implemented then.
2. Fine-tuning using DOSModel in PT needs to be fixed. (An issue will be opened; it may be fixed in another PR, cc @anyangml.)

## Summary by CodeRabbit

- **New Features**
  - Added support for using model parameters from a pretrained model script.
  - Introduced new methods to handle type-related parameters and fine-tuning configurations.
- **Documentation**
  - Updated documentation to clarify the model section requirements and the new `--use-pretrain-script` option for fine-tuning.
- **Refactor**
  - Simplified and improved the readability of key functions related to model training and fine-tuning.
- **Tests**
  - Added new test methods and utility functions to ensure consistency of type mapping and parameter updates.

---------

Signed-off-by: Duo <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Han Wang <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
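A minimal sketch of the resulting command-line usage, assuming a PyTorch-backend checkpoint named `pretrained.pt` and a user-written `input.json` (file names are illustrative, not from this PR):

```sh
# Default after this fix: the model section and type_map in input.json
# take effect and are no longer overwritten by the pre-trained model.
dp --pt train input.json --finetune pretrained.pt

# Opt in to the previous behavior: reuse the model parameters stored in
# the pre-trained model's training script.
dp --pt train input.json --finetune pretrained.pt --use-pretrain-script
```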
github-project-automation bot moved this from Backlog to Done in DeePMD-3.0.0 beta release on Jun 13, 2024.
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this issue on Sep 18, 2024.
Summary
Consistent user experience for finetuning and init-model
Detailed Description
Currently, finetuning uses the model structure defined in the pretrained model, ignoring the one in `input.json`. In comparison, training with init-model uses the model structure defined in `input.json`, and any inconsistency with the init model raises an exception. I suggest a more consistent user experience for the two modes.
Further Information, Files, and Links
No response
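For illustration, a hedged sketch of the two invocations whose contrast is described above (file names are hypothetical; the flags follow the `dp` CLI):

```sh
# init-model: the model structure is taken from input.json; a mismatch
# with the restored checkpoint raises an exception.
dp train input.json --init-model model.ckpt

# finetune (behavior at the time of this request): the model structure
# in input.json is ignored in favor of the pre-trained model's.
dp train input.json --finetune pretrained.pb
```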