feat(pt): consistent fine-tuning with init-model #3803
Conversation
Walkthrough

The updates primarily enhance the fine-tuning process in DeePMD-kit by allowing users to take model parameters from a pretrained model script instead of entering them manually. Additionally, the changes address issues related to type mapping during fine-tuning, ensuring the user-supplied `type_map` is respected rather than being overwritten by the pre-trained model.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant MainParser
    participant Trainer
    participant FinetuneRuleItem
    participant Descriptor
    User->>MainParser: Run with --use-pretrain-script
    MainParser->>Trainer: Initialize with pretrained model parameters
    Trainer->>FinetuneRuleItem: Apply fine-tuning rules
    FinetuneRuleItem->>Descriptor: Update type maps and statistics
    Descriptor-->>FinetuneRuleItem: Confirm updates
    FinetuneRuleItem-->>Trainer: Return updated model
    Trainer-->>User: Provide finetuned model
```
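The pipeline in the diagram can be read as three ordered steps (change model params, init-model, change bias; see the PR description later in this thread). The following is a schematic Python sketch of that ordering only; the function names and bodies are placeholders, not the actual deepmd.pt implementation.

```python
# Schematic sketch of the three-step fine-tuning order; names are illustrative,
# not the deepmd.pt API.
from typing import Any


def change_model_params(config: dict, pretrained: Any) -> dict:
    """Step 1: adjust model params (e.g. random fitting and type-related
    params for multitask fine-tuning), keeping the user's type_map."""
    return config


def init_model(config: dict, pretrained: Any) -> Any:
    """Step 2: init-model — load the matching weights from the pre-trained checkpoint."""
    return pretrained


def change_bias(model: Any, config: dict) -> Any:
    """Step 3: recompute the output bias statistics for the fine-tuning data."""
    return model


def finetune(config: dict, pretrained: Any) -> Any:
    config = change_model_params(config, pretrained)
    model = init_model(config, pretrained)
    return change_bias(model, config)
```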
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##            devel    #3803      +/-   ##
==========================================
+ Coverage   82.66%   82.70%   +0.03%
==========================================
  Files         517      517
  Lines       49724    50141     +417
  Branches     2984     2984
==========================================
+ Hits        41105    41467     +362
- Misses       7709     7764      +55
  Partials      910      910
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 19
Outside diff range and nitpick comments (9)
deepmd/pt/infer/inference.py (1)

Line range hint 14-14: Consider adding error handling for file-not-found or corrupted model checkpoint scenarios.
doc/train/finetuning.md (1)

Line range hint 11-11: Standardize the terminology and fix grammatical issues.

- Standardize the use of "pretrain" vs. "pre-train" to maintain consistency throughout the document.
- Add a comma after "Recently" in line 11 for clarity.
- Correct the word "multitask" to "multi-task" where applicable.
- Fix the missing comma before "which" in line 96.

Also applies to: 19-19, 28-28, 35-35, 62-62, 90-90, 92-92, 96-96, 97-97, 114-114
deepmd/dpmodel/descriptor/se_e2_a.py (1)

267-274: Clarify the purpose of the `update_type_params` method.

Consider adding a comment or expanding the docstring to explain that this method is a placeholder and outline any plans for its future implementation. This will help maintainers and other developers understand the current state and future expectations.
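One way such a placeholder could document itself is sketched below; the class context and method signature are assumptions for illustration, not the actual se_e2_a.py code.

```python
# Illustrative only: signature and class body are assumptions, not the actual
# DescrptSeA code in deepmd/dpmodel/descriptor/se_e2_a.py.
class DescrptSeA:
    def update_type_params(self, state_dict: dict, mapping_index: list) -> dict:
        """Update type-related parameters when the type map changes.

        Placeholder: this descriptor currently has no type-embedding
        parameters to remap, so the method intentionally raises until a
        real implementation is added.
        """
        raise NotImplementedError(
            "update_type_params is not implemented for DescrptSeA yet."
        )
```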
source/tests/pt/test_training.py (1)

38-47: Enhance robustness of file cleanup in `tearDown`.

Consider adding error handling in the `tearDown` method to gracefully manage exceptions that may occur during file deletion. This can prevent the test suite from failing due to issues unrelated to the test logic.

Also applies to: 51-67
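A hedged sketch of such a `tearDown`; the file names and layout are illustrative rather than copied from test_training.py.

```python
# Illustrative tearDown with defensive cleanup; the artifact list is hypothetical.
import os
import shutil
import unittest


class TestTraining(unittest.TestCase):
    def tearDown(self):
        for path in ("model.pt", "lcurve.out", "checkpoint"):
            try:
                if os.path.isdir(path):
                    shutil.rmtree(path)
                elif os.path.isfile(path):
                    os.remove(path)
            except OSError as e:
                # Log and continue so cleanup problems don't fail the test run.
                print(f"tearDown: could not remove {path}: {e}")
```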
deepmd/pt/model/descriptor/se_a.py (1)

168-193: Consider adding more inline comments in the `forward` method of `DescrptBlockSeA` to explain the tensor operations, which will enhance readability and maintainability.
deepmd/main.py (1)

258-261: Add documentation for the new `--use-pretrain-script` argument.

It would be beneficial to include a brief explanation in the documentation about when and why to use the `--use-pretrain-script` option, especially since it's specific to the PyTorch backend.
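For reference, a hedged sketch of how such a flag could be declared and documented with argparse; the parser name and help wording are illustrative and not copied from deepmd/main.py.

```python
# Illustrative argparse declaration; not the actual deepmd/main.py code.
import argparse

parser_train = argparse.ArgumentParser(prog="dp train")
parser_train.add_argument(
    "--use-pretrain-script",
    action="store_true",
    default=False,
    help=(
        "(PyTorch backend only) When fine-tuning, take the model section "
        "from the pre-trained model's training script instead of the user "
        "input; the user-provided type_map is still respected."
    ),
)
```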
deepmd/dpmodel/descriptor/dpa1.py (1)

368-375: Ensure proper documentation for the `update_type_params` method.

It would be beneficial to provide a more detailed docstring for the `update_type_params` method, explaining the parameters and the expected behavior, especially since it raises `NotImplementedError`.
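For instance, a NumPy-style docstring along these lines could make the contract explicit; the parameter names are assumptions rather than the actual dpa1.py signature.

```python
# Hypothetical docstring sketch for a method like this; parameter names are
# illustrative, not the real dpa1.py signature.
def update_type_params(self, state_dict, mapping_index, prefix=""):
    """Remap type-related parameters after the type map changes.

    Parameters
    ----------
    state_dict : dict
        Parameters of the pre-trained descriptor.
    mapping_index : list of int
        Index of each new type in the pre-trained type map.
    prefix : str
        Key prefix of this descriptor inside ``state_dict``.

    Raises
    ------
    NotImplementedError
        Type-parameter remapping is not supported for this descriptor yet.
    """
    raise NotImplementedError
```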
deepmd/pt/train/training.py (2)

Line range hint 34-465: Consider refactoring the `Trainer` constructor to improve readability and maintainability.

The constructor of the `Trainer` class is quite lengthy and handles multiple aspects of the training setup. It would be beneficial to break down this method into smaller, more manageable functions. This can improve readability and make the code easier to maintain and test.
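A hedged sketch of the kind of decomposition meant here; the helper names are invented, and the real constructor takes many more arguments.

```python
# Schematic only: helper names are hypothetical and the actual Trainer in
# deepmd/pt/train/training.py is far more involved.
class Trainer:
    def __init__(self, config: dict, training_data, validation_data=None):
        self.config = config
        self._setup_data(training_data, validation_data)
        self._setup_model()
        self._setup_optimizer()

    def _setup_data(self, training_data, validation_data):
        self.training_data = training_data
        self.validation_data = validation_data

    def _setup_model(self):
        self.model = None  # build the model from self.config["model"]

    def _setup_optimizer(self):
        self.optimizer = None  # build the optimizer from self.config["learning_rate"]
```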
Line range hint 561-1196: Optimize the training loop for performance and correctness.

The training loop method is critical for the performance of the training process. Consider optimizing the data handling and computation steps to improve efficiency. Additionally, ensure that all tensor operations are correctly managed to avoid memory leaks and ensure computational correctness.
Actionable comments posted: 21
Outside diff range and nitpick comments (7)
doc/train/finetuning.md (6)

Line range hint 11-11: Add a comma after "Recently" for clarity. Consider revising the sentence to: "Recently, the emerging of methods such as [DPA-1]..."

Line range hint 19-35: Ensure consistent use of the term "pre-trained". The document inconsistently uses "pretrain" and "pre-trained". It's important to maintain consistency to avoid confusion. Consider using "pre-trained" throughout the document.

Line range hint 54-54: Clarify the conjunction in the sentence. The sentence "In PyTorch version, we have introduced an updated, more adaptable approach to fine-tuning." might be clearer with "and" instead of "an". Consider revising to: "In the PyTorch version, we have introduced an updated and more adaptable approach to fine-tuning."

Line range hint 90-114: Use "multitask" as one word. The document inconsistently uses "multi-task" and "multitask". For consistency, consider using "multitask" as one word throughout the document.

Line range hint 96-96: Add a comma after "multitask". Consider revising the sentence to: "For fine-tuning using this multitask, pre-trained model..."

Line range hint 133-133: Correct the unpaired quotation mark. There appears to be an unpaired quotation mark in the sentence. Consider revising to ensure proper pairing of quotation marks.
source/tests/pt/test_finetune.py (1)

Line range hint 95-154: Consider adding more detailed comments explaining the steps in the `test_finetune_change_out_bias` method, especially for the complex tensor manipulations and assertions.
Actionable comments posted: 2
(I realize that the following behavior is all expected, and you can ignore this comment....) There is an obvious problem with this design when I don't use […]. For example, if […]. Singletask finetuning from a multitask model:
Actionable comments posted: 2
Outside diff range and nitpick comments (3)
doc/train/finetuning.md (3)

Line range hint 11-11: Consider adding a comma after "Recently" for clarity.

```diff
- Recently the emerging of methods such as [DPA-1]...
+ Recently, the emerging of methods such as [DPA-1]...
```

Line range hint 54-54: Correct the grammatical error in "an updated".

```diff
- In PyTorch version, we have introduced an updated, more adaptable approach...
+ In the PyTorch version, we have introduced an updated, more adaptable approach...
```

Line range hint 134-134: Correct the typographical error in the documentation.

```diff
- ...put.json` should appear as follows ("..." means copied from input script of pre-t...
+ ...put.json` should appear as follows ("..." means copied from the input script of the pre-t...
```
Fix #3747. Fix #3455.

- Consistent fine-tuning with init-model: in pt, fine-tuning now consists of three steps: 1. change model params (for multitask fine-tuning, random fitting and type-related params), 2. init-model, 3. change bias.
- By default, fine-tuning uses the user input instead of being overwritten by the script stored in the pre-trained model. When adding `--use-pretrain-script`, the user can use the model section from the pre-trained model.
- `type_map` now follows the user input instead of being overwritten by the pre-trained model.

Note:

1. After discussion with @wanghan-iapcm, **the behavior of fine-tuning in TF is kept as before**. If needed in the future, it can be implemented then.
2. Fine-tuning using DOSModel in PT needs to be fixed. (An issue will be opened; it may be fixed in another PR, cc @anyangml.)

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit

- **New Features**
  - Added support for using model parameters from a pretrained model script.
  - Introduced new methods to handle type-related parameters and fine-tuning configurations.
- **Documentation**
  - Updated documentation to clarify the model section requirements and the new `--use-pretrain-script` option for fine-tuning.
- **Refactor**
  - Simplified and improved the readability of key functions related to model training and fine-tuning.
- **Tests**
  - Added new test methods and utility functions to ensure consistency of type mapping and parameter updates.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

--------

Signed-off-by: Duo <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Han Wang <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
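As a rough illustration of the `--use-pretrain-script` behavior described above, the sketch below reuses the model section stored in a pre-trained checkpoint while keeping the user's `type_map`; the checkpoint key used is an assumption, not the actual deepmd.pt serialization format.

```python
# Hypothetical sketch: reuse the "model" section from a pre-trained checkpoint
# while keeping the user's type_map. The "model_params" key is an assumed
# checkpoint layout, not the real deepmd.pt format.
import torch


def merge_model_section(user_input: dict, ckpt_path: str, use_pretrain_script: bool) -> dict:
    if not use_pretrain_script:
        # Default since this PR: the user input is used as-is.
        return user_input
    state = torch.load(ckpt_path, map_location="cpu")
    pretrain_script = state["model_params"]  # assumed key
    merged = dict(user_input)
    merged["model"] = dict(pretrain_script["model"])
    # type_map still follows the user input rather than the pre-trained model.
    merged["model"]["type_map"] = user_input["model"]["type_map"]
    return merged
```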