
feat(pt/tf): add bias changing param/interface #3933

Merged
merged 8 commits into deepmodeling:devel on Jul 2, 2024

Conversation

Collaborator

@iProzd iProzd commented Jun 30, 2024

Add bias changing param/interface

For pt/tf, add training/change_bias_after_training to change the output bias once after training.

For pt, add a separate command change-bias to change the output bias of a trained model (pt/pth, multi-task/single-task) for specific data:

dp change-bias model.pt -s data -n 10 -m change

Unit tests for this feature are still under consideration.
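
For context, here is a minimal sketch of where the new training/change_bias_after_training flag sits in an input script; the flag name comes from this PR, while the surrounding keys and values are only illustrative:

```json
{
  "training": {
    "numb_steps": 100000,
    "change_bias_after_training": true
  }
}
```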

Summary by CodeRabbit

  • New Features

    • Added a new subcommand change-bias to adjust model output bias in the PyTorch backend.
    • Introduced test cases for changing model biases via new test suite.
  • Documentation

    • Added documentation for the new change-bias command, including usage and options.
    • Updated index.rst to include a new entry for change-bias under the Model section.
  • Bug Fixes

    • Adjusted data handling in make_stat_input to limit processing to a specified number of batches.
  • Refactor

    • Restructured training configuration to include the parameter change_bias_after_training.
    • Modularized data requirement handling and bias adjustment functions.

Contributor

coderabbitai bot commented Jun 30, 2024

Walkthrough

The changes introduce a new feature to the deepmd framework that allows modifying the output bias of trained models using PyTorch. This feature adds a new subcommand change-bias to the main parser and corresponding logic in the underlying code to handle bias adjustment using statistics from input data. Additionally, comprehensive testing and documentation are included to ensure proper functionality and provide user guidance.

Changes

| File | Change Summary |
| --- | --- |
| deepmd/main.py | Added a new subcommand change-bias to modify model output bias based on input data in the PyTorch backend. |
| deepmd/pt/entrypoints/main.py | Introduced change_bias(FLAGS) function to manage bias adjustment in models, including necessary imports for functionality. |
| deepmd/pt/loss/ener.py | Modified the logic for self.has_gf assignment by removing the or inference condition. |
| deepmd/pt/train/training.py | Added change_bias_after_training configuration, refactored data requirement handling, and restructured bias adjustment logic. |
| deepmd/pt/utils/stat.py | Modified make_stat_input function to limit processed batches based on specified nbatches (see the sketch after this table). |
| deepmd/tf/train/trainer.py | Added change_bias_after_training parameter and logic to adjust bias values post-training based on statistical analysis. |
| deepmd/utils/argcheck.py | Added a new argument and documentation for change_bias_after_training and updated descriptions for existing arguments. |
| doc/model/change-bias.md | Introduced documentation for the new dp --pt change-bias command to adjust model output bias. |
| doc/model/index.rst | Updated to include a new entry for change-bias under the Model section. |
| source/tests/pt/test_change_bias.py | Introduced test cases for the change_bias functionality, including methods to adjust bias based on data statistics or user-defined values. |
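
To make the make_stat_input change concrete, here is a hedged sketch of the batch-capping behavior; the nbatches name comes from the summary above, while the simplified signature and body are illustrative, not the PR's code:

```python
def make_stat_input(dataloaders, nbatches):
    """Simplified sketch: draw at most `nbatches` batches per data system."""
    all_stat = []
    for dataloader in dataloaders:
        iterator = iter(dataloader)
        # Cap the number of batches actually processed for statistics.
        numb_batches = min(nbatches, len(dataloader))
        all_stat.append([next(iterator) for _ in range(numb_batches)])
    return all_stat
```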

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant User
  participant CLI as CLI (deepmd/main.py)
  participant EntryPoint as EntryPoint (deepmd/pt/entrypoints/main.py)
  participant Training as Training (deepmd/pt/train/training.py)
  participant ModelWrapper as ModelWrapper (deepmd/pt/train/wrapper.py)
  participant StatUtil as StatUtil (deepmd/pt/utils/stat.py)

  User->>CLI: dp --pt change-bias --config config.json
  CLI->>EntryPoint: Parse and hand over control to `change_bias` function
  EntryPoint->>Training: Initialize training with change_bias_after_training
  Training->>ModelWrapper: Load model and data
  ModelWrapper->>StatUtil: Compute data statistics
  StatUtil->>ModelWrapper: Return statistics
  ModelWrapper->>Training: Adjust bias based on statistics
  Training->>EntryPoint: Save updated model
  EntryPoint->>User: Model bias updated and saved
```

The sequence diagram shows how the new change-bias subcommand updates a model's output bias in the deepmd framework: the user runs the command, the components above compute statistics from the input data, adjust the bias accordingly, and save the updated model.


Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between cb02da8 and 2d694b4.

Files selected for processing (1)
  • doc/model/index.rst (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • doc/model/index.rst



codecov bot commented Jun 30, 2024

Codecov Report

Attention: Patch coverage is 73.96450% with 44 lines in your changes missing coverage. Please review.

Project coverage is 82.84%. Comparing base (20aeaf8) to head (2d694b4).
Report is 113 commits behind head on devel.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| deepmd/pt/train/training.py | 76.00% | 18 Missing ⚠️ |
| deepmd/pt/entrypoints/main.py | 74.24% | 17 Missing ⚠️ |
| deepmd/tf/train/trainer.py | 18.18% | 9 Missing ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##            devel    #3933      +/-   ##
==========================================
- Coverage   82.88%   82.84%   -0.04%     
==========================================
  Files         520      520              
  Lines       50679    50787     +108     
  Branches     3015     3015              
==========================================
+ Hits        42003    42074      +71     
- Misses       7740     7775      +35     
- Partials      936      938       +2     
```


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Outside diff range and nitpick comments (16)
deepmd/pt/entrypoints/main.py (4)

390-391: Ensure correct file extension handling.

Consider using a more robust method to handle file extensions to avoid potential errors.

file_extension = FLAGS.INPUT.split(".")[-1]
if file_extension not in ["pt", "pth"]:
    raise RuntimeError("The model provided must be a checkpoint file with a .pt extension or a frozen model with a .pth extension")

406-409: Use a more descriptive error message.

Provide more context in the error message to help users understand the issue.

raise RuntimeError(
    f"The model provided must be a checkpoint file with a .pt extension or a frozen model with a .pth extension. Provided file: {FLAGS.INPUT}"
)

416-421: Improve assertion error messages.

Provide more informative error messages for assertions to aid debugging.

assert model_branch is not None, "For multitask model, the model branch must be set!"
assert model_branch in model_params["model_dict"], (
    f"For multitask model, the model branch must be in the 'model_dict'! Available options are: {list(model_params['model_dict'].keys())}."
)

437-441: Combine data requirements for clarity.

Combine the data requirement assignment into a single line for better readability.

data_requirement = mock_loss.label_requirement + training.get_additional_data_requirement(model_to_change)
deepmd/pt/utils/stat.py (1)

Line range hint 127-127: Remove unused variable f.

The variable f is assigned but never used.

- with h5py.File(stat_file_path_single, "w") as f:
+ with h5py.File(stat_file_path_single, "w"):
deepmd/main.py (1)

Line range hint 114-114: Use key not in dict instead of key not in dict.keys().

Simplify the condition to improve readability and performance.

- if default_backend not in BACKEND_TABLE.keys():
+ if default_backend not in BACKEND_TABLE:
deepmd/tf/train/trainer.py (3)

Line range hint 151-155: Simplify nested if-statements.

Use a single if statement instead of nested if statements.

-        if self.mixed_prec is not None:
-            if (
-                self.mixed_prec["compute_prec"] not in ("float16", "bfloat16")
-                or self.mixed_prec["output_prec"] != "float32"
-            ):
+        if self.mixed_prec is not None and (
+            self.mixed_prec["compute_prec"] not in ("float16", "bfloat16")
+            or self.mixed_prec["output_prec"] != "float32"
+        ):

Line range hint 371-371: Use context handler for opening files.

Use with open(...) as ...: so each file is closed once its suite finishes, for example:

-            fp = open(self.disp_file, "w")
+            with open(self.disp_file, "w") as fp:

Also applies to: 378-378, 387-387, 392-392, and 420-420 (the last occurrence opens the file in append mode, "a").


Line range hint 464-464: Remove unused variables.

Remove the assignments to unused variables and drop the unused exception binding:

-        tb_valid_writer = None
-        fitting_key = None
-        test_time = 0
-        except FileNotFoundError as e:
+        except FileNotFoundError:

Also applies to: 490-490, 550-550, 822-822

deepmd/pt/train/training.py (6)

Line range hint 294-294: Use config.get("learning_rate_dict") instead of config.get("learning_rate_dict", None).

Replace config.get("learning_rate_dict", None) with config.get("learning_rate_dict").

- if self.multi_task and config.get("learning_rate_dict", None) is not None:
+ if self.multi_task and config.get("learning_rate_dict") is not None:

Also applies to: 388-388


Line range hint 486-486: Use key in dict instead of key in dict.keys().

Remove .keys().

- missing_keys = [key for key in self.model_keys if key not in self.optim_dict.keys()]
+ missing_keys = [key for key in self.model_keys if key not in self.optim_dict]

Line range hint 633-633: Use context handler for opening files.

Manage these files with a context manager (for example contextlib.ExitStack, which requires import contextlib) so they are closed even when an error occurs:

- fout = open(self.disp_file, mode="w", buffering=1) if self.rank == 0 else None
- fout1 = open(record_file, mode="w", buffering=1)
+ with contextlib.ExitStack() as stack:
+     fout = (
+         stack.enter_context(open(self.disp_file, mode="w", buffering=1))
+         if self.rank == 0
+         else None
+     )
+     fout1 = stack.enter_context(open(record_file, mode="w", buffering=1))

Also applies to: 637-637


Line range hint 681-684: Use ternary operator for pref_lr.

Replace if-else-block with pref_lr = _lr.start_lr if _step_id < self.warmup_steps else cur_lr.

- if _step_id < self.warmup_steps:
-     pref_lr = _lr.start_lr
- else:
-     pref_lr = cur_lr
+ pref_lr = _lr.start_lr if _step_id < self.warmup_steps else cur_lr

Line range hint 791-791: Rename unused loop control variable ii to _ii.

Rename ii to _ii.

- for ii, model_key in enumerate(self.model_keys):
+ for _ii, model_key in enumerate(self.model_keys):

Line range hint 1089-1089: Use key in dict instead of key in dict.keys().

Remove .keys().

- target_keys = [i for i in _random_state_dict.keys() if i != "_extra_state" and f".{_model_key}." in i]
+ target_keys = [i for i in _random_state_dict if i != "_extra_state" and f".{_model_key}." in i]
deepmd/utils/argcheck.py (1)

Line range hint 1167-1174: Remove unused local variables.

The variables link_lf, link_se_e2_a, link_se_e2_r, link_se_e3, link_se_a_tpe, link_hybrid, link_se_atten, and link_se_atten_v2 are assigned but never used.

- link_lf = make_link("loc_frame", "model/descriptor[loc_frame]")
- link_se_e2_a = make_link("se_e2_a", "model/descriptor[se_e2_a]")
- link_se_e2_r = make_link("se_e2_r", "model/descriptor[se_e2_r]")
- link_se_e3 = make_link("se_e3", "model/descriptor[se_e3]")
- link_se_a_tpe = make_link("se_a_tpe", "model/descriptor[se_a_tpe]")
- link_hybrid = make_link("hybrid", "model/descriptor[hybrid]")
- link_se_atten = make_link("se_atten", "model/descriptor[se_atten]")
- link_se_atten_v2 = make_link("se_atten_v2", "model/descriptor[se_atten_v2]")

Collaborator

@wanghan-iapcm wanghan-iapcm left a comment


  1. Please add a unit test.
  2. Along with the statistics computed from data, shall we allow the user to set the bias values explicitly?

Collaborator Author

iProzd commented Jul 1, 2024

@coderabbitai do you have any idea how to add a unit test for the dp --pt change-bias command proposed in this PR?

Contributor

coderabbitai bot commented Jul 1, 2024


@iProzd, to add a unit test for the dp --pt change-bias command, you can follow these steps based on the current test structure:

  1. Locate the change_bias function: The change_bias function is defined in deepmd/pt/entrypoints/main.py.

  2. Identify a suitable test file: Based on the output, the file source/tests/pt/test_dp_test.py seems appropriate for adding the new test since it is related to PyTorch.

  3. Add a unit test: You can add a unit test for the change_bias command in source/tests/pt/test_dp_test.py. Here is an example of how you might do it:

import argparse
import unittest

from deepmd.pt.entrypoints.main import change_bias

class TestChangeBias(unittest.TestCase):

    def setUp(self):
        # Setup any necessary preconditions here
        self.flags = argparse.Namespace(
            model_path="path/to/model.pt",
            data_path="path/to/data",
            num_batches=10,
            mode="change"
        )

    def test_change_bias(self):
        # Call the change_bias function with the setup flags
        result = change_bias(self.flags)
        
        # Add assertions to verify the expected outcomes
        self.assertIsNotNone(result)
        # Add more assertions as needed to validate the bias change

if __name__ == '__main__':
    unittest.main()

This example assumes you have a suitable model and data for testing. You may need to adjust paths and parameters according to your specific setup.


Analysis chain
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Find the implementation of the change_bias command
ast-grep --lang python --pattern $'def change_bias($_) {
  $$$
}' 

# Locate existing unit tests for similar commands
fd --type f --name "test_*.py"

Length of output: 802



Script:

#!/bin/bash
# Correctly find the implementation of the change_bias command using a simpler pattern
ast-grep --lang python --pattern $'def change_bias($_) {$$$}'

# Correctly locate existing unit tests for similar commands
fd "test_*.py" --type f

Length of output: 685

Contributor

coderabbitai bot commented Jul 1, 2024


# Let's review the relevant parts of deepmd/pt/entrypoints/main.py to understand the implementation of the change-bias command.

# Excerpt from deepmd/pt/entrypoints/main.py:
import argparse
import copy
from deepmd.__version__ import __version__
from deepmd.common import expand_sys_str
from deepmd.pt.train import training
from deepmd.pt.train.wrapper import ModelWrapper
from deepmd.pt.train.entry import preprocess_shared_params
from deepmd.pt.utils.stat import make_stat_input

def show(FLAGS):
    # existing show function implementation
    ...

def change_bias(FLAGS):
    # The function handling the change-bias command:
    # load model parameters, process data systems, adjust biases, and save the updated model.
    ...

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deep Potential Model Training")
    subparsers = parser.add_subparsers(dest="command")
    
    # Other subcommands
    
    # Adding the change-bias subcommand
    change_bias_parser = subparsers.add_parser("change-bias", help="Change model output bias based on input data")
    change_bias_parser.add_argument("model", type=str, help="Path to the model file")
    change_bias_parser.add_argument("-s", "--system", type=str, required=True, help="System data")
    change_bias_parser.add_argument("-n", "--num", type=int, default=10, help="Number of samples")
    change_bias_parser.add_argument("-m", "--mode", type=str, choices=["change", "reset"], default="change", help="Mode of bias adjustment")
    change_bias_parser.set_defaults(func=change_bias)

    FLAGS = parser.parse_args()
    if FLAGS.command == "change-bias":
        change_bias(FLAGS)

@iProzd, I've created a unit test for the dp --pt change-bias command using pytest. The test includes mocking the necessary dependencies and verifying the function's behavior. Here is the code for the unit test:

import pytest
from deepmd.pt.entrypoints.main import change_bias

@pytest.fixture
def mock_flags():
    class MockFlags:
        def __init__(self):
            self.command = "change-bias"
            self.model = "model.pt"
            self.system = "data"
            self.num = 10
            self.mode = "change"
    return MockFlags()

def test_change_bias(mock_flags, monkeypatch):
    # Mock the functions and methods used within change_bias
    monkeypatch.setattr("deepmd.pt.train.wrapper.ModelWrapper", lambda *args, **kwargs: None)
    monkeypatch.setattr("deepmd.pt.utils.stat.make_stat_input", lambda *args, **kwargs: None)
    monkeypatch.setattr("deepmd.pt.train.training", lambda *args, **kwargs: None)
    monkeypatch.setattr("deepmd.common.expand_sys_str", lambda *args, **kwargs: None)

    # Call the change_bias function with the mock flags
    change_bias(mock_flags)

    # Add assertions to verify expected behavior
    assert True  # Replace with actual assertions based on expected outcomes

You can adjust the assertions based on the specific behavior and outcomes of the change_bias function.
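
As an alternative (not proposed in this PR), the command could also be exercised end-to-end through the dp CLI with the flags from the PR description; in the untested sketch below, the checkpoint and data paths are placeholders for small test fixtures:

```python
import subprocess


def test_change_bias_cli():
    # Placeholder paths: a real test would train or ship a tiny checkpoint
    # ("model.ckpt.pt") and a small data system ("data") as fixtures.
    result = subprocess.run(
        ["dp", "--pt", "change-bias", "model.ckpt.pt",
         "-s", "data", "-n", "10", "-m", "change"],
        capture_output=True,
        text=True,
    )
    # The command should exit cleanly; a stronger test would reload the model
    # and compare its output bias before and after the change.
    assert result.returncode == 0, result.stderr
```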


Analysis chain
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Locate the implementation of the `change-bias` command.
ast-grep --lang python --pattern $'def change_bias($_):' --json

# Identify the testing framework by looking for test files and their imports.
fd -e py test | xargs grep -E 'import (unittest|pytest)'

Length of output: 11928



Script:

#!/bin/bash
# Search for the `change-bias` command within the deepmd directory.
rg 'change-bias' deepmd

Length of output: 245

@github-actions github-actions bot added the Docs label Jul 1, 2024
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Outside diff range and nitpick comments (7)
deepmd/pt/entrypoints/main.py (1)

Line range hint 517-520: Use a ternary operator for cleaner code.

Replace the if-else block with a ternary operator.

-    if not isinstance(args, argparse.Namespace):
-        FLAGS = parse_args(args=args)
-    else:
-        FLAGS = args
+    FLAGS = parse_args(args=args) if not isinstance(args, argparse.Namespace) else args
Tools

GitHub Check: codecov/patch — added lines 393-403, 405, 409-411, 414-415, 418, 422-424, 429-432, 435, 437, 439, 442, 445-446, 449-450, 455, and 458-459 in deepmd/pt/entrypoints/main.py were not covered by tests.

deepmd/pt/model/atomic_model/base_atomic_model.py (4)

Line range hint 80-81: Avoid using mutable data structures for argument defaults.

Replace with None and initialize within the function.

-        atom_exclude_types: List[int] = [],
-        pair_exclude_types: List[Tuple[int, int]] = [],
+        atom_exclude_types: Optional[List[int]] = None,
+        pair_exclude_types: Optional[List[Tuple[int, int]]] = None,

Line range hint 95-95: Remove unused local variable.

The local variable ntypes is assigned but never used.

-        ntypes = self.get_ntypes()

Line range hint 257-257: Use key in dict instead of key in dict.keys().

Remove .keys() for better performance and readability.

-        if key not in dict.keys():
+        if key not in dict:

Line range hint 545-546: Use key in dict instead of key in dict.keys().

Remove .keys() for better performance and readability.

-        if key in dict.keys():
-        if key not in dict.keys():
+        if key in dict:
+        if key not in dict:
deepmd/main.py (2)

Line range hint 95-95: Add stacklevel keyword argument to warnings.warn.

To ensure the warning points to the correct line in the user's code, add the stacklevel argument.

-            warnings.warn(
-                f"The option `{option_string}` is deprecated. It will be ignored.",
-                FutureWarning,
-            )
+            warnings.warn(
+                f"The option `{option_string}` is deprecated. It will be ignored.",
+                FutureWarning,
+                stacklevel=2
+            )
Tools

Ruff — 60-63: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling (B904).

GitHub Check: codecov/patch — added lines 54 and 56-60 in deepmd/main.py were not covered by tests.


Line range hint 126-126: Use key not in dict instead of key not in dict.keys().

Remove .keys() for better performance and readability.

-    if default_backend not in BACKEND_TABLE.keys():
+    if default_backend not in BACKEND_TABLE:

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Outside diff range and nitpick comments (2)
deepmd/main.py (2)

Line range hint 83-83: Add stacklevel argument to the warning.

Specify the stacklevel argument so the warning points to where it was triggered in the user's code.

-            warnings.warn(
-                f"The option `{option_string}` is deprecated. It will be ignored.",
-                FutureWarning,
-            )
+            warnings.warn(
+                f"The option `{option_string}` is deprecated. It will be ignored.",
+                FutureWarning,
+                stacklevel=2,
+            )

Line range hint 114-114: Use key not in dict instead of key not in dict.keys().

Remove .keys() for better readability and performance.

-    if default_backend not in BACKEND_TABLE.keys():
+    if default_backend not in BACKEND_TABLE:

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

@iProzd iProzd requested a review from njzjz July 2, 2024 08:44
@iProzd iProzd enabled auto-merge July 2, 2024 09:00
@iProzd iProzd added this pull request to the merge queue Jul 2, 2024
Merged via the queue into deepmodeling:devel with commit 57e1f4e Jul 2, 2024
60 checks passed
@iProzd iProzd deleted the add_change_bias branch July 2, 2024 15:54
njzjz added a commit to njzjz/deepmd-kit that referenced this pull request Jul 8, 2024
As planned in deepmodeling#3933 (comment)

Signed-off-by: Jinzhe Zeng <[email protected]>
@njzjz njzjz mentioned this pull request Jul 8, 2024
github-merge-queue bot pushed a commit that referenced this pull request Jul 10, 2024
As planned in
#3933 (comment)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved exception handling across multiple modules to provide more
context in error messages.

- **Chores**
- Added rule `B904` for `raise-without-from-inside-except` to
`pyproject.toml`.

These changes enhance debugging by offering clearer error messages and
context, thus improving the overall user experience when dealing with
exceptions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024
Add bias changing param/interface

For pt/tf, add `training/change_bias_after_training` to change out bias
once after training.

For pt, add a separate command `change-bias` to change trained
model(pt/pth, multi/single) out bias for specific data:

```
dp change-bias model.pt -s data -n 10 -m change
```

UTs for this feature are still in consideration.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added a new subcommand `change-bias` to adjust model output bias in
the PyTorch backend.
  - Introduced test cases for changing model biases via new test suite.
  
- **Documentation**
- Added documentation for the new `change-bias` command, including usage
and options.
- Updated `index.rst` to include a new entry for `change-bias` under the
`Model` section.

- **Bug Fixes**
- Adjusted data handling in `make_stat_input` to limit processing to a
specified number of batches.

- **Refactor**
- Restructured training configuration to include the parameter
`change_bias_after_training`.
  - Modularized data requirement handling and bias adjustment functions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024
As planned in
deepmodeling#3933 (comment)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved exception handling across multiple modules to provide more
context in error messages.

- **Chores**
- Added rule `B904` for `raise-without-from-inside-except` to
`pyproject.toml`.

These changes enhance debugging by offering clearer error messages and
context, thus improving the overall user experience when dealing with
exceptions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Jinzhe Zeng <[email protected]>
Development

Successfully merging this pull request may close these issues.

[Feature Request] Update the energy bias after the final step of training
3 participants