
feat (tf/pt): add atomic weights to tensor loss #4466

Merged: 6 commits merged into deepmodeling:devel from devel-atomic_weight on Dec 18, 2024

Conversation

@ChiahsinChu (Contributor) commented Dec 10, 2024

Interfaces are of particular interest in many studies. However, the configurations in the training set that represent the interface normally also include large parts of the bulk material. As a result, the final model tends to favour the bulk information, while the interfacial information is learnt less well. It is difficult to simply increase the proportion of interfaces among the configurations, since the electronic structure of the interface may only be reasonable with a certain thickness of bulk material. Therefore, I wonder whether it is possible to define weights for atomic quantities in loss functions. This would allow us to assign higher weights to the atomic information in the regions of interest, and probably makes the model "more focused" on those regions.
In this PR, I add the keyword enable_atomic_weight to the loss function of the tensor model. In principle, it could be generalised to any atomic quantity, e.g., atomic forces.
I would like to hear the developers' comments/suggestions on this feature. I can add support for other loss functions and finish the unit tests once we agree on it.
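For concreteness, here is a minimal sketch of how the new keyword would appear in the loss section of a training input, written as a Python dict. The "tensor" loss type and the pref/pref_atomic prefactors already exist; only enable_atomic_weight is new, and how the per-atom weights are supplied is an assumption based on the label_requirement changes:

# Hypothetical loss section for a tensor model (e.g. dipole/polarizability).
# `enable_atomic_weight` is the keyword added in this PR; `pref` and
# `pref_atomic` are the existing global/local prefactors.
loss = {
    "type": "tensor",
    "pref": 0.0,                   # global (frame-wise) tensor loss prefactor
    "pref_atomic": 1.0,            # local (per-atom) tensor loss prefactor
    "enable_atomic_weight": True,  # reweight the local loss per atom
}

The weights themselves would be read from the training data as an additional atomic label (one scalar per atom, e.g. larger values for interfacial atoms), with the exact data key and file layout following whatever label_requirement requests.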

Best.

Summary by CodeRabbit

  • New Features

    • Introduced an optional parameter for atomic weights in loss calculations, enhancing flexibility in the TensorLoss class.
    • Added a suite of unit tests for the TensorLoss functionality, ensuring consistency between TensorFlow and PyTorch implementations.
  • Bug Fixes

    • Updated logic for local loss calculations to ensure correct application of atomic weights based on user input.
  • Documentation

    • Improved clarity of documentation for several function arguments, including the addition of a new argument related to atomic weights.

coderabbitai bot (Contributor) commented Dec 10, 2024

📝 Walkthrough

The changes introduced in this pull request enhance the TensorLoss class in both the deepmd/pt/loss/tensor.py and deepmd/tf/loss/tensor.py files by adding a new boolean parameter enable_atomic_weight. This parameter allows for the optional use of atomic weights during loss calculations. Modifications are made to the __init__ and label_requirement methods to accommodate this feature, and the logic in the forward and build methods is updated to utilize the atomic_weight based on the value of enable_atomic_weight. Additionally, the deepmd/utils/argcheck.py file is updated to improve documentation and include the new argument.

Changes

  • deepmd/pt/loss/tensor.py — Added the enable_atomic_weight parameter to __init__; updated the forward method to use atomic_weight based on enable_atomic_weight; modified label_requirement to include a DataRequirementItem for atomic_weight.
  • deepmd/tf/loss/tensor.py — Added the enable_atomic_weight parameter to __init__; updated the build method to incorporate atomic_weight; modified label_requirement to conditionally append a DataRequirementItem for atom_weight.
  • deepmd/utils/argcheck.py — Updated the documentation for doc_global_weight and doc_local_weight; added doc_enable_atomic_weight; updated the loss_tensor function to include enable_atomic_weight.
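As a rough illustration of the local-loss logic shared by both backends, here is a NumPy sketch with approximate names (not the PR's code):

import numpy as np

def local_tensor_loss(pred, label, tensor_size, atomic_weight=None):
    """Sketch of the weighted local (per-atom) tensor loss.

    pred, label: arrays of shape (natoms, tensor_size)
    atomic_weight: optional per-atom weights of shape (natoms, 1);
                   when absent the weight defaults to 1, recovering
                   the original unweighted loss.
    """
    diff = (pred - label).reshape(-1, tensor_size)
    if atomic_weight is not None:
        diff = diff * atomic_weight.reshape(-1, 1)
    return np.mean(np.square(diff))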

Possibly related PRs

  • feat(pt): support complete form energy loss #3782: The changes in this PR also introduce a new boolean parameter enable_atom_ener_coeff in the EnergyStdLoss and EnergySpinLoss classes, similar to the enable_atomic_weight parameter in the TensorLoss class, indicating a related enhancement in loss calculations involving atomic weights.
  • feat(pt): add universal test for loss #4354: This PR introduces a universal test for loss functions, which includes testing for TensorLoss. The modifications in the main PR regarding enable_atomic_weight would likely be relevant for the tests being added here.
  • docs: move arg docs from Variant to sub Arguments #4369: While this PR focuses on documentation updates in deepmd/utils/argcheck.py, it mentions the addition of an argument enable_atomic_weight in the loss_tensor function, which is directly related to the changes made in the main PR regarding the same parameter.

Suggested reviewers

  • wanghan-iapcm
  • njzjz


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
deepmd/tf/loss/tensor.py (1)

76-79: Consider adding shape validation for atomic_weight

While the implementation is correct, it would be beneficial to validate that the shape of atomic_weight matches the expected dimensions.

 diff = polar - atomic_polar_hat
 diff = tf.reshape(diff, [-1, self.tensor_size])
+# Validate that atomic_weight carries one scalar weight per atom
+atomic_weight = tf.ensure_shape(atomic_weight, [None, 1])
 diff = diff * atomic_weight
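If a row-count check is also desired (rather than only rank and trailing dimension), a hedged sketch using tf.debugging.assert_shapes can tie the first dimensions of the two tensors together; this is an illustration, not code from the PR:

import tensorflow as tf

def apply_atomic_weight(diff, atomic_weight):
    # diff: (natoms, tensor_size) residuals; atomic_weight: (natoms, 1).
    # assert_shapes binds the symbolic dimension "N" across both tensors,
    # so a per-atom count mismatch fails fast with a clear message.
    check = tf.debugging.assert_shapes(
        [(atomic_weight, ("N", 1)), (diff, ("N", None))],
        message="atomic_weight must provide one scalar weight per atom",
    )
    deps = [check] if check is not None else []  # eager mode returns None
    with tf.control_dependencies(deps):
        return diff * atomic_weight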
deepmd/pt/loss/tensor.py (1)

92-97: Consider adding shape validation for atomic_weight

While the implementation is correct, it would be beneficial to validate the shape of atomic_weight.

 if self.enable_atomic_weight:
     atomic_weight = label["atom_weight"].reshape([-1, 1])
+    # Validate shape matches expected dimensions
+    assert atomic_weight.shape[0] == label["atom_" + self.label_name].reshape([-1, self.tensor_size]).shape[0], \
+        f"Atomic weight shape {atomic_weight.shape} does not match expected shape"
 else:
     atomic_weight = 1.0
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between ec3b83f and d53d8af.

📒 Files selected for processing (3)
  • deepmd/pt/loss/tensor.py (6 hunks)
  • deepmd/tf/loss/tensor.py (3 hunks)
  • deepmd/utils/argcheck.py (2 hunks)
🔇 Additional comments (9)
deepmd/tf/loss/tensor.py (3)

43-43: LGTM: Added atomic weight flag

The addition of enable_atomic_weight with a default value of False maintains backward compatibility.


70-74: LGTM: Proper atomic weight initialization

The atomic weight is properly initialized either from label_dict or defaults to 1.0 when disabled.


174-185: LGTM: Added atomic weight requirement

The atomic weight requirement is properly added to data_requirements when enabled.

deepmd/pt/loss/tensor.py (4)

25-25: LGTM: Well-documented parameter addition

The enable_atomic_weight parameter is properly added with clear documentation explaining its purpose.

Also applies to: 44-45


56-56: LGTM: Class initialization

The enable_atomic_weight flag is properly initialized as a class member.


116-116: LGTM: Atomic weight application

The atomic weight is correctly applied to the difference calculation.


185-195: LGTM: Added atomic weight requirement

The atomic weight requirement is properly added to label_requirement when enabled.
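For reference, a hedged sketch of what the conditional label requirement might look like; the key name, ndof, and flags below are assumptions, and the actual code in the PR is authoritative:

from deepmd.utils.data import DataRequirementItem

# Hypothetical: request one scalar weight per atom only when the
# feature is enabled, so existing inputs remain valid.
if self.enable_atomic_weight:
    label_requirement.append(
        DataRequirementItem(
            "atomic_weight",  # assumed label key
            ndof=1,
            atomic=True,   # one value per atom, not per frame
            must=True,     # mandatory once the feature is on
            high_prec=False,
        )
    )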

deepmd/utils/argcheck.py (2)

2514-2515: LGTM: Improved documentation clarity

The documentation for global_weight and local_weight has been updated to be more precise and informative.


2516-2516: LGTM: Added atomic weight parameter

The enable_atomic_weight parameter is properly added with clear documentation and appropriate default value.

Also applies to: 2528-2534
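For context, a minimal sketch of the corresponding dargs argument; the doc string is paraphrased, and only the name and the False default are confirmed by the PR:

from dargs import Argument

# Sketch of the new optional flag in loss_tensor(); defaulting to False
# preserves backward compatibility for existing inputs.
Argument(
    "enable_atomic_weight",
    bool,
    optional=True,
    default=False,
    doc="If true, reweight the local tensor loss with per-atom weights.",
)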

codecov bot commented Dec 10, 2024

Codecov Report

Attention: Patch coverage is 88.88889% with 2 lines in your changes missing coverage. Please review.

Project coverage is 84.42%. Comparing base (104fc36) to head (a8e204f).
Report is 1 commit behind head on devel.

Files with missing lines — patch coverage (lines missing):
  • deepmd/pt/loss/tensor.py — 85.71% (1 line missing) ⚠️
  • deepmd/tf/loss/tensor.py — 87.50% (1 line missing) ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4466      +/-   ##
==========================================
+ Coverage   84.39%   84.42%   +0.03%     
==========================================
  Files         670      670              
  Lines       62130    62145      +15     
  Branches     3487     3488       +1     
==========================================
+ Hits        52432    52464      +32     
+ Misses       8571     8554      -17     
  Partials     1127     1127              


@wanghan-iapcm (Collaborator) left a comment

Please provide a unit test for the feature.

Do you consider implementing it for the pt backend?

@wanghan-iapcm wanghan-iapcm requested a review from njzjz December 12, 2024 00:41
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
source/tests/pt/test_loss_tensor.py (2)

57-57: Simplify dictionary membership test

Instead of if key in np_batch.keys():, you can use if key in np_batch: for better readability and performance.

Apply this diff to implement the change:

-if key in np_batch.keys():
+if key in np_batch:


176-178: Remove unnecessary return statement in tearDown method

The tearDown method should not return a value. Remove the return statement to follow standard unittest practices.

Apply this diff to implement the change:

 def tearDown(self) -> None:
     tf.reset_default_graph()
-    return super().tearDown()
+    super().tearDown()
deepmd/tf/loss/tensor.py (1)

Line range hint 49-49: Fix typo in assertion message

There's a typo in the assertion message: "assian" should be "assign".

Apply this diff to fix the typo:

         ),
-        "Can not assian zero weight both to `pref` and `pref_atomic`"
+        "Can not assign zero weight both to `pref` and `pref_atomic`"
     )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d53d8af and b4899a1.

📒 Files selected for processing (2)
  • deepmd/tf/loss/tensor.py (3 hunks)
  • source/tests/pt/test_loss_tensor.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
source/tests/pt/test_loss_tensor.py

57-57: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

@ChiahsinChu (Contributor, Author) commented Dec 13, 2024

> Do you consider implementing it for the pt backend?

In this PR, the feature is implemented for both the tf and pt backends.

@njzjz (Member) commented Dec 13, 2024

I am not sure if we need to wait for #4105.

@njzjz (Member) left a comment

I approve the current PR, but I am not sure when #4105 can be ready.

@wanghan-iapcm (Collaborator) commented:

> I approve the current PR, but I am not sure when #4105 can be ready.

I think #4105 should not block the merge of this PR.

@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Dec 18, 2024
Merged via the queue into deepmodeling:devel with commit c0914e1 Dec 18, 2024
60 checks passed
iProzd added a commit to iProzd/deepmd-kit that referenced this pull request Dec 24, 2024

[The squashed commit message reproduces this PR's description and release notes verbatim, together with changelog entries for several unrelated PRs (deepmodeling#4480, deepmodeling#4482, deepmodeling#4483, deepmodeling#4484, deepmodeling#4485, deepmodeling#4488, deepmodeling#4493, deepmodeling#4497) and repeated pre-commit.ci autofix notes.]
iProzd added a commit to iProzd/deepmd-kit that referenced this pull request Jan 4, 2025

[The commit message is a verbatim duplicate of the Dec 24, 2024 commit message above.]
@ChiahsinChu ChiahsinChu deleted the devel-atomic_weight branch January 6, 2025 07:20