Chore: refactor InvarFitting #3266

Merged 46 commits on Feb 16, 2024. The diff below shows changes from 20 of those commits.
Commits (46)
b54e109  feat: redo pt dipole (anyangml, Feb 13, 2024)
ca75551  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 13, 2024)
9bc9900  Merge branch 'devel' into devel (anyangml, Feb 13, 2024)
4ca7a30  Merge branch 'devel' into devel (anyangml, Feb 14, 2024)
156c17f  Merge branch 'devel' into devel (anyangml, Feb 14, 2024)
dbd68ae  Merge branch 'devel' into devel (anyangml, Feb 14, 2024)
fba6eb4  Merge branch 'devel' into devel (anyangml, Feb 14, 2024)
bdd2b5c  fix: numpy warning (anyangml, Feb 14, 2024)
3797ac0  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 14, 2024)
86a6713  chore: refactor InvarFitting (anyangml, Feb 14, 2024)
9585203  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 14, 2024)
31d84fc  chore: refactor InvarFitting (anyangml, Feb 14, 2024)
5459248  chore: refactor InvarFitting (anyangml, Feb 14, 2024)
be5be25  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 14, 2024)
ad810df  chore: refactor InvarFitting (anyangml, Feb 14, 2024)
6932c24  Merge branch 'devel' into devel (anyangml, Feb 15, 2024)
6ade44f  chore: refactor InvarFitting (anyangml, Feb 15, 2024)
ca957b5  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
404f31f  fix: internal dipole fit output shape (anyangml, Feb 15, 2024)
d1fedf8  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
514271b  fix: mask shape (anyangml, Feb 15, 2024)
40e8a7e  Merge branch 'devel' into devel (anyangml, Feb 15, 2024)
06fb5bf  chore: restore dipole, split PR (anyangml, Feb 15, 2024)
41d94b5  chore: restore LinearAtomicModel (anyangml, Feb 15, 2024)
593b517  Merge branch 'devel' into devel (anyangml, Feb 15, 2024)
e122700  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
0e1182e  fix: ignore numpy warning (anyangml, Feb 15, 2024)
7e06282  Merge branch 'devel' into devel (anyangml, Feb 15, 2024)
c8d97e8  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
39a5c34  fix: merge conflict (anyangml, Feb 15, 2024)
1c30126  fix: merge conflict (anyangml, Feb 15, 2024)
1ae6ce5  fix: merge conflict (anyangml, Feb 15, 2024)
0e98ddc  chore: refactor (anyangml, Feb 15, 2024)
22e6d82  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
ecf242b  chore: refactor (anyangml, Feb 15, 2024)
042201a  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
7698483  chore: refactor (anyangml, Feb 15, 2024)
c3be87c  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
367e0ad  fix: revert device (anyangml, Feb 15, 2024)
709b010  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 15, 2024)
b8bdbbc  fix: add device (anyangml, Feb 16, 2024)
4bb7737  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 16, 2024)
7f9292a  fix: cuda (anyangml, Feb 16, 2024)
4ff3019  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Feb 16, 2024)
1fd58a4  fix: cuda (anyangml, Feb 16, 2024)
e125be1  fix: cuda (anyangml, Feb 16, 2024)
6 changes: 4 additions & 2 deletions deepmd/dpmodel/model/linear_atomic_model.py
```diff
@@ -318,14 +318,16 @@
             ),
             axis=-1,
         )  # handle masked nnei.
-        sigma = numerator / denominator
+        with np.errstate(divide="ignore", invalid="ignore"):
+            sigma = numerator / denominator
         u = (sigma - self.sw_rmin) / (self.sw_rmax - self.sw_rmin)
         coef = np.zeros_like(u)
         left_mask = sigma < self.sw_rmin
         mid_mask = (self.sw_rmin <= sigma) & (sigma < self.sw_rmax)
         right_mask = sigma >= self.sw_rmax
         coef[left_mask] = 1
-        smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
+        with np.errstate(invalid="ignore"):
+            smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
         coef[mid_mask] = smooth[mid_mask]
         coef[right_mask] = 0
         self.zbl_weight = coef
```

(Codecov flagged the added lines in this hunk as not covered by tests.)
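For context on the np.errstate pattern above: the warnings are suppressed only around the expressions that can produce inf or NaN for masked (empty-neighbor) entries, and the mask assignments that follow keep those entries out of zbl_weight. Below is a minimal standalone sketch of the same masked smooth-switch computation; the function name, signature, and sample values are illustrative, not taken from this PR:

```python
import numpy as np

def smooth_switch(sigma: np.ndarray, rmin: float, rmax: float) -> np.ndarray:
    """Quintic switch: 1 below rmin, 0 above rmax, smooth in between."""
    with np.errstate(invalid="ignore"):
        u = (sigma - rmin) / (rmax - rmin)
        # for sigma = inf, the polynomial below evaluates -inf + inf = NaN,
        # which would otherwise emit an "invalid value" RuntimeWarning
        smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
    coef = np.zeros_like(sigma)
    coef[sigma < rmin] = 1.0
    mid = (rmin <= sigma) & (sigma < rmax)
    coef[mid] = smooth[mid]  # inf/NaN sigma fails both comparisons, so stays 0
    return coef

# smooth_switch(np.array([0.5, 1.5, 2.5, np.inf]), rmin=1.0, rmax=2.0)
# -> array([1. , 0.5, 0. , 0. ])
```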
4 changes: 2 additions & 2 deletions deepmd/pt/model/task/__init__.py
```diff
@@ -9,7 +9,7 @@
     DenoiseNet,
 )
 from .dipole import (
-    DipoleFittingNetType,
+    DipoleFittingNet,
 )
 from .ener import (
     EnergyFittingNet,
@@ -25,7 +25,7 @@
 __all__ = [
     "FittingNetAttenLcc",
     "DenoiseNet",
-    "DipoleFittingNetType",
+    "DipoleFittingNet",
     "EnergyFittingNet",
     "EnergyFittingNetDirect",
     "Fitting",
```
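Since deepmd.pt.model.task re-exports the class, downstream imports change with the rename; a hypothetical before/after:

```python
# before this PR
from deepmd.pt.model.task import DipoleFittingNetType

# after this PR
from deepmd.pt.model.task import DipoleFittingNet
```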
116 changes: 86 additions & 30 deletions deepmd/pt/model/task/dipole.py
```diff
@@ -1,21 +1,47 @@
 # SPDX-License-Identifier: LGPL-3.0-or-later
 import logging
+from typing import (
+    List,
+    Optional,
+)
 
 import torch
 
-from deepmd.pt.model.network.network import (
-    ResidualDeep,
+from deepmd.pt.model.network.mlp import (
+    FittingNet,
+    NetworkCollection,
 )
 from deepmd.pt.model.task.fitting import (
-    Fitting,
+    GeneralFitting,
 )
 from deepmd.pt.utils import (
     env,
 )
+from deepmd.pt.utils.env import (
+    DEFAULT_PRECISION,
+)
 
+dtype = env.GLOBAL_PT_FLOAT_PRECISION
+device = env.DEVICE
 
 log = logging.getLogger(__name__)
 
 
-class DipoleFittingNetType(Fitting):
+class DipoleFittingNet(GeneralFitting):
     def __init__(
-        self, ntypes, embedding_width, neuron, out_dim, resnet_dt=True, **kwargs
+        self,
+        var_name: str,
+        ntypes: int,
+        dim_descrpt: int,
+        dim_out: int,
+        neuron: List[int] = [128, 128, 128],
+        resnet_dt: bool = True,
+        numb_fparam: int = 0,
+        numb_aparam: int = 0,
+        activation_function: str = "tanh",
+        precision: str = DEFAULT_PRECISION,
+        distinguish_types: bool = False,
+        **kwargs,
     ):
         """Construct a fitting net for dipole.
 
@@ -26,23 +52,32 @@
         - bias_atom_e: Average energy per atom for each element.
         - resnet_dt: Using time-step in the ResNet construction.
         """
-        super().__init__()
-        self.ntypes = ntypes
-        self.embedding_width = embedding_width
-        self.out_dim = out_dim
-
-        filter_layers = []
-        one = ResidualDeep(
-            0, embedding_width, neuron, 0.0, out_dim=self.out_dim, resnet_dt=resnet_dt
+        super().__init__(
+            var_name=var_name,
+            ntypes=ntypes,
+            dim_descrpt=dim_descrpt,
+            dim_out=dim_out,
+            neuron=neuron,
+            resnet_dt=resnet_dt,
+            numb_fparam=numb_fparam,
+            numb_aparam=numb_aparam,
+            activation_function=activation_function,
+            precision=precision,
+            distinguish_types=distinguish_types,
+            **kwargs,
         )
-        filter_layers.append(one)
-        self.filter_layers = torch.nn.ModuleList(filter_layers)
+        self.old_impl = False  # this only supports the new implementation.
 
         if "seed" in kwargs:
             log.info("Set seed to %d in fitting net.", kwargs["seed"])
             torch.manual_seed(kwargs["seed"])
 
-    def forward(self, inputs, atype, atype_tebd, rot_mat):
+    def forward(
+        self,
+        descriptor: torch.Tensor,
+        atype: torch.Tensor,
+        gr: Optional[torch.Tensor] = None,
+        g2: Optional[torch.Tensor] = None,
+        h2: Optional[torch.Tensor] = None,
+        fparam: Optional[torch.Tensor] = None,
+        aparam: Optional[torch.Tensor] = None,
+    ):
         """Based on embedding net output, calculate the dipole.
 
         Args:
@@ -55,13 +90,34 @@
         -------
         - vec_out: output vector. Its shape is [nframes, nloc, 3].
         """
-        nframes, nloc, _ = inputs.size()
-        if atype_tebd is not None:
-            inputs = torch.concat([inputs, atype_tebd], dim=-1)
-        vec_out = self.filter_layers[0](inputs)  # Shape is [nframes, nloc, m1]
-        assert list(vec_out.size()) == [nframes, nloc, self.out_dim]
-        vec_out = vec_out.view(-1, 1, self.out_dim)
-        vec_out = (
-            torch.bmm(vec_out, rot_mat).squeeze(-2).view(nframes, nloc, 3)
-        )  # Shape is [nframes, nloc, 3]
-        return vec_out
+        in_dim = self.dim_descrpt + self.numb_fparam + self.numb_aparam
+
+        nframes, nloc, _ = descriptor.shape
+        gr = gr.view(nframes, nloc, -1, 3)
+        out_dim = gr.shape[2]  # m1
+        self.filter_layers = NetworkCollection(
+            1 if self.distinguish_types else 0,
+            self.ntypes,
+            network_type="fitting_network",
+            networks=[
+                FittingNet(
+                    in_dim,
+                    out_dim,
+                    self.neuron,
+                    self.activation_function,
+                    self.resnet_dt,
+                    self.precision,
+                    bias_out=True,
+                )
+                for ii in range(self.ntypes if self.distinguish_types else 1)
+            ],
+        )
+        # (nframes, nloc, m1)
+        out = self._forward_common(descriptor, atype, gr, g2, h2, fparam, aparam)[
+            self.var_name
+        ]
+        # (nframes * nloc, 1, m1)
+        out = out.view(-1, 1, out_dim)
+        # (nframes, nloc, 3)
+        out = torch.bmm(out, gr).squeeze(-2).view(nframes, nloc, 3)
+        return {self.var_name: out.to(env.GLOBAL_PT_FLOAT_PRECISION)}
```

(Codecov flagged the added lines in this file as not covered by tests; a review thread on the FittingNet arguments was marked resolved.)
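The last few added lines are the geometric core of the new forward: per atom, the fitting net emits m1 coefficients, and torch.bmm contracts them with the m1 equivariant basis vectors in gr to yield a 3-vector, which is what makes the predicted dipole rotate with the frame. A shape-only sketch with random stand-in tensors and illustrative dimensions (torch.bmm takes 3-D batches, so both operands are flattened over frames and atoms first):

```python
import torch

nframes, nloc, m1 = 2, 5, 8

out = torch.randn(nframes, nloc, m1)      # fitting-net output, (nframes, nloc, m1)
gr = torch.randn(nframes, nloc, m1, 3)    # equivariant basis,  (nframes, nloc, m1, 3)

dipole = (
    torch.bmm(
        out.view(-1, 1, m1),              # (nframes * nloc, 1, m1)
        gr.view(-1, m1, 3),               # (nframes * nloc, m1, 3)
    )                                     # (nframes * nloc, 1, 3)
    .squeeze(-2)
    .view(nframes, nloc, 3)               # (nframes, nloc, 3)
)

# the contraction is equivalent to an einsum over the m1 axis
assert torch.allclose(dipole, torch.einsum("fam,famx->fax", out, gr))
```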