Chore: refactor InvarFitting #3266

Merged Feb 16, 2024 (46 commits)

Changes from 12 commits

Commits
b54e109
feat: redo pt dipole
anyangml Feb 13, 2024
ca75551
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 13, 2024
9bc9900
Merge branch 'devel' into devel
anyangml Feb 13, 2024
4ca7a30
Merge branch 'devel' into devel
anyangml Feb 14, 2024
156c17f
Merge branch 'devel' into devel
anyangml Feb 14, 2024
dbd68ae
Merge branch 'devel' into devel
anyangml Feb 14, 2024
fba6eb4
Merge branch 'devel' into devel
anyangml Feb 14, 2024
bdd2b5c
fix: numpy warning
anyangml Feb 14, 2024
3797ac0
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 14, 2024
86a6713
chore: refactor InvarFitting
anyangml Feb 14, 2024
9585203
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 14, 2024
31d84fc
chore: refactor InvarFitting
anyangml Feb 14, 2024
5459248
chore: refactor InvarFitting
anyangml Feb 14, 2024
be5be25
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 14, 2024
ad810df
chore: refactor InvarFitting
anyangml Feb 14, 2024
6932c24
Merge branch 'devel' into devel
anyangml Feb 15, 2024
6ade44f
chore: refactor InvarFitting
anyangml Feb 15, 2024
ca957b5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
404f31f
fix: internal dipole fit output shape
anyangml Feb 15, 2024
d1fedf8
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
514271b
fix: mask shape
anyangml Feb 15, 2024
40e8a7e
Merge branch 'devel' into devel
anyangml Feb 15, 2024
06fb5bf
chore: restore dipole, split PR
anyangml Feb 15, 2024
41d94b5
chore: restore LinearAtomicModel
anyangml Feb 15, 2024
593b517
Merge branch 'devel' into devel
anyangml Feb 15, 2024
e122700
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
0e1182e
fix: ignore numpy warning
anyangml Feb 15, 2024
7e06282
Merge branch 'devel' into devel
anyangml Feb 15, 2024
c8d97e8
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
39a5c34
fix: merge conflict
anyangml Feb 15, 2024
1c30126
fix: merge conflict
anyangml Feb 15, 2024
1ae6ce5
fix: merge conflict
anyangml Feb 15, 2024
0e98ddc
chore: refactor
anyangml Feb 15, 2024
22e6d82
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
ecf242b
chore: refactor
anyangml Feb 15, 2024
042201a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
7698483
chore: refactor
anyangml Feb 15, 2024
c3be87c
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
367e0ad
fix: revert device
anyangml Feb 15, 2024
709b010
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 15, 2024
b8bdbbc
fix: add device
anyangml Feb 16, 2024
4bb7737
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 16, 2024
7f9292a
fix: cuda
anyangml Feb 16, 2024
4ff3019
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 16, 2024
1fd58a4
fix: cuda
anyangml Feb 16, 2024
e125be1
fix: cuda
anyangml Feb 16, 2024
6 changes: 4 additions & 2 deletions deepmd/dpmodel/model/linear_atomic_model.py
@@ -318,14 +318,16 @@
             ),
             axis=-1,
         )  # handle masked nnei.
-        sigma = numerator / denominator
+        with np.errstate(divide="ignore", invalid="ignore"):
+            sigma = numerator / denominator
         u = (sigma - self.sw_rmin) / (self.sw_rmax - self.sw_rmin)
         coef = np.zeros_like(u)
         left_mask = sigma < self.sw_rmin
         mid_mask = (self.sw_rmin <= sigma) & (sigma < self.sw_rmax)
         right_mask = sigma >= self.sw_rmax
         coef[left_mask] = 1
-        smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
+        with np.errstate(invalid="ignore"):
+            smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
         coef[mid_mask] = smooth[mid_mask]
         coef[right_mask] = 0
         self.zbl_weight = coef
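For readers unfamiliar with the pattern in this hunk: the division can yield NaN for fully masked atoms (zero denominator), and the quintic smooth-step is also evaluated at out-of-range u, so both are wrapped in np.errstate to silence the expected runtime warnings; the masks then guarantee those entries never reach the final coefficients. A minimal, self-contained NumPy sketch with made-up switching bounds (the real sw_rmin/sw_rmax come from the model configuration):

import numpy as np

sw_rmin, sw_rmax = 1.0, 2.0  # hypothetical switching bounds

numerator = np.array([0.0, 3.0, 6.0])
denominator = np.array([0.0, 2.0, 2.0])  # first entry mimics a fully masked atom

with np.errstate(divide="ignore", invalid="ignore"):
    sigma = numerator / denominator  # NaN where denominator == 0

u = (sigma - sw_rmin) / (sw_rmax - sw_rmin)
coef = np.zeros_like(u)
left_mask = sigma < sw_rmin
mid_mask = (sw_rmin <= sigma) & (sigma < sw_rmax)
right_mask = sigma >= sw_rmax
coef[left_mask] = 1
with np.errstate(invalid="ignore"):
    smooth = -6 * u**5 + 15 * u**4 - 10 * u**3 + 1
coef[mid_mask] = smooth[mid_mask]
coef[right_mask] = 0

# NaN compares False against every mask, so masked atoms keep coef == 0.
print(coef)  # [0.  0.5 0. ]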
4 changes: 2 additions & 2 deletions deepmd/pt/model/task/__init__.py
@@ -9,7 +9,7 @@
     DenoiseNet,
 )
 from .dipole import (
-    DipoleFittingNetType,
+    DipoleFittingNet,
 )
 from .ener import (
     EnergyFittingNet,
@@ -25,7 +25,7 @@
 __all__ = [
     "FittingNetAttenLcc",
     "DenoiseNet",
-    "DipoleFittingNetType",
+    "DipoleFittingNet",
     "EnergyFittingNet",
     "EnergyFittingNetDirect",
     "Fitting",
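The rename itself is mechanical, but any downstream code that imported the old symbol must follow it. A hypothetical user-side update, using the module path shown in this diff:

# before this PR: from deepmd.pt.model.task import DipoleFittingNetType
from deepmd.pt.model.task import DipoleFittingNet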
180 changes: 159 additions & 21 deletions deepmd/pt/model/task/dipole.py
@@ -1,21 +1,49 @@
 # SPDX-License-Identifier: LGPL-3.0-or-later
 import logging
+from typing import (
+    List,
+    Optional,
+)
 
 import torch
 
-from deepmd.pt.model.network.network import (
-    ResidualDeep,
+from deepmd.pt.model.network.mlp import (
+    FittingNet,
+    NetworkCollection,
 )
 from deepmd.pt.model.task.fitting import (
     Fitting,
 )
+from deepmd.pt.utils import (
+    env,
+)
+from deepmd.pt.utils.env import (
+    DEFAULT_PRECISION,
+    PRECISION_DICT,
+)
+
+dtype = env.GLOBAL_PT_FLOAT_PRECISION
+device = env.DEVICE
 
 log = logging.getLogger(__name__)
 
 
-class DipoleFittingNetType(Fitting):
+class DipoleFittingNet(Fitting):
     def __init__(
-        self, ntypes, embedding_width, neuron, out_dim, resnet_dt=True, **kwargs
+        self,
+        var_name: str,
+        ntypes: int,
+        dim_descrpt: int,
+        dim_out: int,
+        neuron: List[int] = [128, 128, 128],
+        bias_atom_e: Optional[torch.Tensor] = None,
+        resnet_dt: bool = True,
+        numb_fparam: int = 0,
+        numb_aparam: int = 0,
+        activation_function: str = "tanh",
+        precision: str = DEFAULT_PRECISION,
+        distinguish_types: bool = False,
+        **kwargs,
     ):
         """Construct a fitting net for dipole.
 
@@ -27,22 +55,79 @@
         - resnet_dt: Using time-step in the ResNet construction.
         """
         super().__init__()
+        self.var_name = var_name
         self.ntypes = ntypes
-        self.embedding_width = embedding_width
-        self.out_dim = out_dim
+        self.dim_descrpt = dim_descrpt
+        self.dim_out = dim_out
         self.neuron = neuron
+        self.distinguish_types = distinguish_types
+        self.use_tebd = not self.distinguish_types
         self.resnet_dt = resnet_dt
+        self.numb_fparam = numb_fparam
+        self.numb_aparam = numb_aparam
+        self.activation_function = activation_function
+        self.precision = precision
+        self.prec = PRECISION_DICT[self.precision]
 
-        filter_layers = []
-        one = ResidualDeep(
-            0, embedding_width, neuron, 0.0, out_dim=self.out_dim, resnet_dt=resnet_dt
+        # init constants
+        if self.numb_fparam > 0:
+            self.register_buffer(
+                "fparam_avg",
+                torch.zeros(self.numb_fparam, dtype=self.prec, device=device),
+            )
+            self.register_buffer(
+                "fparam_inv_std",
+                torch.ones(self.numb_fparam, dtype=self.prec, device=device),
+            )
+        else:
+            self.fparam_avg, self.fparam_inv_std = None, None
+        if self.numb_aparam > 0:
+            self.register_buffer(
+                "aparam_avg",
+                torch.zeros(self.numb_aparam, dtype=self.prec, device=device),
+            )
+            self.register_buffer(
+                "aparam_inv_std",
+                torch.ones(self.numb_aparam, dtype=self.prec, device=device),
+            )
+        else:
+            self.aparam_avg, self.aparam_inv_std = None, None
+
+        in_dim = self.dim_descrpt + self.numb_fparam + self.numb_aparam
+        out_dim = 3
+
+        self.filter_layers = NetworkCollection(
+            1 if self.distinguish_types else 0,
+            self.ntypes,
+            network_type="fitting_network",
+            networks=[
+                FittingNet(
+                    in_dim,
+                    out_dim,
+                    self.neuron,
+                    self.activation_function,
+                    self.resnet_dt,
+                    self.precision,
+                    bias_out=True,
+                )
+                for ii in range(self.ntypes if self.distinguish_types else 1)
+            ],
         )
-        filter_layers.append(one)
-        self.filter_layers = torch.nn.ModuleList(filter_layers)
 
         if "seed" in kwargs:
             log.info("Set seed to %d in fitting net.", kwargs["seed"])
             torch.manual_seed(kwargs["seed"])
 
-    def forward(self, inputs, atype, atype_tebd, rot_mat):
+    def forward(
+        self,
+        descriptor: torch.Tensor,
+        atype: torch.Tensor,
+        gr: Optional[torch.Tensor] = None,
+        g2: Optional[torch.Tensor] = None,
+        h2: Optional[torch.Tensor] = None,
+        fparam: Optional[torch.Tensor] = None,
+        aparam: Optional[torch.Tensor] = None,
+    ):
"""Based on embedding net output, alculate total energy.

Args:
@@ -55,13 +140,66 @@
         -------
         - vec_out: output vector. Its shape is [nframes, nloc, 3].
         """
-        nframes, nloc, _ = inputs.size()
-        if atype_tebd is not None:
-            inputs = torch.concat([inputs, atype_tebd], dim=-1)
-        vec_out = self.filter_layers[0](inputs)  # Shape is [nframes, nloc, m1]
-        assert list(vec_out.size()) == [nframes, nloc, self.out_dim]
-        vec_out = vec_out.view(-1, 1, self.out_dim)
-        vec_out = (
-            torch.bmm(vec_out, rot_mat).squeeze(-2).view(nframes, nloc, 3)
-        )
-        return vec_out
+        xx = descriptor
+        nframes, nloc, nd = xx.shape
+        # check input dim
+        if nd != self.dim_descrpt:
+            raise ValueError(
+                f"get an input descriptor of dim {nd}, "
+                f"which is not consistent with {self.dim_descrpt}."
+            )
+        # check fparam dim, concatenate to input descriptor
+        if self.numb_fparam > 0:
+            assert fparam is not None, "fparam should not be None"
+            assert self.fparam_avg is not None
+            assert self.fparam_inv_std is not None
+            if fparam.shape[-1] != self.numb_fparam:
+                raise ValueError(
+                    f"get an input fparam of dim {fparam.shape[-1]}, "
+                    f"which is not consistent with {self.numb_fparam}."
+                )
+            fparam = fparam.view([nframes, self.numb_fparam])
+            nb, _ = fparam.shape
+            t_fparam_avg = self._extend_f_avg_std(self.fparam_avg, nb)
+            t_fparam_inv_std = self._extend_f_avg_std(self.fparam_inv_std, nb)
+            fparam = (fparam - t_fparam_avg) * t_fparam_inv_std
+            fparam = torch.tile(fparam.reshape([nframes, 1, -1]), [1, nloc, 1])
+            xx = torch.cat(
+                [xx, fparam],
+                dim=-1,
+            )
+        # check aparam dim, concatenate to input descriptor
+        if self.numb_aparam > 0:
+            assert aparam is not None, "aparam should not be None"
+            assert self.aparam_avg is not None
+            assert self.aparam_inv_std is not None
+            if aparam.shape[-1] != self.numb_aparam:
+                raise ValueError(
+                    f"get an input aparam of dim {aparam.shape[-1]}, "
+                    f"which is not consistent with {self.numb_aparam}."
+                )
+            aparam = aparam.view([nframes, nloc, self.numb_aparam])
+            nb, nloc, _ = aparam.shape
+            t_aparam_avg = self._extend_a_avg_std(self.aparam_avg, nb, nloc)
+            t_aparam_inv_std = self._extend_a_avg_std(self.aparam_inv_std, nb, nloc)
+            aparam = (aparam - t_aparam_avg) * t_aparam_inv_std
+            xx = torch.cat(
+                [xx, aparam],
+                dim=-1,
+            )
+
+        outs = torch.zeros_like(atype).unsqueeze(-1)  # jit assertion
+        if self.use_tebd:
+            atom_dipole = self.filter_layers.networks[0](xx)
+            outs = outs + atom_dipole  # Shape is [nframes, nloc, 3]
+        else:
+            for type_i, ll in enumerate(self.filter_layers.networks):
+                mask = (atype == type_i).unsqueeze(-1)
+                mask = torch.tile(mask, (1, 1, self.dim_out))
+                atom_dipole = ll(xx)
+                atom_dipole = atom_dipole * mask
+                outs = outs + atom_dipole  # Shape is [nframes, nloc, 3]
+        outs = (
+            torch.bmm(outs, gr).squeeze(-2).view(nframes, nloc, 3)
+        )  # Shape is [nframes, nloc, 3]
+        return {self.var_name: outs.to(env.GLOBAL_PT_FLOAT_PRECISION)}
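A note on the register_buffer calls in the new __init__: the fparam/aparam statistics are stored as buffers rather than parameters, so they are saved in checkpoints and follow .to(device) moves without ever being touched by the optimizer. A minimal sketch of the pattern, independent of deepmd (the class name and sizes here are made up):

import torch

class FrameParamNorm(torch.nn.Module):
    def __init__(self, numb_fparam: int):
        super().__init__()
        # Buffers: persisted in state_dict, moved with the module, not trained.
        self.register_buffer("fparam_avg", torch.zeros(numb_fparam))
        self.register_buffer("fparam_inv_std", torch.ones(numb_fparam))

    def forward(self, fparam: torch.Tensor) -> torch.Tensor:
        # Same standardization the fitting net applies before concatenation.
        return (fparam - self.fparam_avg) * self.fparam_inv_std

norm = FrameParamNorm(2)
print(norm(torch.tensor([[1.0, 2.0]])))  # identity until stats are filled in
print(list(norm.state_dict()))           # ['fparam_avg', 'fparam_inv_std']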

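The forward pass selects networks in two modes: with type embedding (use_tebd) a single network handles all atoms; otherwise one network per atom type is evaluated and the results are combined through a type mask, as in the else branch above. A toy sketch of the masking arithmetic, with linear layers standing in for the real FittingNet MLPs and all shapes made up:

import torch

ntypes, dim_descrpt, dim_out = 2, 4, 3
nets = [torch.nn.Linear(dim_descrpt, dim_out) for _ in range(ntypes)]

xx = torch.randn(1, 5, dim_descrpt)      # [nframes, nloc, dim_descrpt]
atype = torch.tensor([[0, 1, 0, 1, 1]])  # per-atom type index

outs = torch.zeros(1, 5, dim_out)
for type_i, ll in enumerate(nets):
    mask = (atype == type_i).unsqueeze(-1)  # [nframes, nloc, 1]
    mask = torch.tile(mask, (1, 1, dim_out))
    outs = outs + ll(xx) * mask             # only rows of this type survive
print(outs.shape)                           # torch.Size([1, 5, 3])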

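Finally, the closing torch.bmm is what turns the rotationally invariant network output into an equivariant dipole: the per-atom matrix gr carries the frame orientation, so rotating the input rotates the output vector. As written in this snapshot, outs of shape [nframes, nloc, 3] is handed to bmm directly, but torch.bmm requires 3-D operands; a working formulation flattens frames and atoms into the batch dimension first. A sketch under that assumption, with all shapes illustrative and m1 the fitting-net output dimension:

import torch

nframes, nloc, m1 = 2, 4, 3
outs = torch.randn(nframes, nloc, m1)   # invariant per-atom output
gr = torch.randn(nframes, nloc, m1, 3)  # equivariant per-atom matrix

vec = (
    torch.bmm(
        outs.view(-1, 1, m1),  # [nframes * nloc, 1, m1]
        gr.view(-1, m1, 3),    # [nframes * nloc, m1, 3]
    )
    .squeeze(-2)
    .view(nframes, nloc, 3)
)
print(vec.shape)  # torch.Size([2, 4, 3])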