Devel update #30

Merged: 4 commits into iProzd:fix_gpu_ut on Jan 30, 2024
Conversation

iProzd (Owner) commented on Jan 30, 2024

No description provided.

njzjz and others added 4 commits January 30, 2024 11:44
If the PyTorch CXX11 ABI does not match the TensorFlow one, throw the following error:
```
-- PyTorch CXX11 ABI: 0
CMake Error at CMakeLists.txt:162 (message):
  PyTorch CXX11 ABI mismatch TensorFlow: 0 != 1
```
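
For reference, the same compatibility condition can be probed from Python; this is a hedged sketch using the public APIs `torch.compiled_with_cxx11_abi()` and `tf.sysconfig.CXX11_ABI_FLAG`, not the CMake check this commit adds:

```
# Hedged sketch: probe the same ABI condition from Python, independent
# of the CMake check added by this commit.
import tensorflow as tf
import torch

pt_abi = int(torch.compiled_with_cxx11_abi())  # 1 if built with the CXX11 ABI
tf_abi = int(tf.sysconfig.CXX11_ABI_FLAG)      # the same flag as reported by TensorFlow

if pt_abi != tf_abi:
    # Mirrors the CMake error above: C++ extensions cannot link against
    # both libraries when their libstdc++ ABIs differ.
    raise RuntimeError(f"PyTorch CXX11 ABI mismatch TensorFlow: {pt_abi} != {tf_abi}")
```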

Signed-off-by: Jinzhe Zeng <[email protected]>
Fix #3120.

One can disable building the TensorFlow backend during `pip install` by
setting `DP_ENABLE_TENSORFLOW=0`.
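
For example, assuming a from-source install of the package (the exact install target may differ):

```
# Hypothetical usage: build from a source checkout without the TensorFlow backend.
DP_ENABLE_TENSORFLOW=0 pip install .
```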

---------

Signed-off-by: Jinzhe Zeng <[email protected]>
…g net. (#3199)

- add dp model format (a backend-independent definition) for the fitting
- refactor torch support to be compatible with the dp model format
- fix an mlp issue: the idt should only be used when a skip connection is available
- add the tools `to_numpy_array` and `to_torch_tensor` (a hedged sketch of these helpers follows below)
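
The commit message only names the two helpers; below is a minimal sketch of what such converters typically look like (hypothetical bodies, not necessarily the code merged here):

```
# Hypothetical sketch of the two helpers named above; the actual
# implementations merged in this PR may differ (e.g. dtype/device handling).
import numpy as np
import torch


def to_numpy_array(xx: torch.Tensor) -> np.ndarray:
    # Detach from the autograd graph and move to host memory first.
    return xx.detach().cpu().numpy()


def to_torch_tensor(xx: np.ndarray) -> torch.Tensor:
    # Share memory with the numpy array where possible.
    return torch.from_numpy(xx)
```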

---------

Co-authored-by: Han Wang <[email protected]>
iProzd merged commit cb4cc67 into iProzd:fix_gpu_ut on Jan 30, 2024
56 checks passed
Comment on lines +481 to +482
```
# if atype_tebd is not None:
#     inputs = torch.concat([inputs, atype_tebd], dim=-1)
```

Check notice (Code scanning / CodeQL): Commented-out code
This comment appears to contain commented-out code.
```
if nap > 0:
    iap = rng.normal(size=(self.nf, self.nloc, nap - 1))
    with self.assertRaises(ValueError) as context:
        ret0 = ifn0(dd[0], atype, fparam=ifp, aparam=iap)
```

Check warning (Code scanning / CodeQL): Variable defined multiple times (test)
This assignment to 'ret0' is unnecessary as it is redefined before this value is used.
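
One conventional way to clear this warning (a sketch, not necessarily the fix adopted in the PR) is to drop the unused binding, since `assertRaises` only needs the call to run:

```
# Sketch: the return value is never inspected, so don't bind it.
with self.assertRaises(ValueError) as context:
    ifn0(dd[0], atype, fparam=ifp, aparam=iap)
```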
```
self.assertEqual(onk.dtype, npp)
with self.assertRaises(ValueError) as ee:
    foo = foo.astype(np.int32)
    bar = to_torch_tensor(foo)
```

Check warning (Code scanning / CodeQL): Variable defined multiple times (test)
This assignment to 'bar' is unnecessary as it is redefined before this value is used.
```
def test_new_old(
    self,
):
    rng = np.random.default_rng()
```

Check notice (Code scanning / CodeQL): Unused local variable (test)
Variable rng is not used.
```
)
atype = torch.tensor(self.atype_ext[:, :nloc], dtype=int, device=env.DEVICE)

od = 1
```

Check notice (Code scanning / CodeQL): Unused local variable (test)
Variable od is not used.
```
@classmethod
def deserialize(cls, data: dict) -> "InvarFitting":
    data = copy.deepcopy(data)
    variables = data.pop("@variables")
```

Check failure (Code scanning / CodeQL): Modification of parameter with default
This expression mutates a default value.
```
def deserialize(cls, data: dict) -> "InvarFitting":
    data = copy.deepcopy(data)
    variables = data.pop("@variables")
    nets = data.pop("nets")
```

Check failure (Code scanning / CodeQL): Modification of parameter with default
This expression mutates a default value.
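
CodeQL raises this alert when a parameter that may hold a mutable default value is mutated (here via `data.pop`). A common idiom that sidesteps the alert is a `None` default plus an internal copy; the sketch below is a hypothetical standalone example, not the `InvarFitting` code from this PR:

```
# Sketch of the None-default idiom that avoids CodeQL's
# "modification of parameter with default" alert. Hypothetical
# standalone example, not the InvarFitting code from this PR.
import copy
from typing import Any, Optional


class Example:
    @classmethod
    def deserialize(cls, data: Optional[dict] = None) -> "Example":
        # Default to a fresh dict; deep-copy so the pop below never
        # mutates the caller's object (or a shared default value).
        data = copy.deepcopy(data) if data is not None else {}
        variables: Any = data.pop("@variables", None)
        return cls()
```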
```
if use_aparam_as_mask:
    raise NotImplementedError("use_aparam_as_mask is not implemented")
if use_aparam_as_mask:
    raise NotImplementedError("use_aparam_as_mask is not implemented")
```

Check warning (Code scanning / CodeQL): Unreachable code
This statement is unreachable.
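
The second guard is dead code: when the condition is true, the first check already raises. The fix is presumably simple deduplication (sketch):

```
# Sketch: keep a single guard; a repeated identical check can never fire.
if use_aparam_as_mask:
    raise NotImplementedError("use_aparam_as_mask is not implemented")
```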
iProzd added a commit that referenced this pull request Jan 30, 2024
This reverts commit cb4cc67.