
Add PhACE architecture #434

Open · wants to merge 229 commits into base: main

Conversation

@frostedoyster (Collaborator) commented Dec 17, 2024

Contributor (creator of pull-request) checklist

  • Add your architecture to the experimental/stable folder
    (src/metatrain/experimental/<architecture_name>). See the
    [Architecture life cycle](docs/src/dev-docs/architecture-life-cycle.rst) document
    for the requirements.
  • Add the default hyperparameter file at
    src/metatrain/experimental/<architecture_name>/default-hypers.yml
  • Add a CI workflow file at .github/workflows/<architecture_name>.yml
  • Add an entry for the architecture's dependencies to the optional-dependencies
    section of pyproject.toml
  • Tests: torch-scriptability, basic functionality (invariance, fitting, prediction);
    a rotational-invariance sketch follows this list
  • Add maintainers as codeowners in CODEOWNERS
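
A rough illustration of the rotational-invariance check mentioned in the tests item above, as a minimal sketch: the model here is a hypothetical stand-in that maps raw positions to a scalar, whereas metatrain's actual tests build full systems and call the architecture's evaluation entry point.

```python
import torch


def test_rotational_invariance(model: torch.nn.Module) -> None:
    """Check that a scalar prediction is unchanged by a rigid rotation."""
    positions = torch.randn(10, 3)
    # random proper rotation from the QR decomposition of a Gaussian matrix
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one axis to make det(q) = +1
    torch.testing.assert_close(model(positions), model(positions @ q.T))
```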

Reviewer checklist

New experimental architectures

  • Capability to fit at least a single quantity and predict it, verified through CI
    tests.
  • Compatibility with JIT compilation using TorchScript
    (https://pytorch.org/docs/stable/jit.html); a minimal scriptability check is
    sketched after this list.
  • Provision of reasonable default hyperparameters.
  • A contact person designated as the maintainer, mentioned in __maintainers__ and in
    the CODEOWNERS file.
  • All external dependencies must be pip-installable. They are not required to be on
    PyPI; a public git repository or another public URL hosting the package is
    acceptable.
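
To make the TorchScript requirement concrete, here is a minimal sketch of the kind of check the CI tests can run; TinyModel is a hypothetical stand-in for an architecture's exported model, not metatrain's actual test code.

```python
import torch


class TinyModel(torch.nn.Module):
    """Hypothetical stand-in for an architecture's exported model."""

    def __init__(self, n_features: int = 8) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(n_features, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.linear(features)


def test_torchscript() -> None:
    # torch.jit.script rejects unsupported Python constructs, which is
    # exactly what this requirement is meant to catch early.
    model = TinyModel()
    scripted = torch.jit.script(model)
    x = torch.randn(4, 8)
    torch.testing.assert_close(scripted(x), model(x))
```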

📚 Documentation preview 📚: https://metatrain--434.org.readthedocs.build/en/434/

@frostedoyster marked this pull request as ready for review January 24, 2025 09:28
@Luthaf (Member) left a comment:

This is lacking a documentation page explaining the architecture, especially since there is AFAIK no associated publication yet.

Also, there is quite a bit of duplicated code with other places in the ecosystem. While not a blocking point for this PR, I would really prefer that we don't have to maintain multiple copies of the same code. I think you should already be able to use things from metatensor-learn; and we should check that the CG code will be able to do everything required here!

src/metatrain/cli/train.py (outdated, resolved)
tox.ini (outdated, resolved)
tox.ini (outdated, resolved)
src/metatrain/experimental/phace/default-hypers.yaml (outdated, resolved)
num_element_channels: 64
radial_basis:
  mlp: true
  E_max: 50.0
Member:

should this be defined in terms of max_radial instead? I don't think users will understand E_max

Collaborator (Author):

The correspondence is not 1:1

@frostedoyster (Collaborator, Author) commented Feb 2, 2025:

I will change it to max_eigenvalue, but using some max-radial-like argument removes information so I wouldn't do it

Member:

fair enough, but I really doubt people will be able to modify this and understand what it does.

> removes information

I'm not clear on what information is lost here?
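
As a hedged illustration of the per-channel information an eigenvalue cutoff carries (assuming a Laplacian-eigenvalue-style radial basis; the actual PhACE basis and units may differ): a single E_max gives each angular channel l its own number of radial functions, which one global max_radial would flatten into a single count.

```python
import math


def radial_functions_per_l(e_max: float, cutoff: float = 5.0, l_max: int = 8) -> dict:
    """Count radial functions per angular channel below an eigenvalue cutoff.

    Illustrative only: eigenvalues are approximated as (z_ln / cutoff)**2 with
    z_ln ~ pi * (n + l / 2), the large-n asymptotic of the n-th zero of the
    spherical Bessel function j_l.
    """
    counts = {}
    for ell in range(l_max + 1):
        n = 0
        while (math.pi * (n + 1 + ell / 2) / cutoff) ** 2 <= e_max:
            n += 1
        counts[ell] = n
    return counts


# Higher-l channels keep fewer radial functions for the same cutoff; replacing
# E_max with a single max_radial would discard that per-channel structure.
print(radial_functions_per_l(e_max=50.0))
```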

)


class EquivariantLinear(torch.nn.Module):
Member:

can't you use the version in metatensor-learn?

Collaborator (Author):

It needs the custom Linear that is defined above, which initializes the weights differently from plain torch.nn.Linear.
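
For illustration, a hedged sketch of what such a custom Linear could look like (an assumption, not the layer in this PR): bias-free, with normally distributed weights scaled by 1/sqrt(fan_in), since a bias would break equivariance in the l > 0 channels and the default torch.nn.Linear uses Kaiming-uniform initialization.

```python
import torch


class Linear(torch.nn.Linear):
    """Hypothetical Linear with a non-default weight initialization."""

    def __init__(self, in_features: int, out_features: int) -> None:
        # no bias: a constant shift would not transform correctly for l > 0
        super().__init__(in_features, out_features, bias=False)
        with torch.no_grad():
            # unit-variance normal weights scaled by 1/sqrt(fan_in)
            self.weight.normal_(mean=0.0, std=1.0 / in_features**0.5)
```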

src/metatrain/experimental/phace/modules/layers.py (outdated, resolved)
Comment on lines +185 to +187
def get_cartesian_vectors(
positions, cells, species, cell_shifts, pairs, structure_pairs, structure_offsets
):
Member:

Why not use the metatensor NL directly? It already has everything required, except the species_center/species_neighbor metadata

@frostedoyster (Collaborator, Author) commented Feb 2, 2025:

Because it's not as fast when there are multiple structures in the batch.
The metadata needs to be changed a bit, since successive structures need to have their indices shifted. All the metadata indices here do come from the neighbor lists before being fed into this function.
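
A hedged sketch of the index bookkeeping being described, with a hypothetical helper rather than the PR's actual code: each structure's local neighbor-list indices are shifted by the cumulative number of atoms in the preceding structures, so that all pairs address one concatenated positions tensor.

```python
import torch


def offset_pair_indices(
    pairs_per_structure: list[torch.Tensor],  # each (n_pairs_i, 2), local atom indices
    n_atoms_per_structure: list[int],
) -> torch.Tensor:
    """Shift per-structure pair indices into a single batched index space."""
    offset = 0
    shifted = []
    for pairs, n_atoms in zip(pairs_per_structure, n_atoms_per_structure):
        shifted.append(pairs + offset)
        offset += n_atoms
    return torch.cat(shifted, dim=0)
```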

Member:

ok, could you mention this in a code comment? So that the next person does not try to change it back without adding a bunch of benchmarks.

from .layers import EquivariantLinear


class EquivariantTensorAdd(torch.nn.Module):
Member:

Can you explain what this is doing? I could not quite understand from reading the code.
