
Copy alphafold from colabfold #1

Merged · merged 115 commits into main on Dec 19, 2023

Conversation

ajtritt (Collaborator) commented Dec 19, 2023

This PR merges in the version of alphafold used by ColabFold. I am copying their code over so we can work on a version of alphafold that will work with ColabFold.

When you install ColabFold locally, it will install alphafold from this PyPI package, which I believe is just a fork of DeepMind's alphafold repository that they've modified to work with the rest of the ColabFold code base.

Since we've decided to work with ColabFold, I thought it would be good to have their modifications in our codebase to build on top of.
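As a quick sanity check on where alphafold comes from in a local ColabFold environment, here is a minimal sketch (nothing here pins down the exact distribution name) that lists which installed PyPI distribution provides the top-level `alphafold` module:

```python
# Sketch: report which installed distribution provides the `alphafold` module
# (Python 3.8+; distributions without top_level.txt metadata are skipped).
from importlib import metadata

for dist in metadata.distributions():
    top_level = dist.read_text("top_level.txt") or ""
    if "alphafold" in top_level.split():
        print(dist.metadata["Name"], dist.version)
```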

sokrypton and others added 25 commits January 30, 2023 11:14
attempt to speed up the function by removing ensembles when num_ensembles=1; important for batch compute that uses masking (a sketch of this shortcut follows)
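A minimal sketch of that shortcut (function and argument names are illustrative, not the actual alphafold code): when only one ensemble member is requested, the averaging machinery is pure overhead and can be bypassed:

```python
import jax

def embed(batch, num_ensembles, run_one_pass):
    if num_ensembles == 1:
        return run_one_pass(batch)  # single pass, skip the averaging loop
    outputs = [run_one_pass(batch) for _ in range(num_ensembles)]
    # average every output leaf across the ensemble members
    return jax.tree_util.tree_map(lambda *xs: sum(xs) / num_ensembles, *outputs)
```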
* fix memory leaks
* use bfloat16 for representations (see the sketch after this list)
* move confidence compute inside module.py
* move multimer key splitting to model.py
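For the bfloat16 item, a minimal sketch of the pattern (assumed here, not the actual module.py diff): keeping large intermediate representations in bfloat16 roughly halves activation memory, and precision-sensitive steps can cast back to float32:

```python
import jax.numpy as jnp

def to_bf16(representations: dict) -> dict:
    # Cast float32 arrays to bfloat16; leave other dtypes (e.g. int masks) alone.
    return {
        k: v.astype(jnp.bfloat16) if v.dtype == jnp.float32 else v
        for k, v in representations.items()
    }

reps = {"msa": jnp.zeros((512, 256, 256)), "pair": jnp.zeros((256, 256, 128))}
reps = to_bf16(reps)  # downstream layers consume bfloat16, upcast where needed
```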

various edits to fix memory leaks
memory leak fix

* v2.3.4 - fix memory leaks

another attempt to fix memory leaks!
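The commits carry the specifics, but a common shape for this kind of fix in JAX code (an assumption, not the exact edits here) is to copy results off the accelerator and drop device references so XLA buffers can actually be freed between predictions:

```python
import jax

def predict_to_host(apply_fn, params, batch):
    # Run the model, copy outputs to host NumPy arrays, then drop the device
    # references so the backing XLA buffers can be freed before the next call.
    out = apply_fn(params, batch)
    host_out = jax.device_get(out)
    del out
    return host_out
```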

* Update config.py

* bugfix - num-ensemble
Update OpenMM imports to work with new OpenMM API
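That change tracks OpenMM 7.6, which moved the `simtk.openmm` namespace to a top-level `openmm` package; a typical compatibility shim (a sketch, not necessarily the exact diff) looks like:

```python
try:
    # OpenMM >= 7.6 exposes everything at top level
    import openmm
    from openmm import app, unit
except ImportError:
    # older OpenMM installs still use the simtk namespace
    from simtk import openmm, unit
    from simtk.openmm import app
```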
msa_activation is (N, L, 256).
In colabfold v1.5.2 we return msa_activation[0] as our single representation vector.
It looks like there is one extra linear layer that converts msa_activation[0] to single_activation:
![image](https://github.com/sokrypton/alphafold/assets/4187522/1183a0fb-1a07-4626-9ada-12e32fd6891c)
If anything, the (L, 256) representation might be better, as you might be losing some information in the extra transformation at the end.

But since people are asking, I'm adding the transformation back so that the output is (L, 384).
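Shape-wise, the restored step is a single learned projection of the first MSA row; here is a minimal sketch with a random weight standing in for the trained parameters:

```python
import jax
import jax.numpy as jnp

N, L = 64, 200
msa_activation = jnp.zeros((N, L, 256))  # Evoformer MSA stack output

# Project the first MSA row (L, 256) through one linear layer to (L, 384).
# The weight below is random; in the model it is a trained parameter.
w = jax.random.normal(jax.random.PRNGKey(0), (256, 384))
b = jnp.zeros(384)
single_activation = msa_activation[0] @ w + b
print(single_activation.shape)  # (L, 384)
```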

typo
ajtritt merged commit a38c14f into main on Dec 19, 2023