Moco encoder #37

Open · A-Thorley opened this issue Sep 19, 2024 · 1 comment
@A-Thorley

Thanks for the great paper and code! I have a question about the MoCo v3 encoder: the paper mentions that the latent representations are regularized on a hypersphere. I am fairly new to MoCo v3, so can you confirm whether this type of regularization was already part of the original MoCo v3 pretraining, or is it something you added? I assume such regularized latents are quite important, so if I were to replace the encoder with, say, an MAE encoder, which to my knowledge does not regularize its latents in any way, this might not work as well?
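
(For context, by hypersphere regularization I mean L2-normalizing the embeddings to unit norm, as in the minimal sketch below; the batch size and embedding dimension are placeholders, not values from this repo.)

```python
import torch
import torch.nn.functional as F

# Placeholder encoder output: (batch, embed_dim).
z = torch.randn(256, 256)

# MoCo v3 L2-normalizes the projection-head output before the
# contrastive loss, which constrains every embedding to the unit
# hypersphere.
z = F.normalize(z, dim=1)
assert torch.allclose(z.norm(dim=1), torch.ones(256), atol=1e-5)
```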

@LTH14 (Owner) commented Sep 19, 2024

Thanks for your interest! Please check this paper: https://arxiv.org/pdf/2005.10242 (Wang & Isola, 2020). From its "uniformity" analysis, the contrastive loss naturally regularizes representations onto the hypersphere. An MAE encoder should also work, but it might need a stronger representation generator. The two quantities from that paper are sketched below.
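
A minimal sketch of the alignment and uniformity losses from that paper, computed on L2-normalized embeddings (this closely mirrors the snippet the authors publish; the batch size and dimension in the example are arbitrary):

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    """Alignment: positive pairs (x_i, y_i) should map to nearby points.
    x, y: (N, D) L2-normalized embeddings of N positive pairs."""
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    """Uniformity: embeddings should spread evenly over the unit
    hypersphere; computed from pairwise Gaussian potentials."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Arbitrary example batch of unit-norm embeddings and slightly
# perturbed positives.
x = F.normalize(torch.randn(128, 256), dim=1)
y = F.normalize(x + 0.1 * torch.randn_like(x), dim=1)
print(align_loss(x, y), uniform_loss(x))
```

Minimizing the uniformity term pushes embeddings to spread out over the sphere, which is the hypersphere regularization effect the question refers to.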
