Thanks for the great paper and code! I have a question about the MoCo v3 encoder: the paper mentions that the latent representations are regularized on a hypersphere. I'm fairly new to MoCo v3, so can you confirm whether this regularization was part of the original MoCo v3 pretraining, or is it something you added? I assume such regularized latents are quite important, so if I were to replace the encoder with, say, an MAE encoder, which to my knowledge does not regularize its latents in any way, this might not work as well?
Thanks for your interest! Please see this paper: https://arxiv.org/pdf/2005.10242. From its "uniformity" analysis, the contrastive loss naturally regularizes representations onto a hypersphere. An MAE encoder should also work, but it might need a stronger representation generator.
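For anyone curious what that uniformity analysis looks like in practice, here is a minimal sketch of the uniformity metric from the linked paper (Wang & Isola, 2020). This is illustrative code, not taken from this repo; the batch size, embedding dimension, and temperature `t=2.0` are assumptions for the example:

```python
import torch
import torch.nn.functional as F

def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity metric from Wang & Isola (2020): the log of the average
    pairwise Gaussian potential between L2-normalized embeddings.
    Lower values mean the latents are spread more uniformly over the
    unit hypersphere."""
    z = F.normalize(z, dim=1)              # project embeddings onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)  # squared pairwise Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()

# Hypothetical usage: probe a batch of latents from any encoder
# (e.g. MoCo v3 or MAE) to see how uniformly they cover the hypersphere.
z = torch.randn(256, 128)  # stand-in for a batch of 256 latents of dim 128
print(uniformity_loss(z).item())
```

A contrastively pretrained encoder like MoCo v3 tends to score well on this metric as a by-product of its loss, which is the sense in which its latents come "regularized on a hypersphere" for free; an MAE encoder has no such term, so its latents would likely score worse without extra work downstream.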