
The reproduced result #19

Open
geehokim opened this issue Oct 18, 2022 · 0 comments
I ran the Table 2 experiment using this repo, and the final all-class accuracies on CIFAR10 and CIFAR100 are only 60.28 and 38.60, respectively, which are much lower than the reported results.

To reproduce the paper, I ran the following three scripts sequentially: "contrastive_train.sh", "extract_features.sh", and "k_means.sh".
Following the implementation details in the original paper, I used the ViT-B/16 backbone with DINO pre-trained weights (downloaded from this repo: https://github.com/facebookresearch/dino) and fine-tuned only the final transformer block.
The total number of clusters for k-means is set to the total number of classes.
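For reference, the all-class accuracy after k-means is presumably computed with the standard clustering-accuracy protocol: Hungarian matching between cluster ids and ground-truth labels. A minimal sketch with scikit-learn and SciPy, assuming features have already been extracted (the feature array, label array, and class count here are synthetic placeholders, not values from this repo):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def cluster_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Accuracy under the best one-to-one mapping of cluster ids to class ids."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    # Contingency table: cost[t, p] = how many samples of class t fell in cluster p.
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    # Hungarian algorithm minimizes cost, so match on (max - cost) to maximize overlap.
    rows, cols = linear_sum_assignment(cost.max() - cost)
    return cost[rows, cols].sum() / y_true.size

rng = np.random.default_rng(0)
num_classes = 10                      # CIFAR10: 10, CIFAR100: 100
feats = rng.normal(size=(500, 64))    # placeholder for extracted ViT features
labels = rng.integers(0, num_classes, size=500)

# K is set to the total number of classes, as in the setup described above.
preds = KMeans(n_clusters=num_classes, n_init=10, random_state=0).fit_predict(feats)
acc = cluster_accuracy(labels, preds)
print(f"all-class accuracy: {acc:.4f}")
```

If the repo's own evaluation script uses a different matching (e.g. separate matching for old/new classes), the numbers would not be directly comparable, so it would help to confirm which protocol produced the reported results.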

The default settings of this repo seem to be for Stanford Cars, so could you share the detailed parameters for CIFAR10/CIFAR100?
