Accuracy mismatch when comparing with papers #7
Comments
The results in Table 4 are all re-implemented by ourselves.

Okay, that makes sense. Thank you for the quick reply.

Could you please provide the training script for the few-shot experiments? Only base2new_train.sh is available here. Thank you.
I commented out or removed the following override during training: `DATASET.SUBSAMPLE_CLASSES base`. Example:

```shell
cd ..
DATA=XXXXX
CFG=vit_b16_ep100_ctxv1
for DATASET in caltech101 dtd eurosat fgvc_aircraft food101 oxford_flowers oxford_pets stanford_cars ucf101
```
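For what it's worth, here is a minimal sketch of how such a few-shot launch loop might be completed, assuming a CoOp-style `train.py` interface (`--root`, `--dataset-config-file`, `--config-file`, `DATASET.NUM_SHOTS`); the exact flag names and paths are assumptions, not taken from this repo's scripts, and the commands are only echoed (dry run):

```shell
#!/usr/bin/env bash
# Hedged sketch of a few-shot training loop. Flag names, config paths, and
# the train.py interface are assumptions modeled on CoOp-style codebases.
set -e

# Build the training command for one (dataset, shots) pair.
# Note: no "DATASET.SUBSAMPLE_CLASSES base" override here -- as described
# above, that flag is removed so few-shot training uses all classes.
make_cmd() {
  local dataset=$1 shots=$2
  echo "python train.py" \
    "--root ${DATA}" \
    "--dataset-config-file configs/datasets/${dataset}.yaml" \
    "--config-file configs/trainers/${CFG}.yaml" \
    "--output-dir output/${dataset}/shots_${shots}" \
    "DATASET.NUM_SHOTS ${shots}"
}

DATA=/path/to/data          # placeholder, as in the snippet above
CFG=vit_b16_ep100_ctxv1

for DATASET in caltech101 dtd eurosat fgvc_aircraft food101 \
               oxford_flowers oxford_pets stanford_cars ucf101; do
  for SHOTS in 1 2 4 8 16; do
    make_cmd "${DATASET}" "${SHOTS}"   # echo the command instead of running it
  done
done
```

Replacing the `echo` inside `make_cmd` with an actual invocation (and pointing `DATA` at the real dataset root) would turn the dry run into a launcher.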
Thank you. |
Thank you for the good work. I have a doubt regarding the values reported in the paper. In Table 4 you report 54.87 for the EuroSAT 4-shot setting for MaPLe and 75.02 for PromptSRC. However, in Table 13 of the PromptSRC paper, the 4-shot accuracy of MaPLe is reported as 84.50 and that of PromptSRC as 86.30. Could you please explain such a large discrepancy in results? The backbone is also the same, i.e., ViT-B/16.
Thank you.