
Accuracy mismatch with comparing papers #7

Open
debarshigit opened this issue Nov 26, 2024 · 5 comments
Comments

@debarshigit

Thank you for the good work. I have a question regarding the values reported in the paper. In Table 4 you report 54.87 for the EuroSAT 4-shot setting for MaPLe and 75.02 for PromptSRC. However, in Table 13 of the PromptSRC paper, the 4-shot accuracy is reported as 84.50 for MaPLe and 86.30 for PromptSRC. Could you please explain such a large variance in results? The backbone is also the same, i.e., ViT-B/16.

Thank you.

@htyao89 (Owner) commented Nov 26, 2024

> In Table 4 you have reported 54.87 for EuroSAT 4-shot setting for MaPLe and 75.02 for PromptSRC. However, in Table 13 of PromptSRC paper, 4-shot accuracy of MaPLe is reported as 84.50 and 86.30 for PromptSRC. Could you please explain such a huge variance in results?

The results in Table 4 were all produced by our own re-implementations of the compared methods, which explains the difference from the numbers reported in the PromptSRC paper.

@debarshigit (Author)

Okay, that makes sense. Thank you for the quick reply.

@debarshigit (Author)

Could you please provide the training script for the few-shot experiments? Only base2new_train.sh is available here.

Thank you.

@htyao89 (Owner) commented Dec 3, 2024

> Could you please provide the training script for few shot experiments. Only base2new_train.sh is available here.

For few-shot training, comment out or remove the following config override: `DATASET.SUBSAMPLE_CLASSES base`.

Example:

```bash
cd ..

DATA=XXXXX   # path to your datasets root
TRAINER=TCP
WEIGHT=1.0

CFG=vit_b16_ep100_ctxv1
CTP=end    # class token position (end or middle)
NCTX=4     # number of context tokens
SHOTS=4    # number of shots (1, 2, 4, 8, 16)
CSC=False  # class-specific context (False or True)
FOLDER=output_1108

for DATASET in caltech101 dtd eurosat fgvc_aircraft food101 oxford_flowers oxford_pets stanford_cars ucf101
do
  for SEED in 1 2 3
  do
    DIR=${FOLDER}${NCTX}/base2new/train_base/${DATASET}/shots${SHOTS}_${WEIGHT}/${TRAINER}/${CFG}/seed${SEED}
    if [ -d "$DIR" ]; then
      echo "Results are available in ${DIR}. Skip this job"
    else
      echo "Run this job and save the output to ${DIR}"
      python train.py \
        --root ${DATA} \
        --seed ${SEED} \
        --trainer ${TRAINER} \
        --dataset-config-file configs/datasets/${DATASET}.yaml \
        --config-file configs/trainers/${TRAINER}/${CFG}.yaml \
        --output-dir ${DIR} \
        TRAINER.COOP.N_CTX ${NCTX} \
        TRAINER.COOP.CSC ${CSC} \
        TRAINER.COOP.W ${WEIGHT} \
        TRAINER.COOP.CLASS_TOKEN_POSITION ${CTP} \
        DATASET.NUM_SHOTS ${SHOTS}
    fi
  done
done
```
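For contrast, the base-to-new script (base2new_train.sh in the repo) keeps the subsample override so training sees only the base classes. The sketch below is illustrative, not taken from the repo: the variable values are chosen as an example, and it only shows the one-line difference between the two modes plus the output-directory path one loop iteration produces.

```bash
# Illustrative sketch (assumed values). The few-shot script above omits the
# override entirely; the base-to-new script appends it as the final override:
#
#   python train.py ... DATASET.NUM_SHOTS ${SHOTS} DATASET.SUBSAMPLE_CLASSES base
#
# The per-job output directory is built the same way in both modes:
FOLDER=output_1108
NCTX=4
DATASET=eurosat
SHOTS=4
WEIGHT=1.0
TRAINER=TCP
CFG=vit_b16_ep100_ctxv1
SEED=1
DIR=${FOLDER}${NCTX}/base2new/train_base/${DATASET}/shots${SHOTS}_${WEIGHT}/${TRAINER}/${CFG}/seed${SEED}
echo "$DIR"
# prints: output_11084/base2new/train_base/eurosat/shots4_1.0/TCP/vit_b16_ep100_ctxv1/seed1
```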

@debarshigit (Author)

Thank you.
