# ClipCap

We have modified the ClipCap codebase for our VQA task. In particular, we forked the original repo (our ClipCap branch) and made additional changes. This code is already part of the repository you cloned, provided you included `--recurse-submodules` as directed in the main branch README.
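If you cloned without `--recurse-submodules`, you can fetch the submodule after the fact:

```bash
git submodule update --init --recursive
```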

## Downloading pretrained models
```bash
# We use this model: MLP mapping network and finetuned GPT-2 (pretrained on COCO)
gdown 1IdaBtMSvtyzF0ByVaBHtvM0JYSXRExRX -O ${PT_MODEL_DIR}/clipcap_coco_weights.pt

# Finetuning on our dataset
python ClipCap/train.py --log-dir ${LOG_DIR}/clipcap --aokvqa-dir ${AOKVQA_DIR} --train-features ${FEATURES_DIR}/clip-ViT-B-32_train.pt --val-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt --pretrained-model ${PT_MODEL_DIR}/clipcap_coco_weights.pt --generation-target answer --mapping mlp --finetune-gpt

# Predicting (e.g. for epoch 3)
python ClipCap/predict.py --log-dir ${LOG_DIR}/clipcap --epoch 3 --aokvqa-dir ${AOKVQA_DIR} --split val --eval-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt --out ${PREDS_DIR}/clipcap_val-da.json
```
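To pick the best checkpoint, it can help to sweep predictions over several epochs. A minimal sketch (the epoch range and the `_epoch${epoch}` output suffix are illustrative, not a convention of the scripts):

```bash
# Hypothetical sweep over the first few checkpoints; adjust the range to your run
for epoch in 1 2 3; do
    python ClipCap/predict.py --log-dir ${LOG_DIR}/clipcap --epoch ${epoch} --aokvqa-dir ${AOKVQA_DIR} \
        --split val --eval-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt \
        --out ${PREDS_DIR}/clipcap_val-da_epoch${epoch}.json
done
```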

For the multiple-choice setting, adjust the following arguments:

```bash
# ClipCap/train.py: --log-dir ${LOG_DIR}/clipcap-mc --prompt-with-choices
# ClipCap/predict.py: --log-dir ${LOG_DIR}/clipcap-mc --map-to-choices --out ${PREDS_DIR}/clipcap_val-mc.json
```
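Fully assembled, the multiple-choice run would look like this (a convenience sketch combining the base commands above with these flags; verify against your own paths):

```bash
# Train with answer choices in the prompt
python ClipCap/train.py --log-dir ${LOG_DIR}/clipcap-mc --aokvqa-dir ${AOKVQA_DIR} \
    --train-features ${FEATURES_DIR}/clip-ViT-B-32_train.pt --val-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt \
    --pretrained-model ${PT_MODEL_DIR}/clipcap_coco_weights.pt --generation-target answer \
    --mapping mlp --finetune-gpt --prompt-with-choices

# Predict, snapping outputs onto the answer choices (epoch 3 is an example)
python ClipCap/predict.py --log-dir ${LOG_DIR}/clipcap-mc --epoch 3 --aokvqa-dir ${AOKVQA_DIR} --split val \
    --eval-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt --map-to-choices \
    --out ${PREDS_DIR}/clipcap_val-mc.json
```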
For training with a Transformer mapping network:
```bash
# Grab the Transformer ClipCap weights (pretrained on COCO)
gdown 1GYPToCqFREwi285wPLhuVExlz7DDUDfJ -O ${PT_MODEL_DIR}/clipcap_transformer_weights.pt

# ClipCap/train.py: --train-features ${FEATURES_DIR}/clip-RN50x4_train.pt --pretrained-model ${PT_MODEL_DIR}/clipcap_transformer_weights.pt --mapping transformer
# ClipCap/predict.py: --eval-features ${FEATURES_DIR}/clip-RN50x4_val.pt
```
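Assembled, the Transformer-mapping training command would look roughly as follows. One assumption is flagged: only `--train-features` is listed above, but `--val-features` presumably has to switch to the RN50x4 features as well, so that substitution is made here too:

```bash
# Other flags carry over from the base training command above.
# Assumption: --val-features also switches to RN50x4 (only --train-features is listed above).
python ClipCap/train.py --log-dir ${LOG_DIR}/clipcap --aokvqa-dir ${AOKVQA_DIR} \
    --train-features ${FEATURES_DIR}/clip-RN50x4_train.pt --val-features ${FEATURES_DIR}/clip-RN50x4_val.pt \
    --pretrained-model ${PT_MODEL_DIR}/clipcap_transformer_weights.pt --generation-target answer \
    --mapping transformer --finetune-gpt
```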

## Generating Captions & Rationales

To generate rationales, we repeat the ClipCap training and prediction steps above, with some modifications. We train only one model, shared between the DA and MC settings.

```bash
mkdir -p ${LOG_DIR}/gpt3-inputs

# ClipCap/train.py: --log-dir ${LOG_DIR}/clipcap-rationale --generation-target rationale
# Be sure to exclude --prompt-with-choices

# ClipCap/predict.py: --log-dir ${LOG_DIR}/clipcap-rationale --beam-search --out ${LOG_DIR}/gpt3-inputs/clipcap-rationales_val.json
# Be sure to exclude --map-to-choices
```
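Put together, the rationale model would be trained and run like this (a sketch assembling the base commands with the modifications above; epoch 3 is illustrative):

```bash
# Train a rationale generator (note: no --prompt-with-choices)
python ClipCap/train.py --log-dir ${LOG_DIR}/clipcap-rationale --aokvqa-dir ${AOKVQA_DIR} \
    --train-features ${FEATURES_DIR}/clip-ViT-B-32_train.pt --val-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt \
    --pretrained-model ${PT_MODEL_DIR}/clipcap_coco_weights.pt --generation-target rationale \
    --mapping mlp --finetune-gpt

# Decode rationales with beam search (note: no --map-to-choices)
python ClipCap/predict.py --log-dir ${LOG_DIR}/clipcap-rationale --epoch 3 --aokvqa-dir ${AOKVQA_DIR} --split val \
    --eval-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt --beam-search \
    --out ${LOG_DIR}/gpt3-inputs/clipcap-rationales_val.json
```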
## Prompting GPT-3 with rationales

First see Querying GPT-3.

We first generate ground-truth rationale files:

```bash
for split in train val; do
    python gpt3/rationale_inputs.py --aokvqa-dir ${AOKVQA_DIR} --split ${split} --out ${LOG_DIR}/gpt3-inputs/rationales_${split}.json
done
```
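As a quick optional sanity check that these files were written and are non-empty (works whether the JSON top level is a list or a dict):

```bash
python -c "import json, sys; print(len(json.load(open(sys.argv[1]))))" ${LOG_DIR}/gpt3-inputs/rationales_val.json
```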

You can prompt GPT-3 as described in Querying GPT-3, but with the following modified arguments:

```bash
# For prompting with ground-truth rationales:

# gpt3/query_gpt3.py: --train-context ${LOG_DIR}/gpt3-inputs/rationales_train.json --context ${LOG_DIR}/gpt3-inputs/rationales_val.json --out ${PREDS_DIR}/gpt3-rationales_val-da.json
# remap_predictions.py: --pred ${PREDS_DIR}/gpt3-rationales_val-da.json --out ${PREDS_DIR}/gpt3-rationales_val-mc.json

# For prompting with generated rationales:

# gpt3/query_gpt3.py: --train-context ${LOG_DIR}/gpt3-inputs/rationales_train.json --context ${LOG_DIR}/gpt3-inputs/clipcap-rationales_val.json --out ${PREDS_DIR}/gpt3-clipcap-rationales_val-da.json
# remap_predictions.py: --pred ${PREDS_DIR}/gpt3-clipcap-rationales_val-da.json --out ${PREDS_DIR}/gpt3-clipcap-rationales_val-mc.json
```
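For intuition, `--map-to-choices` and `remap_predictions.py` both snap a free-form answer onto the closest of the answer choices. A minimal sketch of that idea using string similarity (an illustration only, not the scripts' actual implementation; names here are hypothetical):

```python
import difflib

def map_to_choices(prediction: str, choices: list[str]) -> str:
    # Pick the choice whose string is most similar to the free-form prediction.
    return max(
        choices,
        key=lambda c: difflib.SequenceMatcher(None, prediction.lower(), c.lower()).ratio(),
    )

# Hypothetical usage:
print(map_to_choices("riding a horse", ["horseback riding", "swimming", "cycling", "rowing"]))
# -> horseback riding
```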
## Generating and prompting with captions

Please read everything else above first.

We can generate COCO-style captions with the original ClipCap weights:

```bash
python ClipCap/predict_clipcap.py --ckpt ${PT_MODEL_DIR}/clipcap_coco_weights.pt --mapping mlp --aokvqa-dir ${AOKVQA_DIR} --split val --eval-features ${FEATURES_DIR}/clip-ViT-B-32_val.pt --beam-search --out ${LOG_DIR}/gpt3-inputs/clipcap-captions_val.json
```

We also generate ground-truth captions (for the train and val splits):

```bash
for split in train val; do
    python gpt3/caption_inputs.py --aokvqa-dir ${AOKVQA_DIR} --coco-dir ${COCO_DIR} --split ${split} --out ${LOG_DIR}/gpt3-inputs/captions_${split}.json
done
```

Query GPT-3 with the original arguments plus the following modifications, then produce predictions:

```bash
# For prompting with ground-truth captions:

# gpt3/query_gpt3.py: --train-context ${LOG_DIR}/gpt3-inputs/captions_train.json --context ${LOG_DIR}/gpt3-inputs/captions_val.json --out ${PREDS_DIR}/gpt3-captions_val-da.json
# remap_predictions.py: --pred ${PREDS_DIR}/gpt3-captions_val-da.json --out ${PREDS_DIR}/gpt3-captions_val-mc.json

# For prompting with generated captions:

# gpt3/query_gpt3.py: --train-context ${LOG_DIR}/gpt3-inputs/captions_train.json --context ${LOG_DIR}/gpt3-inputs/clipcap-captions_val.json --out ${PREDS_DIR}/gpt3-clipcap-captions_val-da.json
# remap_predictions.py: --pred ${PREDS_DIR}/gpt3-clipcap-captions_val-da.json --out ${PREDS_DIR}/gpt3-clipcap-captions_val-mc.json
```