
[AAAI 2024] Few-Shot Learning from Augmented Label-Uncertain Queries in Bongard-HOI

This is the official repository for the paper "Few-Shot Learning from Augmented Label-Uncertain Queries in Bongard-HOI" (AAAI 2024).

Project page: https://chelsielei.github.io/LUQ/

Installation

Install PyTorch:

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Install the remaining dependencies from requirements.txt:

pip install -r requirements.txt
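
A quick sanity check (a minimal sketch; the exact version string depends on your build) confirms that PyTorch is installed and can see the GPU:

    import torch

    # The printed version should correspond to the CUDA 11.3 build installed above.
    print(torch.__version__)

    # Expected to print True on a machine with a working CUDA setup.
    print(torch.cuda.is_available())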

Dataset

Data Preparation

  1. Download the images from the HAKE dataset. You may follow the official instructions; for convenience, all the images required by Bongard-HOI can also be downloaded here. Extract the images to ./assets/data/hake/images so that the file structure looks like:

    data
    └── hake
        └── images
            ├── hake_images_20190730
            ├── hcvrd
            ├── hico_20160224_det
            │   └── images
            │       ├── test2015
            │       └── train2015
            ├── openimages
            │   └── images
            ├── pic
            │   └── image
            │       ├── train
            │       └── val
            └── vcoco
                ├── train2014
                └── val2014
    
  2. Download the Bongard-HOI annotations from here and extract them to ./Bongard/cache

  3. Download the detected bounding boxes from here and extract them to ./Bongard/cache

  4. Download the pretrained ResNet-50 from here and extract it to ./Bongard/cache

  5. Download the detected human bounding boxes by DEKR from here and extract them to ./Bongard/cache/DEKR

  6. Download the generated background-blended queries from here and extract them to ./Bongard/cache/ldm_selected_v4. Also download the related annotation file from here and put it into ./Bongard/cache. (A quick path check covering all of these locations follows this list.)
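
Before training, it may help to verify the expected layout. The sketch below checks only the directories named in the steps above; the individual filenames inside ./Bongard/cache are not listed in this README, so they are not checked:

    import os

    # Directories expected after completing the data-preparation steps above.
    expected = [
        "./assets/data/hake/images",
        "./Bongard/cache",
        "./Bongard/cache/DEKR",
        "./Bongard/cache/ldm_selected_v4",
    ]

    for path in expected:
        status = "ok" if os.path.isdir(path) else "MISSING"
        print(f"{status:8s} {path}")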

Training

cd Bongard 
python train_my_metric_st_ldm.py --config-file "configs/my_metric_st_ldm.yaml" 

Testing

cd Bongard 
python train_my_metric_st_ldm.py --config-file "configs/my_metric_st_ldm.yaml"  --test_only --test_model "<path to best_model.pth>"

Model Zoo

We provide weights pre-trained on Bongard-HOI for potential downstream applications.

Accuracy (%) on the four Bongard-HOI test splits (SO/UO = seen/unseen object, SA/UA = seen/unseen action):

    Model   SOSA    SOUA    UOSA    UOUA    Avg     Weights
    Ours    68.14   70.94   68.45   67.43   68.74   weights
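
To inspect the released checkpoint outside the training script, something like the following may work (the filename and the checkpoint's internal layout are assumptions; the supported route is passing the file via --test_model as shown in the Testing section):

    import torch

    # "best_model.pth" is a placeholder; point this at the downloaded weights.
    ckpt = torch.load("best_model.pth", map_location="cpu")

    # The exact layout depends on how train_my_metric_st_ldm.py saves checkpoints:
    # it may be a plain state_dict or a dict that wraps one.
    state = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

    # Print the first few parameter names and shapes as a rough inspection.
    for name in list(state)[:5]:
        print(name, tuple(state[name].shape))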

Citation

If you find our work useful for your research, please consider citing us:

@inproceedings{lei2024few,
  title={Few-Shot Learning from Augmented Label-Uncertain Queries in Bongard-HOI},
  author={Lei, Qinqian and Wang, Bo and Tan, Robby T},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={4},
  pages={2974--2982},
  year={2024}
}

Acknowledgement

We gratefully thank the authors of Bongard-HOI and DSN for open-sourcing their code.
