ReCo: Retrieve and Co-segment for Zero-shot Transfer

      ____       ______    
     / __ \___  / ____/___ 
    / /_/ / _ \/ /   / __ \
   / _, _/  __/ /___/ /_/ /
  /_/ |_|\___/\____/\____/ 

Official PyTorch implementation for ReCo (NeurIPS 2022). Details can be found in the paper. [Paper] [Project page]


Preparation

1. Download datasets

To evaluate ReCo, you first need to download the benchmark datasets from their official websites.

Note that Cityscapes, ImageNet2012, and KITTI-STEP require you to sign up for an account.

To reproduce ReCo+ on COCO-Stuff as in our paper, you additionally need to download COCO-Stuff10K.

Please do not rename the (sub)directories, as the code assumes the original directory names. For ease of use, we advise arranging the downloaded datasets into the following directory structure:

{your_dataset_directory}
├──cityscapes
│  ├──gtFine
│  └──leftImg8bit
├──cocostuff
│  ├──annotations
│  ├──curated
│  └──images
├──cocostuff10k
│  ├──annotations
│  ├──imageLists
│  └──images
├──ImageNet2012
│  ├──train
│  └──val
└──kitti_step
   ├──panoptic_maps
   ├──train
   └──val
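
As a quick sanity check, a short script along these lines can confirm the expected layout before running any experiment. This is not part of the repository; the dataset root path is a placeholder:

from pathlib import Path

# Placeholder: replace with your own {your_dataset_directory}.
dataset_root = Path("/path/to/your_dataset_directory")

# Expected subdirectories, mirroring the tree above.
expected = {
    "cityscapes": ["gtFine", "leftImg8bit"],
    "cocostuff": ["annotations", "curated", "images"],
    "cocostuff10k": ["annotations", "imageLists", "images"],
    "ImageNet2012": ["train", "val"],
    "kitti_step": ["panoptic_maps", "train", "val"],
}

for dataset, subdirs in expected.items():
    for subdir in subdirs:
        path = dataset_root / dataset / subdir
        print(f"{'OK ' if path.is_dir() else 'MISSING'} {path}")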

2. Install required Python packages

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install -c conda-forge tqdm
conda install -c conda-forge matplotlib
conda install -c conda-forge timm
conda install -c conda-forge opencv
conda install -c anaconda ujson
conda install -c conda-forge pyyaml
pip install opencv-python
pip install git+https://github.com/lucasb-eyer/pydensecrf.git
pip install git+https://github.com/openai/CLIP.git

Additionally, please install mmcv following the instructions on its official website.
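
After installation, a quick import test along these lines should run without errors. It is only a sanity check, not part of the repository:

# Minimal environment check: each import corresponds to a package installed above.
import torch
import torchvision
import torchaudio
import tqdm
import matplotlib
import timm
import cv2            # opencv
import ujson
import yaml           # pyyaml
import pydensecrf.densecrf
import clip
import mmcv

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("CLIP backbones:", clip.available_models())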

ReCo inference

To evaluate ReCo, you first need to set a few directory/file paths (e.g., the dataset directory). Open the reco_$DATASET_NAME.yaml file in the configs directory, where $DATASET_NAME is one of cityscapes, coco_stuff, or kitti_step, and fill in the "dir_ckpt", "dir_dataset", and "dir_imagenet" arguments with your own paths:

dir_ckpt: [YOUR_DESIRED_CHECKPOINT_DIR]
dir_dataset: [YOUR_DATASET_DIR]
dir_imagenet: [YOUR_ImageNet2012_DIR]
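
If you prefer to script this step, the configs are plain YAML and can be edited with the pyyaml package installed above. A minimal sketch, assuming the three keys sit at the top level of the file as shown above:

import yaml

# One of the benchmark configs, relative to the repository root.
config_path = "configs/reco_cityscapes.yaml"

with open(config_path) as f:
    config = yaml.safe_load(f)

# Placeholder paths: point these at your own directories.
config["dir_ckpt"] = "/path/to/checkpoints"
config["dir_dataset"] = "/path/to/your_dataset_directory/cityscapes"
config["dir_imagenet"] = "/path/to/your_dataset_directory/ImageNet2012"

with open(config_path, "w") as f:
    yaml.safe_dump(config, f)

Note that round-tripping a file through pyyaml drops comments and may reorder keys, so editing by hand is also fine.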

To validate on Cityscapes, COCO-Stuff, or KITTI-STEP, move to the scripts directory and run

bash reco_$DATASET_NAME.sh

Note that this will first extract and save image embeddings for the ImageNet2012 images. This happens only on the first run and can take up to a few hours. If you want to avoid it, please download the pre-computed image embeddings via this link (~4.3 GB) and put the downloaded file into your ImageNet2012 directory.
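
For intuition, the cached embeddings come from a frozen CLIP image encoder applied to each ImageNet2012 image, along the lines of the sketch below. The backbone choice and the file path are illustrative, not necessarily what the scripts use:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Illustrative backbone; the repository scripts select their own.
model, preprocess = clip.load("ViT-L/14", device=device)

@torch.no_grad()
def embed_image(path: str) -> torch.Tensor:
    """Return an L2-normalised CLIP embedding for a single image."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    feature = model.encode_image(image)
    return feature / feature.norm(dim=-1, keepdim=True)

# Embeddings like this are computed once per ImageNet2012 image and cached.
embedding = embed_image("/path/to/ImageNet2012/val/example.JPEG")
print(embedding.shape)  # torch.Size([1, 768]) for ViT-L/14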

In addition, if you also want to avoid computing reference image embeddings for the categories in a benchmark, please download the pre-computed reference image embeddings file for that benchmark and put it into the benchmark directory (e.g., put the reference image embeddings for the Cityscapes categories into your Cityscapes directory).

ReCo+ training/inference

Unlike ReCo, which does not involve any training, ReCo+ is trained on the training split of each benchmark. To avoid using human annotations, ReCo+ utilises predictions made by ReCo as pseudo-labels.
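
Conceptually, this means the segmenter is supervised with a standard pixel-wise cross-entropy loss against ReCo's predictions instead of ground-truth masks. A minimal sketch, where the shapes, class count, and ignore value are illustrative rather than the repository's actual settings:

import torch
import torch.nn.functional as F

# Illustrative shapes: batch of 2, 27 classes, 64x64 crops.
logits = torch.randn(2, 27, 64, 64)               # segmenter output
pseudo_masks = torch.randint(0, 27, (2, 64, 64))  # ReCo predictions as labels

# Pixels without a confident pseudo-label can be excluded via ignore_index
# (255 is a common convention, not necessarily what this repository uses).
pseudo_masks[0, :8, :8] = 255

loss = F.cross_entropy(logits, pseudo_masks, ignore_index=255)
print(loss.item())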

1. Generate pseudo-masks

To compute pseudo-masks for training ReCo+ on Cityscapes, COCO-Stuff, or KITTI-STEP, run

bash reco_$DATASET_NAME.sh "train"

By default, the pseudo-masks will be stored in the dataset directory. If you want to skip this step, please download the pre-computed pseudo-masks.

2. Training

Once the pseudo-masks are created (or downloaded and uncompressed), set the path to the directory containing them in the corresponding configuration file. For example, open the reco_plus_cityscapes.yaml file and change the dir_pseudo_masks argument as appropriate. Then, run

bash reco_plus_$DATASET_NAME.sh

Note that an evaluation is run every 1,000 iterations during training, and the weights of the best model so far are saved to your checkpoint directory.
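
The checkpointing logic amounts to the pattern below. The model, evaluation function, and iteration count are placeholders here, since the actual training code lives in the repository's scripts:

import torch
import torch.nn as nn

# Placeholders: the repository trains a DeepLabv3+-based segmenter and
# computes mean IoU on the validation split.
model = nn.Conv2d(3, 27, kernel_size=1)

def evaluate(model: nn.Module) -> float:
    """Placeholder for validation: returns mean IoU in [0, 1]."""
    return torch.rand(1).item()

best_miou = 0.0
for iteration in range(1, 5001):
    # ... one optimisation step on the pseudo-masks would go here ...
    if iteration % 1000 == 0:
        miou = evaluate(model)
        if miou > best_miou:
            best_miou = miou
            torch.save(model.state_dict(), "best_model.pt")
            print(f"iter {iteration}: new best mIoU {miou:.3f}, checkpoint saved")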

3. Inference

To run inference with pre-trained weights, run

bash reco_plus_$DATASET_NAME.sh $PATH_TO_WEIGHTS

Pre-trained weights

We provide the pre-trained weights for ReCo+:

benchmark    IoU (%)   pixel accuracy (%)   link
Cityscapes   24.2      83.7                 weights (~183.1 MB)
COCO-Stuff   32.6      54.1                 weights (~183.1 MB)
KITTI-STEP   31.9      75.3                 weights (~183.1 MB)
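
If you want to inspect a downloaded checkpoint outside the provided scripts, it can be opened as a standard PyTorch file. The path below is a placeholder, and the exact layout of the saved object (a plain state dict versus a wrapper dict) is an assumption:

import torch

# Placeholder path to a downloaded ReCo+ checkpoint.
checkpoint = torch.load("reco_plus_cityscapes.pt", map_location="cpu")

# Unwrap a "state_dict" key if present; otherwise assume a plain state dict.
state_dict = checkpoint.get("state_dict", checkpoint) if isinstance(checkpoint, dict) else checkpoint

# Print a few parameter names and shapes.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))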

Citation

@inproceedings{shin2022reco,
  title = {ReCo: Retrieve and Co-segment for Zero-shot Transfer},
  author = {Shin, Gyungin and Xie, Weidi and Albanie, Samuel},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2022}
}

Acknowledgements

We borrowed code for CLIP, DeepLabv3+, DenseCLIP, DINO, and ViT from their respective repositories.

If you have any questions about our code/implementation, please contact us at gyungin [at] robots [dot] ox [dot] ac [dot] uk.
