[CVPR 2022] Exploring Structure-aware Transformer over Interaction Proposals for Human-Object Interaction Detection

Paper introduction

Recent high-performing Human-Object Interaction (HOI) detection techniques have been strongly influenced by Transformer-based object detectors (i.e., DETR). Nevertheless, most of them directly map parametric interaction queries into a set of HOI predictions through a vanilla Transformer in a one-stage manner, leaving the rich inter- and intra-interaction structure under-exploited. In this work, we design a novel Transformer-style HOI detector, Structure-aware Transformer over Interaction Proposals (STIP). This design decomposes HOI set prediction into two subsequent phases: interaction proposals are first generated, and the non-parametric proposals are then transformed into HOI predictions via a structure-aware Transformer. The structure-aware Transformer upgrades the vanilla Transformer by additionally encoding the holistic semantic structure among interaction proposals as well as the local spatial structure of the human/object within each proposal, so as to strengthen the HOI predictions. Extensive experiments on the V-COCO and HICO-DET benchmarks demonstrate the effectiveness of STIP, with superior results reported over state-of-the-art HOI detectors.
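For intuition, the two-phase pipeline can be sketched in PyTorch-style code. The sketch below is an illustrative simplification under assumed names and dimensions (InteractionProposalNetwork, StructureAwareDecoder, and the pairing heuristic are hypothetical), not the repository's actual modules:

# Illustrative sketch of STIP's two-phase design. All names and dimensions
# are hypothetical simplifications, not the repository's actual modules.
import torch
import torch.nn as nn

class InteractionProposalNetwork(nn.Module):
    # Phase 1: pair detected human/object features into non-parametric
    # interaction proposals, keeping only the top-scoring pairs.
    def __init__(self, d_model=256):
        super().__init__()
        self.pair_scorer = nn.Linear(2 * d_model, 1)

    def forward(self, human_feats, object_feats, top_k=32):
        n_h, n_o = human_feats.size(0), object_feats.size(0)
        pairs = torch.cat([
            human_feats.unsqueeze(1).expand(n_h, n_o, -1),
            object_feats.unsqueeze(0).expand(n_h, n_o, -1),
        ], dim=-1).reshape(n_h * n_o, -1)
        scores = self.pair_scorer(pairs).squeeze(-1)
        k = min(top_k, pairs.size(0))
        return pairs[scores.topk(k).indices]          # (k, 2*d_model)

class StructureAwareDecoder(nn.Module):
    # Phase 2: refine proposals with a Transformer decoder over image
    # features; in the paper, attention is additionally biased by the
    # inter-interaction (semantic) and intra-interaction (spatial) structure.
    def __init__(self, d_model=512, num_interactions=29):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead=8)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.cls_head = nn.Linear(d_model, num_interactions)

    def forward(self, proposals, image_memory):
        # proposals: (k, d_model) -> (k, 1, d_model); memory: (hw, 1, d_model)
        hs = self.decoder(proposals.unsqueeze(1), image_memory)
        return self.cls_head(hs.squeeze(1))           # (k, num_interactions)

# Toy usage with random features standing in for detector outputs.
props = InteractionProposalNetwork()(torch.randn(3, 256), torch.randn(5, 256))
logits = StructureAwareDecoder()(props, torch.randn(49, 1, 512))
print(props.shape, logits.shape)                      # (15, 512), (15, 29)

The key difference from one-stage DETR-style detectors is that the decoder consumes non-parametric proposal features rather than learned parametric query embeddings.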

Installation

1. Environmental Setup

$ conda create -n STIP python=3.7
$ conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch # PyTorch 1.7.1, torchvision 0.8.2, CUDA 11.0
$ conda install cython scipy
$ pip install pycocotools
$ pip install opencv-python
$ pip install wandb
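
Optionally, run a quick sanity check (a minimal sketch, assuming the conda environment above is active) to confirm the installed versions and GPU visibility:

# Minimal environment check (hypothetical helper, not part of the repo);
# the imports themselves double as installation checks.
import torch, torchvision, cv2, pycocotools
print("torch:", torch.__version__)               # expected: 1.7.1
print("torchvision:", torchvision.__version__)   # expected: 0.8.2
print("opencv:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())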

2. HOI dataset setup

Our current version supports experiments on:

  • The V-COCO and HICO-DET datasets. Download the datasets under the cloned repository directory.
  • For HICO-DET, we use the annotation files provided by the PPDM authors. Download the list of actions as list_action.txt and place it under the untarred hico_20160224_det directory. If these download links are broken, please download them from hico_20160224_det.

The files should be placed as follows:

# V-COCO setup
$ git clone https://github.com/s-gupta/v-coco.git
$ cd v-coco
$ ln -s [:COCO_DIR] coco/images # COCO_DIR contains images of train2014 & val2014
$ python script_pick_annotations.py [:COCO_DIR]/annotations

# HICO-DET setup
$ tar -zxvf hico_20160224_det.tar.gz # move the untarred folder under the cloned repository

# resulting dataset layout
STIP
 │─ v-coco
 │   │─ data
 │   │   │─ instances_vcoco_all_2014.json
 │   │   :
 │   └─ coco
 │       │─ images
 │       │   │─ train2014
 │       │   │   │─ COCO_train2014_000000000009.jpg
 │       │   │   :
 │       │   └─ val2014
 │       │       │─ COCO_val2014_000000000042.jpg
 :       :       :
 │─ hico_20160224_det
 │       │─ list_action.txt
 │       │─ annotations
 │       │   │─ trainval_hico.json
 │       │   │─ test_hico.json
 │       │   └─ corre_hico.npy
 :       :

If you downloaded the datasets into your own directory, simply change the --data_path argument to the directory containing the datasets.

--data_path [:your_own_directory]/[v-coco/hico_20160224_det]
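
To verify the layout before training, a small script like the following (a hypothetical helper, not shipped with the repo) can check for the expected files:

import os

EXPECTED = [
    "v-coco/data/instances_vcoco_all_2014.json",
    "v-coco/coco/images/train2014",
    "v-coco/coco/images/val2014",
    "hico_20160224_det/list_action.txt",
    "hico_20160224_det/annotations/trainval_hico.json",
    "hico_20160224_det/annotations/test_hico.json",
    "hico_20160224_det/annotations/corre_hico.npy",
]

def check_layout(root="."):
    # Print OK/MISS for each expected dataset path relative to `root`.
    for rel in EXPECTED:
        status = "OK  " if os.path.exists(os.path.join(root, rel)) else "MISS"
        print(status, rel)

check_layout()  # pass your custom --data_path parent directory if different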

3. Training/Testing on V-COCO

python STIP_main.py --validate \
    --num_hoi_queries 32 --batch_size 4 --lr 5e-5 --HOIDet --hoi_aux_loss --no_aux_loss \
    --dataset_file vcoco --data_path v-coco --detr_weights https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth \
    --output_dir checkpoints/vcoco --group_name STIP_debug --run_name vcoco_run1

  • Add the --eval option for evaluation.

4. Training/Testing on HICO-DET

Training with a DETR detector pretrained on COCO:

python STIP_main.py --validate \
    --num_hoi_queries 32 --batch_size 4 --lr 5e-5 --HOIDet --hoi_aux_loss --no_aux_loss \
    --dataset_file hico-det --data_path hico_20160224_det --detr_weights https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth \
    --output_dir checkpoints/hico-det --group_name STIP_debug --run_name hicodet_run1

Jointly fine-tune the object detector and the HOI detector:

python STIP_main.py --validate \
    --num_hoi_queries 32 --batch_size 2 --lr 1e-5 --HOIDet --hoi_aux_loss \
    --dataset_file hico-det --data_path hico_20160224_det \
    --output_dir checkpoints/hico-det --group_name STIP_debug --run_name hicodet_run1/jointly-tune \
    --resume checkpoints/hico-det/STIP_debug/best.pth --train_detr
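
Before resuming, it can help to confirm that the checkpoint file loads; a minimal check (assuming a standard DETR-style checkpoint dict; the exact keys may differ):

import torch

# Inspect the checkpoint produced by the first-stage training above.
ckpt = torch.load("checkpoints/hico-det/STIP_debug/best.pth", map_location="cpu")
print(sorted(ckpt.keys()))  # DETR-style checkpoints typically hold 'model', 'optimizer', 'epoch', ...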

Pre-trained models:

License

This repo is released under the Apache License, Version 2.0.

Acknowledgement

This repo is built on DETR and HOTR. Thanks for their wonderful work.

Citation

If you find this code helpful for your research, please cite our paper.

@InProceedings{Zhang_2022_CVPR,
    author    = {Zhang, Yong and Pan, Yingwei and Yao, Ting and Huang, Rui and Mei, Tao and Chen, Chang-Wen},
    title     = {Exploring Structure-Aware Transformer Over Interaction Proposals for Human-Object Interaction Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {19548-19557}
}
