(CVPR2023) AsyFOD

Data Preparation

Sim10K Key: juf6 (The synthetic dataset includes only the car class.)

KITTI Key: 8brv (The KITTI dataset includes only the car class.)

Cityscapes_car_8_1 Key: p69u (8 images randomly selected from cityscapes_car.)

Cityscapes_car Key: 4ym4 (The Cityscapes dataset includes only the car class.)

Cityscapes_8cls Key: rg4z (The Cityscapes dataset includes 8 classes.)

Cityscapes_8cls_foggy Key: bjgr (The Foggy Cityscapes dataset includes 8 classes.)

Viped Key: a9y7 (The synthetic dataset includes only the person class.)

coco_person_60 Key: vg1m (60 images randomly selected from coco_person.)

coco_person Key: je89 (The COCO dataset includes only the person class.)

You can also convert the raw data into YOLO format via the tools shown here.
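
For reference, below is a minimal sketch of what such a conversion typically looks like, assuming you already have absolute (xmin, ymin, xmax, ymax) boxes and the image size. The file names and paths are placeholders, not from this repo, and the actual tools linked above may differ.

import os

def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    # YOLO labels store a class id plus the normalized box center and size in [0, 1]
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

# One .txt file per image, one "class_id x_c y_c w h" line per box (placeholder file name)
os.makedirs("labels", exist_ok=True)
with open("labels/example_image.txt", "w") as f:
    f.write("0 %.6f %.6f %.6f %.6f\n" % voc_to_yolo(100, 150, 300, 400, 2048, 1024))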

Requirements

This repo is based on the YOLOv5 repo. Please follow that repo for installation and preparation. This project is built on YOLOv5 v3.0, and the proposed methods can also be easily migrated to more advanced YOLO versions.
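
As a quick sanity check after installation (a generic sketch, not a script from this repo), you can confirm that PyTorch and the GPUs are visible before training:

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available(), "| GPUs:", torch.cuda.device_count())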

Training

  1. Modify the data config in the data subfolder. Please refer to the instructions in the yaml files; a small sanity-check sketch is shown after the training command below.

  2. The command below can reproduce the corresponding results mentioned in the paper.

python train.py --img 640 --batch 12 --epochs 300 --data ./data/city_and_foggy8_3.yaml --cfg ./models/yolov5x.yaml --hyp ./data/hyp_aug/mm1.yaml --weights '' --name "test"
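
Before launching training, it can help to confirm that the data yaml points at existing folders. The snippet below is a minimal sketch assuming the standard YOLOv5 v3.0 keys (train/val/nc/names); the AsyFOD-specific fields for the few-shot target split may use other names, so follow the instructions inside the provided yaml files.

import os
import yaml

with open("./data/city_and_foggy8_3.yaml") as f:
    cfg = yaml.safe_load(f)

# Report whether the train/val entries resolve to existing paths
for key in ("train", "val"):
    path = cfg.get(key)
    status = "ok" if isinstance(path, str) and os.path.exists(path) else "check this path"
    print(f"{key}: {path} ({status})")

print("classes:", cfg.get("nc"), cfg.get("names"))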

The code has been released but still needs further clean-up. If you are interested in more details of the ablation studies, please refer to the folder "train_files_for_abl", where I have listed nearly every train.py variant. I hope you find them helpful.

I will try my best to keep updating the repo. You can also check our previous work AcroFOD: https://github.com/Hlings/AcroFOD.

  • If you find this paper/repository useful, please consider citing our paper:
@inproceedings{gao2023asyfod,
  title={AsyFOD: An Asymmetric Adaptation Paradigm for Few-Shot Domain Adaptive Object Detection},
  author={Gao, Yipeng and Lin, Kun-Yu and Yan, Junkai and Wang, Yaowei and Zheng, Wei-Shi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3261--3271},
  year={2023}
}
