Leveraging Anatomical Consistency for Multi-Object Detection in Ultrasound Images via Source-free Unsupervised Domain Adaptation
This project is the PyTorch implementation of AATS.
Our experiments were run on a single RTX 3090 (CUDA >= 11.0).
Currently, this code supports the public datasets CardiacUDA and FUSH.
- Python ≥ 3.6
- PyTorch ≥ 1.5 and torchvision that matches the PyTorch installation.
- Detectron2 == 0.5
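The version constraints above can be checked before installation goes any further. The following is a small stdlib-only sketch (not part of the original codebase) that reports whether the installed `torch`, `torchvision`, and `detectron2` packages meet the listed requirements; the helper names are our own.

```python
# Sketch: verify the README's version requirements (torch >= 1.5,
# detectron2 == 0.5). Uses only the standard library; missing packages
# are reported rather than raising.
import importlib


def version_tuple(v):
    """Convert a version string like '1.5.0+cu110' to (1, 5, 0),
    dropping local suffixes and non-numeric parts."""
    core = v.split("+")[0]
    return tuple(int(p) for p in core.split(".") if p.isdigit())


def check(name, minimum=None, exact=None):
    """Return a one-line status string for the named package."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return f"{name}: NOT INSTALLED"
    v = getattr(mod, "__version__", "unknown")
    ok = True
    if minimum is not None:
        ok = version_tuple(v) >= version_tuple(minimum)
    if exact is not None:
        # Compare major.minor only, so '0.5.1' still matches '0.5'.
        ok = version_tuple(v)[:2] == version_tuple(exact)[:2]
    return f"{name} {v}: {'OK' if ok else 'version mismatch'}"


if __name__ == "__main__":
    print(check("torch", minimum="1.5"))
    print(check("torchvision"))
    print(check("detectron2", exact="0.5"))
```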
To install the required dependencies in a Python virtual environment (e.g., venv for Python 3), run the following commands at the root of this repository:
$ python3 -m venv /path/to/new/virtual/environment
$ source /path/to/new/virtual/environment/bin/activate
For example:
$ mkdir python_env
$ python3 -m venv python_env/
$ source python_env/bin/activate
Follow the INSTALL.md to install Detectron2.
- Download the datasets
- Organize the datasets in the COCO annotation format
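As a reference for the second step, the sketch below builds a minimal COCO-style annotation file with the three required top-level keys. The file name, image entries, and category name are illustrative placeholders, not the actual dataset contents; the real class list depends on the dataset release.

```python
# Sketch: minimal COCO-format annotation file. All names and values here
# are illustrative assumptions, not the released FUSH/CardiacUDA labels.
import json

coco = {
    "images": [
        {"id": 1, "file_name": "center1_0001.png", "width": 800, "height": 600},
    ],
    "annotations": [
        # COCO bbox convention: [x, y, width, height] in pixels.
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [120, 80, 200, 150], "area": 200 * 150, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "left_ventricle"},  # illustrative category name
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```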
- Train the AATS under Center 1 of Heart (source) and Center 2 of Heart (target) on FUSH dataset
python train_net.py \
--num-gpus 1 \
--config configs/sfda_at_rcnn_vgg_fetus_4c_1to2.yaml \
OUTPUT_DIR output/AATS_4c_1to2
- Train the AATS under Center 2 of Heart (source) and Center 1 of Heart (target) on FUSH dataset
python train_net.py \
--num-gpus 1 \
--config configs/sfda_at_rcnn_vgg_fetus_4c_2to1.yaml \
OUTPUT_DIR output/AATS_4c_2to1
- Resume training from a saved checkpoint
python train_net.py \
--resume \
--num-gpus 1 \
--config configs/sfda_at_rcnn_vgg_fetus_4c_1to2.yaml MODEL.WEIGHTS <your weight>.pth
- Evaluate a trained model
python train_net.py \
--eval-only \
--num-gpus 1 \
--config configs/sfda_test.yaml \
MODEL.WEIGHTS <your weight>.pth
We will release the VGG pre-trained weights and trained model weights soon.