This is the implementation of the paper: A Deeply Attentive High-Resolution Network for Change Detection in Remote Sensing Images.
Here, we provide the PyTorch implementation of DAHRNet.
Our code is inspired by STANet, and we have added some new features for change detection.
- Windows or Linux
- Python 3.6+
- CPU or NVIDIA GPU
- CUDA 9.0+
- PyTorch > 1.0
- visdom
We recommend conducting your experiments in Docker; deepo is an open framework to assemble specialized Docker images for deep learning research without pain. Clone this repo:
git clone https://github.com/Githubwujinming/DAHRNet.git
cd DAHRNet
pip install -r requirements.txt
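Before training, it can help to verify that the main dependencies listed above are importable and that a GPU is visible to PyTorch. The following snippet is only a quick sanity check of those requirements; it is not part of the repository.

```python
# Quick sanity check of the requirements listed above (not part of the repo).
import torch
import visdom  # imported only to confirm it is installed

print("PyTorch version:", torch.__version__)          # expect > 1.0
print("CUDA available:", torch.cuda.is_available())   # expect True for GPU training
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
```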
You can download the LEVIR-CD dataset at https://justchenhao.github.io/LEVIR/.
The directory structure of the downloaded folder is as follows:
path to LEVIR-CD:
├─train
│ ├─A
│ ├─B
│ ├─label
├─val
│ ├─A
│ ├─B
│ ├─label
├─test
│ ├─A
│ ├─B
│ ├─label
where A contains the pre-change images, B contains the post-change images, and label contains the label maps.
The original images in LEVIR-CD have a size of 1024 × 1024, which consumes too much memory during training. Therefore, we cut the original images into smaller patches (e.g., 256 × 256 or 512 × 512). In our paper, we cut the original images into non-overlapping patches of size 256 × 256.
Make sure that the corresponding patch samples in the A, B, and label subfolders have the same name.
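The 256 × 256 patch size and the non-overlapping layout come from the paper; everything else in the sketch below (the use of Pillow, the input/output paths, and the patch naming scheme) is an illustrative assumption rather than code shipped with this repo.

```python
# Hedged sketch: cut 1024x1024 LEVIR-CD tiles into non-overlapping 256x256 patches.
# The folder layout follows the tree above; paths and patch names are illustrative.
import os
from PIL import Image

PATCH = 256  # patch size used in the paper

def cut_split(split_dir, out_dir):
    for sub in ("A", "B", "label"):
        os.makedirs(os.path.join(out_dir, sub), exist_ok=True)
        for fname in sorted(os.listdir(os.path.join(split_dir, sub))):
            img = Image.open(os.path.join(split_dir, sub, fname))
            w, h = img.size  # 1024 x 1024 for LEVIR-CD
            stem, ext = os.path.splitext(fname)
            for top in range(0, h, PATCH):
                for left in range(0, w, PATCH):
                    patch = img.crop((left, top, left + PATCH, top + PATCH))
                    # identical names across A, B, and label keep each triplet aligned
                    patch.save(os.path.join(out_dir, sub, f"{stem}_{top}_{left}{ext}"))

# cut_split("path-to-LEVIR-CD/train", "path-to-LEVIR-CD-256/train")
```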
The processed and original datasets can be downloaded from the table below. We recommend downloading the processed ones directly to get a quick start with our code:
Datasets | Processed Links | Original Links |
---|---|---|
Season-varying Dataset [1] | [Baidu Drive] | [Original] |
LEVIR-CD Dataset [2] | | [Original] |
Google Dataset [3] | | [Original] |
Zhange Dataset [4] | | [Original] |
WHU-CD Dataset [5] | | [Original] |
SYSU-CD Dataset [6] | | [Original] |
To view training results and loss plots, start the visdom server with the command below and open http://localhost:8097 in your browser (if your training script sets a different --display_port, connect to that port instead).
python -m visdom.server
# dataset splits (SYSU-CD used as an example)
dataroot="../datasets/cd_dataset/SYSU-CD/train/"
val_dataroot="../datasets/cd_dataset/SYSU-CD/val/"
test_dataroot="../datasets/cd_dataset/SYSU-CD/test/"
# training hyper-parameters and options
lr=0.0001
model=DAHRN
batch_size=16
num_threads=4
save_epoch_freq=1
angle=20                # rotation angle used by the 'rotate' augmentation
gpu=1                   # id passed to --gpu_ids
port=8091               # visdom display port
arch='base'
encoder='hrnet18'
Block='block'
preprocess='blur_rotate_transpose_hsvshift_noise_flip'
name=SYSU_DAHRNet       # experiment name
criterion='hybrid_bcl'  # loss function
python ./train.py --epoch_count 1 -c 1 -r 'base' -d 4 -l $criterion --preprocess $preprocess --arch $arch --encoder $encoder --Block $Block --display_port $port --gpu_ids $gpu --num_threads $num_threads --save_epoch_freq $save_epoch_freq --angle $angle --test_dataroot ${test_dataroot} --dataroot ${dataroot} --val_dataroot ${val_dataroot} --name $name --lr $lr --model $model --batch_size $batch_size
cd DAHRNet
bash scripts/your_shell.sh
Model weights for Season-varying/LEVIR-CD/Google/SYSU datasets are available via Baidu Drive.
Datasets | F1 (%) |
---|---|
Season-varying | 96.94 |
LEVIR-CD | 92.11 |
Google | 89.97 |
SYSU | 83.57 |
Supported CD baselines: FC-EF, FC-Siam-conv, FC-Siam-diff, DSAMNet, SNUNet, FCCDN, ChangeFormer, DSIFNet. You can implement other models by following the comments in base_line.
dataroot="../datasets/cd_dataset/SYSU-CD/train/"
val_dataroot="../datasets/cd_dataset/SYSU-CD/val/"
test_dataroot="../datasets/cd_dataset/SYSU-CD/test/"
lr=0.0001
model=FCS
batch_size=16
num_threads=4
save_epoch_freq=1
angle=20
gpu=1
port=8089
name=SYSU_SNUN
arch='FCSC' # if necessary
preprocess='blur_rotate_transpose_hsvshift_noise_flip'
# train (--continue_train resumes from a saved checkpoint; for a fresh run, drop it and set --epoch_count 1)
python ./train.py --continue_train --epoch_count 89 --display_port $port --gpu_ids $gpu --arch $arch --num_threads $num_threads --save_epoch_freq $save_epoch_freq --angle $angle --dataroot ${dataroot} --test_dataroot ${test_dataroot} --val_dataroot ${val_dataroot} --name $name --lr $lr --model $model --batch_size $batch_size --load_size 256 --crop_size 256 --preprocess $preprocess
You can edit the file val.py; the CD algorithm will save five images for each image pair, for example:
# this block sits at the bottom of val.py; TestOptions, make_val_opt, val, and numpy (np) are imported at the top of the file
if __name__ == '__main__':
    opt = TestOptions().parse()  # get test options
    opt = make_val_opt(opt)
    opt.phase = 'val'
    opt.dataroot = 'path-to-LEVIR-CD-test'  # data root
    opt.dataset_mode = 'changedetection'
    opt.n_class = 2
    opt.arch = 'FCEF'
    opt.model = 'FCS'  # model type
    opt.name = 'LEVIR_FCS'  # project name
    opt.results_dir = 'LEVIR_FCS/'  # where the predicted images are saved
    opt.epoch = 'best-epoch-in-val'  # which epoch to test
    opt.num_test = np.inf
    val(opt)
In the comparison image, white denotes true positives, black true negatives, red false positives, and blue false negatives; a minimal sketch of this coloring follows the table below.
T1 | T2 | label | pred | comparison |
---|---|---|---|---|
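The color convention above can be reproduced from a binary prediction and a binary label map. The sketch below only illustrates that convention; the function name and the use of NumPy arrays are assumptions, not code from this repo.

```python
# Hedged sketch: color-code a binary prediction against a binary label map
# (white = TP, black = TN, red = FP, blue = FN), matching the convention above.
import numpy as np

def comparison_map(pred, label):
    """pred, label: HxW arrays with values in {0, 1}; returns an HxWx3 uint8 image."""
    out = np.zeros((*pred.shape, 3), dtype=np.uint8)   # TN pixels stay black
    out[(pred == 1) & (label == 1)] = (255, 255, 255)  # TP -> white
    out[(pred == 1) & (label == 0)] = (255, 0, 0)      # FP -> red
    out[(pred == 0) & (label == 1)] = (0, 0, 255)      # FN -> blue
    return out
```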
This grad_cam module is modified from pytorch-grad-cam for change detection; see the examples in the script util/grad_vis.py.
'''
Supported grad_cam methods: GradCAM, HiResCAM, GradCAMElementWise, GradCAMPlusPlus, XGradCAM, EigenCAM, EigenGradCAM, LayerCAM.
'''
model = Net()
saved_path = 'saved_model.pth'
model = load_by_path(model, saved_path)  # load your trained weights
model = CDModelOutputWrapper(model)  # you can customize this wrapper for your own model
# target_layer = [model.model.cube.cubes[0].stageY.branches[0]]
target_layer = [model.model.cbuilder.decoder1]  # the target layer you want to visualize
# specify whether 'target_layer' belongs to the 'seg' branch or the 'cd' branch
cam = CDGradCamUtil(model, target_layer, gradcam='GradCAM', use_cuda=True, branch='seg')
img1 = Image.open('.png')
img2 = Image.open('.png')
grad_gray = cam(img1, img2)
cam.cd_save_cam()  # save the heatmaps in the './grad_imgs' directory
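To attribute the change-detection branch instead of the segmentation branch, the same utility can be called with branch='cd'; the snippet below is a minimal variation of the example above and reuses its (illustrative) target layer.

```python
# Same setup as above, but visualizing the change-detection ('cd') branch.
cd_cam = CDGradCamUtil(model, target_layer, gradcam='GradCAM', use_cuda=True, branch='cd')
cd_gray = cd_cam(img1, img2)
cd_cam.cd_save_cam()  # heatmaps are again written to './grad_imgs'
```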
T1 | T2 | label | seg_GradCAM_heatmap1 | seg_GradCAM_heatmap2 | cd_GradCAM_heatmap |
---|---|---|---|---|---|