We provide PyTorch implementations for our paper "Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation".
- Linux or similar environment
- Python 2.7
- NVIDIA GPU + CUDA CuDNN
- Clone this repo:
git clone https://github.com/jehovahxu/chan.git
cd chan
- Install PyTorch 0.4+.
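A minimal install sketch, assuming a pip-based environment; the repo does not pin exact versions, and torch 0.4.1 with torchvision 0.2.1 is just one PyTorch 0.4+ pairing:
# assumption: pip-based install; versions are illustrative, not pinned by this repo
pip install torch==0.4.1 torchvision==0.2.1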
- Download a dataset
- CUFS: split into train and test sets with these files.
- Cityscapes, Facades, Google Maps, Edges2Shoes, Edges2Handbags: these can be downloaded with the Pix2Pix bash script (see the example after this list).
- Paris StreetView: you can contact Deepak Pathak to get the dataset.
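For example, the Facades set can be fetched with the download script shipped in the pytorch-CycleGAN-and-pix2pix repository (an assumption about that repo's layout; run this from a local clone of it):
# other dataset names accepted by the script include maps, cityscapes, edges2shoes, edges2handbags
bash ./datasets/download_pix2pix_dataset.sh facades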
- We train our model with fine-tuning, e.g. on CUFS.
- First, train Pix2Pix to obtain a coarse model, or download a model pre-trained with Pix2Pix here.
- Train a model:
python train.py --dataroot {dataset path} --datalist {datalist path} --pre_netG {coarse model path} --gpuid {your gpu ids}
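For instance, a CUFS run might look like the following; the dataset root, list file, and coarse-model name below are hypothetical placeholders, not files shipped with this repo:
# hypothetical paths; substitute your own dataset root, list file, and coarse Pix2Pix model
python train.py --dataroot ./datasets/cufs --datalist ./datasets/cufs/train_list.txt --pre_netG ./checkpoint/pix2pix_coarse_netG.pth --gpuid 0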
- Test a model:
python test.py --dataroot {dataset path} --datalist {datalist path} --pre_netG {final model path} --gpuid {your gpu ids}
A face photo-to-sketch model pre-trained on CUFS: Google Drive
The pre-trained model needs to be saved at ./checkpoint.
Then you can test the model, for example:
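A hedged sketch of that workflow; the file names below are illustrative, and the actual checkpoint name comes from the Google Drive download:
mkdir -p ./checkpoint
# assumption: the downloaded generator weights are named netG.pth
cp /path/to/downloaded/netG.pth ./checkpoint/
python test.py --dataroot ./datasets/cufs --datalist ./datasets/cufs/test_list.txt --pre_netG ./checkpoint/netG.pth --gpuid 0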
Our final results can be downloaded here.
Our quantitative performance on a variety of image-to-image translation tasks. The total score achieved by each model on each dataset is reported in the score column.
Best practices for training and testing your models.
Feel free to ask any questions about the code. Xingxin Xu, [email protected]
If you find this useful for your research, please cite our paper as:
@article{gao2020ca-gan,
  title  = {Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation},
  author = {},
  year   = {2020},
  url    = {https://github.com/fei-hdu},
}
Our code is inspired by pytorch-CycleGAN-and-pix2pix