chan

Coarse-to-Fine Image-to-Image Translation via Attentively Collaborative

We provide PyTorch implementations for our paper "Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation".

Our Proposed Framework

Generator

Discriminator

Sample Result

Prerequisites

  • Linux or similar environment
  • Python 2.7
  • NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Clone this repo:

    git clone https://github.com/jehovahxu/chan.git
    cd chan

  • Install PyTorch 0.4+
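After installation, it can help to confirm that PyTorch is importable and that a CUDA GPU is visible (a minimal sanity-check sketch; any PyTorch 0.4+ environment should run it):

```python
import torch

# Print the installed PyTorch version and whether a CUDA GPU is visible.
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```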

Quick Start (Apply a Pre-trained Model)

  • Download a dataset
    • CUFS: split into train and test sets with these files
    • Cityscapes, Facades, Google Maps, Edge2Shoes, Edge2Handbags: you can download these with the Pix2Pix bash scripts
    • Paris Street View: you can contact Deepak Pathak to obtain the dataset
  • We use fine-tuning to train our model, e.g. on CUFS:
    • First, train with Pix2Pix to obtain a coarse model, or download a model pre-trained with Pix2Pix here
    • Train a model:
      python train.py --dataroot {dataset path} --datalist {datalist path} --pre_netG {coarse model path} --gpuid {your gpu ids}
  • Test:
      python test.py --dataroot {dataset path} --datalist {datalist path} --pre_netG {final model path} --gpuid {your gpu ids}
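The fine-tuning idea behind `--pre_netG` can be sketched as follows. This is an illustrative stand-in, not the repo's actual training loop: the tiny generator and the in-memory "coarse weights" below are hypothetical placeholders for the real netG and a `torch.load`-ed Pix2Pix checkpoint.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in generator; the real netG lives in this repo's code.
netG = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))

# Fine-tuning: start from coarse (Pix2Pix) weights instead of random init.
# This dict stands in for torch.load({coarse model path}).
coarse_state = {k: v.clone() for k, v in netG.state_dict().items()}
netG.load_state_dict(coarse_state)

# Then continue training, here for one illustrative step on random tensors.
opt = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))
x, y = torch.rand(1, 3, 16, 16), torch.rand(1, 3, 16, 16)
loss = nn.functional.l1_loss(netG(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```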

Apply a pre-trained model

A face photo-to-sketch model pre-trained on CUFS: Google Drive

The pre-trained model needs to be saved at ./checkpoint.

Then you can test the model.
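The checkpoint-loading step can be sketched as follows. This is a minimal, self-contained illustration: the tiny stand-in network and the file name `netG_demo.pth` are hypothetical, not the repo's actual netG or checkpoint layout.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the generator defined in this repo's code.
netG = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

# A checkpoint is a saved state_dict; save one here so the example is runnable.
torch.save(netG.state_dict(), "./netG_demo.pth")

# Loading mirrors what --pre_netG does: restore weights, then switch to eval().
netG.load_state_dict(torch.load("./netG_demo.pth", map_location="cpu"))
netG.eval()
```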

Results

Our final results can be downloaded here.

Our quantitative performance on a variety of image-to-image translation tasks. The total score achieved by each model on each dataset is reported in the Score column.

Training/Test Tips

Best practices for training and testing your models.

Feel free to ask any questions about the code: Xingxin Xu, [email protected]

Citation

If you find this useful for your research, please cite our paper as:

@article{gao2020ca-gan,
	title = {Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation},
	author = {},
	year = {2020},
	url = {https://github.com/fei-hdu},
}

Acknowledgments

Our code is inspired by pytorch-CycleGAN-and-pix2pix.
