
cc-hpc-itwm/Stabilizing-GANs-with-Octave-Convolutions


This repository provides the official PyTorch implementation of Stabilizing GANs with Octave Convolutions.

Dependencies

Tested on Python 3.6.x.

CelebA dataset

The full CelebA dataset is available here. To resize the RGB images to 128 by 128 pixels, set the dataset path and run resize_celeba.py.
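A minimal sketch of that resizing step (the folder names below are illustrative assumptions; the actual paths are set inside resize_celeba.py):

import os
from PIL import Image

SRC_DIR = 'data/celeba/img_align_celeba'       # raw CelebA images (assumed location)
DST_DIR = 'data/celeba/img_align_celeba_128'   # output folder for 128x128 images

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    img = Image.open(os.path.join(SRC_DIR, name)).convert('RGB')
    img = img.resize((128, 128), Image.BILINEAR)
    img.save(os.path.join(DST_DIR, name))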

Training

To train a model, run the shell script for the selected model (e.g. sh gan.sh, sh wgan.sh, or sh lsgan.sh) with the appropriate hyper-parameters.

Example hyper-parameters definition (wgan.sh)

python train.py --type wgan \
           --nb-epochs 50 \
           --learning-rate 0.00005 \
           --optimizer rmsprop \
           --critic 5 \
           --cuda
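
For reference, these flags correspond to the usual WGAN training scheme: the critic is updated --critic times per generator update using RMSprop at the given learning rate, with its weights clipped after every step. A minimal, self-contained sketch of that loop (the tiny placeholder networks, nz, and the clipping value 0.01 are illustrative assumptions, not this repository's actual models or defaults):

import torch
import torch.nn as nn

# Placeholder networks and data, only to make the sketch runnable;
# the real DCGAN/octave architectures live in this repository's model code.
nz = 100
netG = nn.Sequential(nn.ConvTranspose2d(nz, 3, 64))       # z -> 64x64 "image"
netD = nn.Sequential(nn.Conv2d(3, 1, 64), nn.Flatten())   # image -> scalar critic score
dataloader = [(torch.randn(8, 3, 64, 64), None)]          # one stand-in batch

n_critic, lr, clip = 5, 5e-5, 0.01   # matches --critic 5 and --learning-rate 0.00005
opt_d = torch.optim.RMSprop(netD.parameters(), lr=lr)
opt_g = torch.optim.RMSprop(netG.parameters(), lr=lr)

for real, _ in dataloader:
    # 1) critic: maximize D(real) - D(fake); clip weights after each update
    for _ in range(n_critic):
        noise = torch.randn(real.size(0), nz, 1, 1)
        loss_d = -(netD(real).mean() - netD(netG(noise).detach()).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in netD.parameters():
            p.data.clamp_(-clip, clip)
    # 2) generator: maximize D(G(z))
    noise = torch.randn(real.size(0), nz, 1, 1)
    loss_g = -netD(netG(noise)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()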

Comparison between DCGAN with and without Octave Convolution

DCGAN with Octave Conv.

References

This repository combines PyTorch implementations of the following papers:

Goodfellow et al. Generative Adversarial Nets.

Radford et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.

Mao et al. Multi-class Generative Adversarial Networks with the L2 Loss Function.

Arjovsky et al. Wasserstein Generative Adversarial Networks.

Chen et al. Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution.

Octave Convolution

The code heavily borrows from the following PyTorch repositories:

Octave Convolution implementation.

GANs implementations.
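
For orientation, below is a simplified sketch of what an intermediate octave convolution layer computes, following Chen et al.: feature maps are split into a high-frequency part at full resolution and a low-frequency part at half resolution, with four convolution paths between them. This is a generic illustration, not the borrowed implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv2d(nn.Module):
    """Octave convolution for intermediate layers; alpha is the fraction
    of channels assigned to the half-resolution (low-frequency) branch."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5, padding=1):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        self.h2h = nn.Conv2d(hi_in, hi_out, kernel_size, padding=padding)
        self.h2l = nn.Conv2d(hi_in, lo_out, kernel_size, padding=padding)
        self.l2h = nn.Conv2d(lo_in, hi_out, kernel_size, padding=padding)
        self.l2l = nn.Conv2d(lo_in, lo_out, kernel_size, padding=padding)

    def forward(self, x_h, x_l):
        # high->high and low->low stay at their own resolution;
        # high->low is average-pooled first, low->high is upsampled after.
        y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l), scale_factor=2, mode='nearest')
        y_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, 2))
        return y_h, y_l

# example: 64 -> 64 channels on a 32x32 map, split into 32 high / 32 low channels
oct_conv = OctConv2d(64, 64, alpha=0.5)
x_h, x_l = torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16)
y_h, y_l = oct_conv(x_h, x_l)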

Citation

If this work is useful for your research, please cite our paper:

@article{durall2019dropgan,
  title={Stabilizing GANs with Octave Convolutions},
  author={Durall, Ricard and Pfreundt, Franz-Josef and Keuper, Janis},
  journal={arXiv preprint arXiv:1905.12534},
  year={2019}
}
