Loss-Sensitive GAN

Framework: PyTorch 0.2.0.post1 or above

Language: Python 2.7 or above

Implemented by: Zihang Zou, Laboratory for MAchine Perception and Learning (MAPLE), University of Central Florida

Please cite the following paper when referring to these algorithms:

Guo-Jun Qi. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. arXiv:1701.06264.

Reference

LS-GAN

Code: https://github.com/guojunq/lsgan

Paper: https://arxiv.org/pdf/1701.06264

This implementation covers the algorithms proposed in the Loss-Sensitive GAN (LS-GAN) paper, including the direct gradient penalty and the generalized loss-sensitive GAN (GLS-GAN). LS-GAN regularizes the GAN on Lipschitz densities through a margin and a well-defined discriminator output. It abandons the binary cross-entropy term of the original GAN and instead assumes that a real example should have a smaller loss than a generated sample. The discriminator and generator losses for GLS-GAN are:

D_loss = LeakyReLU(D(x) - D(G(z)) + lambda * delta(x, G(z))).mean()

G_loss = D(G(z)).mean()

It is worth noting that we use LeakyReLU as the cost function for the generalized LS-GAN. LeakyReLU is only a special case of the (a)+ function from the original proof; any other (a)+ function also works under the generalized theorem of LS-GAN.
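As a concrete illustration, below is a minimal PyTorch sketch of the two objectives; the helper names (netD, delta) and the per-sample L1 margin are assumptions for illustration, not necessarily what lsgan-gp.py does.

import torch.nn.functional as F

# Per-sample L1 margin between a real and a generated image (assumed form)
def delta(x, y):
    return (x - y).abs().view(x.size(0), -1).mean(1)

# LeakyReLU(D(x) - D(G(z)) + lambda * delta(x, G(z))).mean()
def d_loss(netD, x, fake, lam=1.0, slope=0.0):
    a = netD(x).view(-1) - netD(fake).view(-1) + lam * delta(x, fake)
    return F.leaky_relu(a, negative_slope=slope).mean()

# D(G(z)).mean()
def g_loss(netD, fake):
    return netD(fake).mean()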

The gradient penalty applies the form originally proposed in the first version of the LS-GAN paper, Chapter 5, quoted here: "Alternatively, one may consider to directly minimize the gradient norm ||∇xLθ(x)|| as a regularizer for the LS-GAN. In this paper, we adopt weight decay for its simplicity and find it works well with the LS-GAN model in experiments."

Usage

1. PyTorch version

1. In this implementation, we use the following version of PyTorch (any later version should also work):

$ pip list | grep torch
torch (0.2.0.post1)
torchvision (0.1.8)

We use torch.autograd.grad() to compute the gradient penalty.
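Below is a minimal sketch of such a direct gradient-norm penalty, written in current PyTorch style (on 0.2 you would wrap x in a Variable with requires_grad=True); netD is assumed to output a per-sample loss value.

import torch

# Direct penalty on ||∇x L_theta(x)||, computed with torch.autograd.grad()
def gradient_norm_penalty(netD, x):
    x = x.detach().requires_grad_(True)
    out = netD(x)
    grads = torch.autograd.grad(outputs=out.sum(), inputs=x,
                                create_graph=True)[0]
    # Mean squared L2 norm of the per-sample input gradients
    return grads.view(grads.size(0), -1).pow(2).sum(1).mean()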

2. Download dataset

1. Set up and download the celebA dataset.

Download img_align_celeba.zip from http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html under the link "Align&Cropped Images".

2. Crop the faces using the face detector:

$ python ./Data/face_detect.py

Note: For datasets that are not supported by PyTorch, you can point the code at your own image folder with the parameter --dataset folder. Be sure to place the images in a sub-folder under the main image folder, for example celebA_crop/64_crop/ (see the sketch below).
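The sub-folder is needed because torchvision's ImageFolder treats each sub-directory as one class of images. A rough sketch of how such a folder dataset is typically loaded (the transform choices here are assumptions):

import torchvision.datasets as dset
import torchvision.transforms as transforms

# ImageFolder expects class sub-directories, hence celebA_crop/64_crop/
dataset = dset.ImageFolder(
    root='celebA_crop',
    transform=transforms.Compose([
        transforms.CenterCrop(64),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]))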

3. Train LS-GAN

The default slope is 0, which corresponds to LS-GAN:

$ python lsgan-gp.py --dataset folder --dataroot celebA_crop --cuda --niter 25

If the slope is set to 1, it corresponds to WGAN:

$ python lsgan-gp.py --dataset folder --dataroot celebA_crop --cuda --niter 25 --slope 1

Or you can explore other slopes as GLS-GAN, for example:

$ python lsgan-gp.py --dataset folder --dataroot celebA_crop --cuda --niter 25 --slope 0.01
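The --slope flag selects the cost function applied to the discriminator margin. Assuming it sets the LeakyReLU negative slope in D_loss above, the spectrum looks like this:

import torch.nn.functional as F

# Assumed mapping from --slope to the GLS-GAN cost C(a):
#   slope = 0     -> C(a) = max(a, 0) (ReLU): the LS-GAN hinge
#   slope = 1     -> C(a) = a (identity): the WGAN-style objective
#   0 < slope < 1 -> intermediate GLS-GAN variants, e.g. 0.01
def cost(a, slope=0.0):
    return F.leaky_relu(a, negative_slope=slope)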

Results

We save the generated images to the samples folder using the torchvision.utils.save_image function. You should get the following results after running the code.
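A typical call looks like the following sketch (the filename pattern and variable names are illustrative):

import torchvision.utils as vutils

# Save a grid of generated samples; 'fake' is a batch from the generator
vutils.save_image(fake.data, 'samples/fake_epoch_%03d.png' % epoch,
                  normalize=True)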

LS-GAN converges faster! You can start getting recognizable results after half an epoch.

Generated sample grids after half an epoch, and after 1, 2, 3, 5, 10, 15, 20, and 25 epochs.
