A DCGAN that generates anime faces from a custom dataset, implemented in Keras. The dataset is created by crawling anime database websites using curl. The script anime_dataset_gen.py crawls the images and processes them into 64x64 PNG images with only the faces cropped.
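The crawler itself is in anime_dataset_gen.py and not reproduced here; as a rough sketch of its final processing step (the crop and 64x64 resize), assuming a face bounding box has already been found by some detector, the function name and crop-box argument below are hypothetical:

```python
from PIL import Image

def process_face(src_path, dst_path, crop_box):
    """Crop a face region out of an image and save it as a 64x64 PNG.

    crop_box is a (left, upper, right, lower) pixel tuple, e.g. produced
    by a face detector; here it is just a placeholder argument.
    """
    img = Image.open(src_path).convert("RGB")
    face = img.crop(crop_box)
    # LANCZOS gives a reasonably high-quality downscale to 64x64
    face = face.resize((64, 64), Image.LANCZOS)
    face.save(dst_path, format="PNG")
```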
This implementation of GAN uses deconv (transposed convolution) layers in Keras (the networks are initialized in the GAN_Nets.py file). I have tried various other combinations of layers, such as:
- Conv + Upsampling
- Conv + bilinear
- Conv + Subpixel Upscaling
But none of these combinations yielded decent results: either the GAN failed to generate images that resemble faces, or it generated the same or very similar-looking faces for all batches (generator collapse). These were just my results; techniques such as mini-batch discrimination or z-layers could perhaps be used to get better ones.
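For context on the "Conv + Upsampling" variant: Keras's UpSampling2D performs nearest-neighbour upsampling, which a convolution then smooths, in contrast to a single learned deconv (Conv2DTranspose). A minimal numpy sketch of that upsampling step:

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbour upsampling of an (H, W, C) array,
    equivalent to Keras UpSampling2D with the default interpolation.

    Each pixel is repeated `factor` times along both spatial axes.
    """
    x = np.repeat(x, factor, axis=0)  # repeat rows
    x = np.repeat(x, factor, axis=1)  # repeat columns
    return x
```

In a Conv + Upsampling generator block this would be followed by a regular convolution, which avoids the checkerboard artifacts deconv layers can produce (see the distill.pub link below).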
Only simple GAN training methods are used. Training is done on about 22,000 images. Images are not loaded entirely into memory; instead, each time a batch is sampled, only the sampled images are loaded. An overview of what happens in each step:
- Sample images from the dataset (real data)
- Generate images using the generator (Gaussian noise as input) (fake data)
- Add noise to the labels of real and fake data
- Train the discriminator on real data
- Train the discriminator on fake data
- Train the GAN on fake images with real-data labels
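The label-noise and noise-input steps above can be sketched as below; the exact noise range (0.0-0.1) and one-sided form are my assumptions, not necessarily the scheme the training script uses:

```python
import numpy as np

def noisy_labels(batch_size, real, rng=np.random.default_rng()):
    """Soft, noisy labels for GAN training.

    Real images get labels in [0.9, 1.0], fakes in [0.0, 0.1];
    this keeps the discriminator from becoming overconfident.
    """
    noise = rng.uniform(0.0, 0.1, size=(batch_size, 1))
    return 1.0 - noise if real else noise

def sample_noise(batch_size, z_dim=100, rng=np.random.default_rng()):
    """Gaussian latent vectors fed to the generator
    (z_dim=100 is a typical DCGAN choice, assumed here)."""
    return rng.standard_normal((batch_size, z_dim))
```

In the final step, the generator is trained through the combined GAN model on `sample_noise(...)` inputs paired with real-data labels, so its gradients push generated images toward being classified as real.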
Training is done for 10,000 steps. On my setup (GTX 660; i5 4670) each step takes 10-11 seconds, so a full run takes roughly 28-30 hours. The faces look pretty good IMO; they might look more like actual faces with more training, more data, and probably a better network.
References:
https://github.com/tdrussell/IllustrationGAN
https://github.com/jayleicn/animeGAN
https://github.com/forcecore/Keras-GAN-Animeface-Character
https://distill.pub/2016/deconv-checkerboard/
https://kivantium.net/keras-bilinear