PyTorch implementation of the NeurIPS 2018 paper "IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis".
This repo contains a basic implementation of IntroVAE. Since no official implementation has been released, some hyperparameters had to be guessed, and the results do not reach the performance reported in the paper. Issues and pull requests are welcome.
This work is based on dragen1860/IntroVAE-Pytorch.
- Download the FFHQ thumbnails128x128 subset; the CelebA dataset may also work, but it has not been tested.
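Before training, it can be worth a quick sanity check that the extracted folder actually contains the 128x128 PNG thumbnails. The snippet below is only an illustrative check; the path is a placeholder and main.py may expect a slightly different layout.

```python
# Illustrative sanity check: count the PNGs and confirm their resolution.
# The path is a placeholder; adjust it to where the dataset was extracted.
from pathlib import Path
from PIL import Image

root = Path("/path/to/FFHQ/thumbnails128x128")
pngs = sorted(root.rglob("*.png"))
print(f"found {len(pngs)} images")   # the full thumbnails subset has 70,000
if pngs:
    print(Image.open(pngs[0]).size)  # expected: (128, 128)
```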
- Start a Visdom server (python3 -m visdom.server) and run
python3 main.py --name FFHQt --root /path/to/FFHQ/thumbnails128x128 --batchsz 400
to train from scratch. Interrupt the training once the image quality stops improving.
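If training starts but no curves show up, one simple check, assuming the default Visdom host and port (localhost:8097), is whether a client can reach the server at all:

```python
# Check that a Visdom server is reachable on the default port (8097)
# before launching training; the training script presumably logs to it.
import visdom

vis = visdom.Visdom()  # defaults to http://localhost:8097
print("connected:", vis.check_connection())
```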
- Interpolate in latent space between two images:
python3 eval.py --load FFHQt/ckpt/vae_0000060000.mdl --input /path/to/FFHQ/thumbnails128x128/12345.png /path/to/FFHQ/thumbnails128x128/23456.png --output interp.png --n_interp 5
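Presumably eval.py encodes both input images, linearly interpolates between their latent codes, and decodes each intermediate point. Below is a minimal sketch of that idea, using hypothetical encoder/decoder modules rather than the exact classes in this repo, and assuming the encoder returns (mu, logvar):

```python
# Sketch of latent-space interpolation between two images.
# `encoder` and `decoder` are illustrative stand-ins, not this repo's classes.
import torch

def interpolate(encoder, decoder, img_a, img_b, n_interp=5):
    with torch.no_grad():
        mu_a, _ = encoder(img_a.unsqueeze(0))  # use the posterior mean as the latent code
        mu_b, _ = encoder(img_b.unsqueeze(0))
        steps = torch.linspace(0.0, 1.0, n_interp)
        zs = [(1 - t) * mu_a + t * mu_b for t in steps]
        return torch.cat([decoder(z) for z in zs], dim=0)  # (n_interp, C, H, W)
```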
Tested on FFHQ 128x128 thumbnails with a GTX 1080 Ti GPU and PyTorch 1.1.
- Training process.
- Original, reconstructed and sampled images (two rows each).
- Interpolation in latent space.