IntroVAE-Pytorch

Unofficial PyTorch implementation of the NeurIPS 2018 paper IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis.

This repo contains a basic implementation of IntroVAE. Because no official implementation has been released, some hyperparameters had to be guessed, and the results do not reach the performance reported in the paper. Issues and pull requests are welcome.

This work is based on dragen1860/IntroVAE-Pytorch.

Training

  1. Download the FFHQ thumbnails128x128 subset; the CelebA dataset should also work (but is untested).

  2. Start a Visdom server, then run

python3 main.py --name FFHQt --root /path/to/FFHQ/thumbnails128x128 --batchsz 400

to train from scratch.

Interrupt training once the image quality stops improving.

  3. Interpolate in latent space:
python3 eval.py --load FFHQt/ckpt/vae_0000060000.mdl --input /path/to/FFHQ/thumbnails128x128/12345.png /path/to/FFHQ/thumbnails128x128/23456.png --output interp.png --n_interp 5
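Conceptually, the interpolation step encodes the two input images, linearly blends their latent codes, and decodes each blend back into an image. The sketch below shows only the latent blending, using NumPy for illustration; `interpolate_latents` is a hypothetical helper (not part of this repo), and whether eval.py counts the two endpoints among its frames is an assumption.

```python
import numpy as np

def interpolate_latents(z1, z2, n_interp=5):
    """Linearly interpolate between two latent codes.

    Returns n_interp + 2 codes: z1, the intermediate blends, and z2.
    (Hypothetical helper; the repo's eval.py may handle endpoints differently.)
    """
    alphas = np.linspace(0.0, 1.0, n_interp + 2)
    return [(1 - a) * z1 + a * z2 for a in alphas]

# Toy 2-D latents for illustration; IntroVAE's latent space is much larger.
z1 = np.array([0.0, 0.0])
z2 = np.array([1.0, 2.0])
codes = interpolate_latents(z1, z2, n_interp=3)
# In the real pipeline, each code would be passed through the decoder
# to produce one frame of the interpolation strip.
```

Each blended code lies on the straight line between the two endpoint codes, which is why the decoded strip morphs smoothly from one face to the other.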

Results

Tested on FFHQ 128x128 thumbnails with a GTX 1080 Ti GPU and PyTorch 1.1.

  • Training process.

  • Original, reconstructed and sampled images (two rows each).

  • Interpolation in latent space.
