LynnHo/EigenGAN-Tensorflow

Sample traversals along unsupervisedly learned dimensions: Gender, Bangs, Body Side, Pose (Yaw), Lighting, Smile, Face Shape, Lipstick Color, Painting Style, Pose (Yaw), Pose (Pitch), Zoom & Rotate, Flush & Eye Color, Mouth Shape, Hair Color, Hue (Orange-Blue).

more unsupervisedly learned dimensions

EigenGAN: Layer-Wise Eigen-Learning for GANs
Zhenliang He1,2, Meina Kan1,2, Shiguang Shan1,2,3
1Key Lab of Intelligent Information Processing, Institute of Computing Technology, CAS, China
2University of Chinese Academy of Sciences, China
3Peng Cheng Laboratory, China

Schema

Manifold Perspective
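
In a nutshell (paraphrasing the paper; this repository's code may organize it differently): every generator layer i owns a learnable linear subspace with orthonormal basis U_i, a diagonal "importance" matrix L_i = diag(l_i), and an origin mu_i. A per-layer latent code z_i picks a point phi_i = U_i L_i z_i + mu_i in that subspace, which is injected into the layer's feature map, while a regularizer ||U_i^T U_i - I|| keeps the basis orthonormal, so each coordinate of z_i ends up controlling one interpretable dimension such as those shown above. A minimal NumPy sketch of this forward step, with illustrative variable names only:

import numpy as np

def subspace_layer(h, U, l, mu, z):
    """Inject a point from a learned linear subspace into a feature map.

    h  : (H, W, C)   feature map of generator layer i
    U  : (H*W*C, q)  basis vectors, regularized to stay orthonormal
    l  : (q,)        importance of each basis vector (diagonal of L)
    mu : (H*W*C,)    subspace origin
    z  : (q,)        per-layer latent code sampled from N(0, I)
    """
    phi = U @ (l * z) + mu            # point in the layer's subspace
    return h + phi.reshape(h.shape)   # add it to the feature map

def orthonormal_penalty(U):
    """Regularizer ||U^T U - I||_F^2 that keeps the basis orthonormal."""
    q = U.shape[1]
    return np.sum((U.T @ U - np.eye(q)) ** 2)

# toy example: an 8x8x64 feature map with 6 learned dimensions at this layer
H, W, C, q = 8, 8, 64, 6
rng = np.random.default_rng(0)
h = rng.standard_normal((H, W, C))
U = np.linalg.qr(rng.standard_normal((H * W * C, q)))[0]  # start orthonormal
l = rng.standard_normal(q)
mu = rng.standard_normal(H * W * C)
z = rng.standard_normal(q)
print(subspace_layer(h, U, l, mu, z).shape)   # (8, 8, 64)
print(orthonormal_penalty(U))                 # ~0 for an orthonormal basis

Traversing a single coordinate of z while keeping the others fixed moves the sample along one basis vector, which is what the attribute traversals above visualize.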

Usage

  • Environment

    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the environment with the commands below

      conda create -n EigenGAN python=3.6
      
      source activate EigenGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
    • NOTICE: if you create a new conda environment, remember to activate it before running any other command

      source activate EigenGAN
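    • optional sanity check (not part of the original instructions): confirm that TensorFlow 1.15 is installed and can see a GPU

      python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"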
  • Data Preparation

    • CelebA-unaligned (10.2GB, higher quality than the aligned data)

      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
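      • optional sanity check (not from the original instructions): count the aligned images; the directory is the one consumed by the training command below

        python -c "import glob; print(len(glob.glob('./data/img_celeba/aligned/align_size(572,572)_move(0.250,0.000)_face_factor(0.450)_jpg/data/*')))"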
    • Anime

      • download the dataset

        mkdir -p ./data/anime
        
        rsync --verbose --recursive rsync://176.9.41.242:873/biggan/portraits/ ./data/anime/original_imgs
      • process the data

        python ./scripts/remove_black_edge.py
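      • optional sanity check (not from the original instructions): load one processed image with OpenCV to confirm the pipeline ran

        python -c "import cv2, glob; p = sorted(glob.glob('./data/anime/remove_black_edge_imgs/*'))[0]; print(p, cv2.imread(p).shape)"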
  • Run (supports multi-GPU training)

    • training on CelebA

      CUDA_VISIBLE_DEVICES=0,1 \
      python train.py \
      --img_dir ./data/img_celeba/aligned/align_size(572,572)_move(0.250,0.000)_face_factor(0.450)_jpg/data \
      --experiment_name CelebA
    • training on Anime

      CUDA_VISIBLE_DEVICES=0,1 \
      python train.py \
      --img_dir ./data/anime/remove_black_edge_imgs \
      --experiment_name Anime
    • testing

      CUDA_VISIBLE_DEVICES=0 \
      python test_traversal_all_dims.py \
      --experiment_name CelebA
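    • each run writes its outputs under ./output/<experiment_name>/ (e.g., ./output/CelebA/); an optional way to list what a run produced (not part of the original instructions)

      python -c "import glob; print(sorted(glob.glob('./output/CelebA/*')))"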
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/CelebA/summaries \
      --port 6006
  • Using Trained Weights

    • trained weights (download and move them to ./output/ so that they appear as ./output/*.zip)

    • unzip the file (CelebA.zip for example)

      unzip ./output/CelebA.zip -d ./output/
    • testing (see above)
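    • optional: check that TensorFlow can locate the restored checkpoint (the checkpoints sub-directory name is an assumption about the unzipped layout; adjust it if your archive differs)

      python -c "import tensorflow as tf; print(tf.train.latest_checkpoint('./output/CelebA/checkpoints'))"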

Citation

If you find EigenGAN useful in your research work, please consider citing:

@inproceedings{he2021eigengan,
  title={EigenGAN: Layer-Wise Eigen-Learning for GANs},
  author={He, Zhenliang and Kan, Meina and Shan, Shiguang},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}
