- B.Sc
- data/ – Dataset examples
- evaluation/ – Colorized examples
- implementation/
- model/ – Model summaries
Automatic colorization of grayscale images based on generative machine learning algorithms and deep neural networks
Generative models create new content, in contrast to object-recognizing models, which merely differentiate between familiar instances. This opens up novel possibilities for data synthesis, which are needed whenever it is not useful, or even impossible, to predict unknown content. The automatic colorization of grayscale images is a prime example for the use of generative models such as a GAN (Generative Adversarial Network), which in this work assigns plausible color information to colorless patterns. Deep convolutional neural networks are used to handle the colorization as a pixel-to-pixel transformation learned from data. As a result, historical black-and-white images can be enhanced by applying a suitable color restoration.
Based on Pix2Pix by Isola et al., *Image-to-Image Translation with Conditional Adversarial Networks*
Trained on Places365 by Zhou et al., *Places: A 10 Million Image Database for Scene Recognition*
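
As a rough orientation, the sketch below shows how a Pix2Pix-style objective (adversarial loss plus a weighted L1 reconstruction term) can be written with TensorFlow 2 and Keras. The loss functions and the `LAMBDA` weight of 100 follow the general Pix2Pix recipe and are illustrative assumptions, not the exact code of this repository.

```python
import tensorflow as tf

# Weight of the L1 term, as proposed by Isola et al. (illustrative assumption).
LAMBDA = 100

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_generated_output, gen_output, target):
    # Adversarial term: the generator tries to make the discriminator
    # classify its colorized output as real.
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # L1 term: keep the predicted colors close to the ground-truth colors.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    return gan_loss + LAMBDA * l1_loss

def discriminator_loss(disc_real_output, disc_generated_output):
    # Real color images should be classified as real (ones),
    # generated colorizations as fake (zeros).
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    generated_loss = bce(tf.zeros_like(disc_generated_output),
                         disc_generated_output)
    return real_loss + generated_loss
```

In the Pix2Pix setup the discriminator sees the grayscale input together with either the real color image or the generator output, so both losses are conditioned on the input pattern.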
| Release | Date       |
|---------|------------|
| 1.0.0   | 31.01.2020 |
| Software          | Version |
|-------------------|---------|
| Google Colab      | 1.0.0   |
| Keras             | 2.2.4   |
| Python            | 3.6.9   |
| TensorBoard       | 2.1.0   |
| TensorFlow        | 2.1.0   |
| TensorFlow Addons | 0.6.0   |
Thesis supported by the Faculty of Information, Media and Electrical Engineering at TH Köln - University of Applied Sciences
- The implementation relies heavily on TensorFlow and Keras
- The training process borrows from the TensorFlow tutorial
- The hyperparameter search is based on the TensorBoard guide (see the sketch below)
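
For illustration, a minimal hyperparameter search logged with the TensorBoard HParams plugin could look like the following sketch. The hyperparameter names, the value ranges and the `train_and_evaluate()` stub are hypothetical placeholders, not the settings actually used in this work.

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Hypothetical search space; the ranges used in the thesis may differ.
HP_LEARNING_RATE = hp.HParam('learning_rate', hp.Discrete([1e-4, 2e-4]))
HP_BATCH_SIZE = hp.HParam('batch_size', hp.Discrete([1, 4]))
METRIC_L1 = 'l1_error'

# Register the search space and the metric once, so TensorBoard can group runs.
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_LEARNING_RATE, HP_BATCH_SIZE],
        metrics=[hp.Metric(METRIC_L1, display_name='L1 error')],
    )

def train_and_evaluate(hparams):
    # Placeholder: train the colorization model with the given hyperparameters
    # and return a validation metric. The real training loop is not shown here.
    return 0.0

def run(run_dir, hparams):
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)  # record the hyperparameter values for this run
        l1_error = train_and_evaluate(hparams)
        tf.summary.scalar(METRIC_L1, l1_error, step=1)

# Grid search over all combinations, one TensorBoard run per combination.
session = 0
for lr in HP_LEARNING_RATE.domain.values:
    for bs in HP_BATCH_SIZE.domain.values:
        run(f'logs/hparam_tuning/run-{session}',
            {HP_LEARNING_RATE: lr, HP_BATCH_SIZE: bs})
        session += 1
```

The runs can then be compared in TensorBoard's HParams dashboard by pointing it at the `logs/hparam_tuning` directory.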