
# Generative Art Using Neural Visual Grammars and Dual Encoders

## Arnheim 1

The original algorithm from the paper *Generative Art Using Neural Visual Grammars and Dual Encoders*, running on a single GPU, uses a genetic algorithm to optimize any image. This approach is far more general, but much slower, than Arnheim 2 below, which uses gradients.
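The evolutionary loop can be sketched as follows. This is a minimal, hypothetical illustration, not the repository's actual code: the fitness function here is just distance to a hidden target vector, standing in for the CLIP dual-encoder similarity between the rendered image and a text prompt that Arnheim 1 actually maximizes.

```python
import random

# Stand-in for the CLIP dual-encoder score (hypothetical simplification):
# fitness is the negative squared distance of a "genome" to a target vector.
TARGET = [0.2, -0.5, 0.9, 0.1]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Gaussian perturbation of every gene.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=20, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(-1, 1) for _ in range(len(TARGET))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half, refill with mutants.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(population, key=fitness)

best = evolve()
```

Because selection keeps the elite unmutated, the best fitness improves monotonically; this generality (no gradients required) is what makes Arnheim 1 applicable to any image representation, at the cost of speed.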

## Arnheim 2

A reimplementation of the Arnheim 1 generative architecture in the CLIPDraw framework, allowing its parameters to be optimized with gradients. Much more efficient than Arnheim 1 above, but it requires differentiating through the image itself.
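"Differentiating through the image" means the rendering step must itself be differentiable, so the chain rule can carry the score's gradient back to the generator's parameters. A toy sketch, with all names illustrative (the real system uses CLIPDraw's differentiable rasterizer and a CLIP similarity score; here the renderer is a fixed linear map and the score is a dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(8, 4))   # toy differentiable renderer: params -> pixels
t = rng.normal(size=8)        # stand-in for a target (e.g. text) embedding

def score(params):
    image = R @ params        # differentiable rendering step
    return float(t @ image)   # dot-product "similarity" to the target

params = np.zeros(4)
lr = 0.01
for _ in range(100):
    grad = R.T @ t            # d(score)/d(params), via the chain rule through R
    params += lr * grad       # gradient ascent on the CLIP-like score
```

The key contrast with Arnheim 1: instead of sampling and selecting mutations blindly, every update moves directly uphill on the score, which is why the gradient-based version is so much faster.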

## Arnheim 3 (aka CLIP-CLOP: CLIP-Guided Collage and Photomontage)

A spatial-transformer-based Arnheim implementation for generating collage images. It combines evolution and gradient-based training to assemble collages from image patches ranging from opaque to transparent.
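The evolution-plus-training combination can be sketched as an outer evolutionary loop over a population of collages, each refined by inner gradient steps on its patch placements. Everything below is a hypothetical simplification: placements are reduced to 2-D translations and the CLIP loss is replaced by distance to fixed target positions, whereas the real Arnheim 3 optimizes full spatial-transformer affine parameters against a CLIP score.

```python
import numpy as np

rng = np.random.default_rng(1)
TARGETS = np.array([[0.5, -0.2], [-0.3, 0.4]])  # ideal positions, one per patch

def loss(positions):
    # Stand-in for the CLIP loss: squared distance of patches to targets.
    return float(np.sum((positions - TARGETS) ** 2))

def train_step(positions, lr=0.1):
    grad = 2 * (positions - TARGETS)  # analytic gradient of the toy loss
    return positions - lr * grad

def evolve(pop_size=8, outer=10, inner=5):
    pop = [rng.normal(size=TARGETS.shape) for _ in range(pop_size)]
    for _ in range(outer):
        # Inner loop: gradient "training" of each individual's placements.
        for i in range(pop_size):
            for _ in range(inner):
                pop[i] = train_step(pop[i])
        # Outer loop: evolutionary selection, then mutated copies.
        pop.sort(key=loss)
        half = pop[: pop_size // 2]
        pop = half + [p + rng.normal(scale=0.05, size=p.shape) for p in half]
    return min(pop, key=loss)

best = evolve()
```

Evolution handles the discrete, non-differentiable choices (which patches to use), while gradients handle the continuous placement parameters; the sketch above only shows the continuous half.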

Example patch datasets, with the exception of 'Fruit and veg', are provided under the CC BY 4.0 licence. The 'Fruit and veg' patches in collage_patches/fruit.npy are based on a subset of the Kaggle Fruits 360 dataset and are provided under the CC BY-SA 4.0 licence, as are all example collages that use them.

*The Fall of the Damned* by Rubens and Eaton. Collages made of different numbers of tree-leaf patches (bulls in the top row), as well as Degas-inspired ballet dancers made from animals, faces made of fruit, and still lifes or landscapes made from patches of animals.

## Usage

Usage instructions are included in the Colabs, which open and run on the free-to-use Google Colab platform; just click the buttons below. Improved performance and longer timeouts are available with Colab Pro.

- Arnheim 1: Open in Colab
- Arnheim 2: Open in Colab
- Arnheim 3: Open in Colab
- Arnheim 3 Patch Maker: Open in Colab

## Video illustration of the CLIP-CLOP Collage and Photomontage Generator (Arnheim 3)

CLIP-CLOP Collage and Photomontage Generator

## Citing this work

If you use this code (or any derived code), data, or these models in your work, please cite the relevant accompanying paper: *Generative Art Using Neural Visual Grammars and Dual Encoders* or *CLIP-CLOP: CLIP-Guided Collage and Photomontage*.

@misc{fernando2021genart,
      title={Generative Art Using Neural Visual Grammars and Dual Encoders},
      author={Chrisantha Fernando and S. M. Ali Eslami and Jean-Baptiste Alayrac and Piotr Mirowski and Dylan Banarse and Simon Osindero},
      year={2021},
      eprint={2105.00162},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@inproceedings{mirowski2022clip,
      title={CLIP-CLOP: CLIP-Guided Collage and Photomontage},
      author={Piotr Mirowski and Dylan Banarse and Mateusz Malinowski and Simon Osindero and Chrisantha Fernando},
      booktitle={Proceedings of the Thirteenth International Conference on Computational Creativity},
      year={2022}
}

## Disclaimer

This is not an official Google product.

CLIPDraw is provided under its license, Copyright 2021 Kevin Frans.

Other works may be copyright of their respective authors.