Develop/develop #96

Merged 37 commits on Sep 19, 2021
Commits
1d7861b
Merge pull request #80 from nakajima-john-shotaro/hotfix/v1.0.0
nakajima-john-shotaro Sep 17, 2021
3aa23de
[change] footer position adjust
nakajima-john-shotaro Sep 17, 2021
0046a91
[feat] Implemented break-off queue
urasakikeisuke Sep 17, 2021
ddb95fa
[change] Changed some constants
urasakikeisuke Sep 17, 2021
d9f9e10
[feat] Implemented break-off queue
urasakikeisuke Sep 17, 2021
c54f23f
[fix] Disabled launching Chrome
urasakikeisuke Sep 17, 2021
3eae26f
[change] Changed some functions
urasakikeisuke Sep 17, 2021
b85f32d
[update] brush up
nakajima-john-shotaro Sep 17, 2021
e1c641c
Merge pull request #82 from nakajima-john-shotaro/feature/frontend/po…
urasakikeisuke Sep 17, 2021
c086366
Merge pull request #81 from nakajima-john-shotaro/feature/backend/bre…
nakajima-john-shotaro Sep 17, 2021
7b5c736
[WIP] wip
urasakikeisuke Sep 17, 2021
7f24f56
Merge pull request #83 from nakajima-john-shotaro/feature/backend/twi…
urasakikeisuke Sep 17, 2021
d863e6f
[add] Changing communication method.
nakajima-john-shotaro Sep 17, 2021
1bfb615
Merge pull request #84 from nakajima-john-shotaro/feature/frontend/tw…
nakajima-john-shotaro Sep 17, 2021
810504a
[change] try new icon
nakajima-john-shotaro Sep 17, 2021
a87a3cd
[WIP] wip
urasakikeisuke Sep 17, 2021
9bd6f71
Merge pull request #85 from nakajima-john-shotaro/feature/frontend/tw…
nakajima-john-shotaro Sep 17, 2021
c3a4e93
Merge pull request #86 from nakajima-john-shotaro/feature/backend/twi…
urasakikeisuke Sep 17, 2021
9712df2
[change] Add icon and adjust button position
nakajima-john-shotaro Sep 17, 2021
8698b50
[feat] Post on Twitter
urasakikeisuke Sep 17, 2021
b0652a7
Merge pull request #87 from nakajima-john-shotaro/feature/backend/twi…
urasakikeisuke Sep 17, 2021
25f992a
Merge pull request #88 from nakajima-john-shotaro/feature/frontend/tw…
nakajima-john-shotaro Sep 17, 2021
d1d13f0
[style] Remove unneeded code
nakajima-john-shotaro Sep 17, 2021
1989edd
Merge pull request #89 from nakajima-john-shotaro/develop/twitter
nakajima-john-shotaro Sep 17, 2021
1f0c619
[change] Changed tweet text
urasakikeisuke Sep 17, 2021
1e6b638
[update] Modify design.
nakajima-john-shotaro Sep 17, 2021
6f04963
Merge pull request #91 from nakajima-john-shotaro/feature/frontend/tw…
urasakikeisuke Sep 17, 2021
dc7a6a2
Merge pull request #90 from nakajima-john-shotaro/develop/backend
nakajima-john-shotaro Sep 17, 2021
187450b
[fix] Fixed a bug
urasakikeisuke Sep 17, 2021
7bc13a4
Merge pull request #92 from nakajima-john-shotaro/develop/backend
nakajima-john-shotaro Sep 17, 2021
38ceef6
[update] Changed display images, added favicons, and fixed display in…
nakajima-john-shotaro Sep 18, 2021
09697bd
Merge pull request #93 from nakajima-john-shotaro/feature/frontend/tw…
urasakikeisuke Sep 18, 2021
dcf48cf
Merge pull request #94 from nakajima-john-shotaro/develop/twitter
urasakikeisuke Sep 19, 2021
5335eed
[docs] Changed README
urasakikeisuke Sep 19, 2021
b5a2cca
Merge pull request #95 from nakajima-john-shotaro/feature/readme
nakajima-john-shotaro Sep 19, 2021
43a7a4f
[change] remove unneeded code
nakajima-john-shotaro Sep 19, 2021
6692674
Merge pull request #97 from nakajima-john-shotaro/develop/backend
urasakikeisuke Sep 19, 2021
147 changes: 144 additions & 3 deletions README.md
@@ -1,10 +1,151 @@
# AIcon_dev
# AIcon 2

[![CircleCI](https://circleci.com/gh/nakajima-john-shotaro/AIcon_dev/tree/main.svg?style=svg)](https://circleci.com/gh/nakajima-john-shotaro/AIcon_dev/tree/main)
[![Spell check workflow](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/misspell-fixer.yml/badge.svg?branch=main)](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/misspell-fixer.yml)
[![Docker Build CI](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/docker-ci.yml/badge.svg?branch=main)](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/docker-ci.yml)
[![CodeQL](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/codeql-analysis.yml/badge.svg?branch=main)](https://github.com/nakajima-john-shotaro/AIcon_dev/actions/workflows/codeql-analysis.yml)


## What is AIcon?
### AIcon is a web application that uses state-of-the-art AI to generate images from input text.

#
<div align="center" width="80%" height="auto">
<img src="assets/logo_black.png" alt="logo" title="logo">
</div>


# Example
* ### *Burning ice*
<div align="center" width="80%" height="auto">
<img src="assets/burning_ice.png" alt="Burning ice" title="Burning ice">
</div>

* ### *New green promenade*
<div align="center" width="80%" height="auto">
<img src="assets/New_green_promenade.png" alt="New green promenade" title="New green promenade">
</div>

* ### *Fire and ice*
<div align="center" width="80%" height="auto">
<img src="assets/fire_and_ice.png" alt="Fire and ice" title="Fire and ice">
</div>


# Requirements

- Docker (19.03+)
- Nvidia docker (https://github.com/NVIDIA/nvidia-docker)

# System Requirements

## Minimum
- **CPU**: 64-bit Intel or AMD processor (also known as `x86_64`, `x64`, and `AMD64`)
- **Memory**: 8 GB RAM
- **Graphics**: Nvidia GeForce GTX or RTX series with at least 4 GB of VRAM, or an equivalent Nvidia Quadro card


## Recommended
- **CPU**: 64-bit Intel or AMD processor (also known as `x86_64`, `x64`, and `AMD64`)
- **Memory**: 16 GB RAM
- **Graphics**: Nvidia GeForce RTX series with at least 8 GB of VRAM and Tensor Cores


# Platform Support

- Ubuntu 18.04/20.04
- WSL2 (Requires `CUDA for WSL Public Preview`. See [here](https://developer.nvidia.com/cuda/wsl))


# Usage

## 1. Clone this repo.

## 2. Pull the Docker image
```sh
docker pull magicspell/aicon:latest
```

## (Or build the Docker image yourself)
```sh
cd docker && ./build-docker.sh
```

## 3. Run the Docker container
```sh
cd docker && ./run-docker.sh
```

## 4. Run the AIcon server
```sh
cd backend && python server.py
```

## 5. Connect to the server
With the default settings, you can connect to the server by typing `http://localhost:5050` in the address bar of your browser.
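
As a quick sanity check for step 5 (a convenience sketch only; it assumes the default `localhost:5050` address used above and that the server answers a plain HTTP GET):

```python
# Minimal reachability check for the AIcon server (assumed default host/port).
from urllib.request import urlopen

with urlopen("http://localhost:5050", timeout=5) as response:
    print(response.status)  # 200 means the server is up and serving the page
```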


# Citations
```bibtex
@misc{unpublished2021clip,
title = {CLIP: Connecting Text and Images},
author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
year = {2021}
}
```

```bibtex
@misc{brock2019large,
title = {Large Scale GAN Training for High Fidelity Natural Image Synthesis},
author = {Andrew Brock and Jeff Donahue and Karen Simonyan},
year = {2019},
eprint = {1809.11096},
archivePrefix = {arXiv},
primaryClass = {cs.LG}
}
```

```bibtex
@misc{sitzmann2020implicit,
title = {Implicit Neural Representations with Periodic Activation Functions},
author = {Vincent Sitzmann and Julien N. P. Martel and Alexander W. Bergman and David B. Lindell and Gordon Wetzstein},
year = {2020},
eprint = {2006.09661},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```

```bibtex
@misc{ramesh2021zeroshot,
title = {Zero-Shot Text-to-Image Generation},
author = {Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
year = {2021},
eprint = {2102.12092},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```


```bibtex
@misc{kitaev2020reformer,
title = {Reformer: The Efficient Transformer},
author = {Nikita Kitaev and Łukasz Kaiser and Anselm Levskaya},
year = {2020},
eprint = {2001.04451},
archivePrefix = {arXiv},
primaryClass = {cs.LG}
}
```

A web application that converts input text into images.
```bibtex
@misc{esser2021taming,
title = {Taming Transformers for High-Resolution Image Synthesis},
author = {Patrick Esser and Robin Rombach and Björn Ommer},
year = {2021},
eprint = {2012.09841},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
8 changes: 6 additions & 2 deletions aicon/backend/constant.py
@@ -61,6 +61,8 @@

CORE_COMPATIBLE_PYTORCH_VERSION: str = "1.7.1"
CORE_C2I_QUEUE: str = "c2i_queue"
CORE_C2I_BREAK_QUEUE: str = "c2i_brake_queue"
CORE_C2I_EVENT: str = "c2i_event"
CORE_I2C_EVENT: str = "i2c_event"

CHC_TIMEOUT: float = 7.0
@@ -78,7 +80,9 @@
TWITTER_OAUTH_VERIFIER: str = "oauth_verifier"
TWITTER_OAUTH_TOKEN_SECRET: str = "oauth_token_secret"
TWITTER_IMG_PATH: str = JSON_IMG_PATH
TWITTER_TEXT: str = JSON_TEXT
TWITTER_MODE: str = "mode"
TWITTER_UUID: str = "uuid"

TWITTER_MODE_ICON: str = "icon"
TWITTER_MODE_TWEET: str = "tweet"
@@ -161,11 +165,11 @@ def __str__(self):
return f"{self.arg}"


class AIconEnvVarNotFindError(AIconBaseException):
class AIconEnvVarNotFoundError(AIconBaseException):
def __str__(self):
return f"{self.arg}"


class AIconCookiyNotFindError(AIconBaseException):
class AIconCookieNotFoundError(AIconBaseException):
def __str__(self):
return f"{self.arg}"
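
The two new queue/event constants above add a "break-off" channel alongside the existing core-to-interface queue. As a rough illustration only (the helper below and its dictionary layout are hypothetical, not repo code), these string constants serve as keys into the per-client dictionary that hands the shared `multiprocessing` objects to each worker:

```python
from multiprocessing import Event, Queue
from typing import Any, Dict

CORE_C2I_QUEUE: str = "c2i_queue"
CORE_C2I_BREAK_QUEUE: str = "c2i_brake_queue"
CORE_C2I_EVENT: str = "c2i_event"
CORE_I2C_EVENT: str = "i2c_event"


def build_client_data() -> Dict[str, Any]:
    """Hypothetical helper: one progress queue, one break-off queue, one event per direction."""
    return {
        CORE_C2I_QUEUE: Queue(),        # regular progress payloads
        CORE_C2I_BREAK_QUEUE: Queue(),  # final payloads (see the deep_daze.py diff below)
        CORE_C2I_EVENT: Event(),        # set by the worker when new data is queued
        CORE_I2C_EVENT: Event(),        # signalling in the opposite direction
    }


client_data = build_client_data()
client_data[CORE_C2I_QUEUE].put_nowait({"status": "running"})
client_data[CORE_C2I_EVENT].set()
```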
15 changes: 8 additions & 7 deletions aicon/backend/models/big_sleep/big_sleep.py
@@ -8,7 +8,8 @@

sys.path.append(os.path.join(os.path.dirname(__file__), '../..'))
import random
from multiprocessing import Queue, synchronize
from multiprocessing import Queue
from multiprocessing.synchronize import Event as Event_
from pathlib import Path

import imageio
@@ -296,7 +297,7 @@ def __init__(
save_mp4_path: str = os.path.join(self.client_data[JSON_MP4_PATH], "timelapse.mp4")
self.response_mp4_path: str = save_mp4_path.replace("frontend/", "")

self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=10)
self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20, quality=10)
Suggested change:
- self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20, quality=10)
+ self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20)


text: str = f"{self.client_data[RECEIVED_DATA][JSON_TEXT]}|{self.client_data[RECEIVED_DATA][JSON_CARROT]}"
stick: str = self.client_data[RECEIVED_DATA][JSON_STICK]
@@ -307,13 +308,14 @@ def __init__(
model_name: str = self.client_data[RECEIVED_DATA][JSON_BACKBONE]

self.c2i_queue: Queue = self.client_data[CORE_C2I_QUEUE]
self.i2c_event: synchronize.Event = self.client_data[CORE_I2C_EVENT]
self.c2i_event: Event_ = self.client_data[CORE_C2I_EVENT]
self.i2c_event: Event_ = self.client_data[CORE_I2C_EVENT]

self.put_data: Dict[str, Optional[Union[str, bool]]] = {
JSON_HASH: self.client_uuid,
JSON_CURRENT_ITER: None,
JSON_IMG_PATH: None,
JSON_MP4_PATH: None,
JSON_MP4_PATH: self.response_mp4_path,
JSON_COMPLETE: False,
JSON_MODEL_STATUS: False,
}
@@ -493,7 +495,7 @@ def save_image(self, epoch: int, iteration: int) -> None:
pil_img: Image = T.ToPILImage()(img.squeeze())
pil_img.save(save_filename)

self.writer.append_data(np.uint8(np.array(pil_img) * 255.))
self.writer.append_data(np.uint8(np.array(pil_img)))

def forward(self) -> None:
with torch.no_grad():
@@ -541,9 +543,8 @@ def forward(self) -> None:
pass

self.put_data[JSON_IMG_PATH] = str(self.response_filename)
self.put_data[JSON_MP4_PATH] = self.response_mp4_path
self.put_data[JSON_COMPLETE] = True

self.c2i_queue.put_nowait(self.put_data)
self.c2i_event.set()

torch.cuda.empty_cache()
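
The import change at the top of this file (and the matching one in `deep_daze.py` below) is a typing detail, noted here as a general Python point rather than anything repo-specific: `multiprocessing.Event` is a factory callable, not a class, so type checkers reject it as an annotation; the object it returns is an instance of `multiprocessing.synchronize.Event`, which is what the `Event_` alias names. A minimal standalone sketch:

```python
from multiprocessing import Event, Queue
from multiprocessing.synchronize import Event as Event_  # concrete class returned by Event()
from typing import Tuple


def make_channel() -> Tuple[Queue, Event_]:
    """Illustrative only: build one queue/event pair with a checkable annotation."""
    queue: Queue = Queue()
    event: Event_ = Event()  # Event() is the factory; Event_ is its return type
    return queue, event


queue, event = make_channel()
event.set()
assert event.is_set()
```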
16 changes: 9 additions & 7 deletions aicon/backend/models/deep_daze/deep_daze.py
@@ -7,8 +7,8 @@

sys.path.append(os.path.join(os.path.dirname(__file__), '../..'))
import random
from multiprocessing import synchronize
from multiprocessing import Queue
from multiprocessing.synchronize import Event as Event_
from pathlib import Path
from queue import Empty

@@ -265,7 +265,7 @@ def __init__(
save_mp4_path: str = os.path.join(self.client_data[JSON_MP4_PATH], "timelapse.mp4")
self.response_mp4_path: str = save_mp4_path.replace("frontend/", "")

self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=10)
self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20, quality=10)
Suggested change:
- self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20, quality=10)
+ self.writer: imageio.core.Format.Writer = get_writer(save_mp4_path, fps=20)


text: str = self.client_data[RECEIVED_DATA][JSON_TEXT]
seed: Optional[int] = self.client_data[RECEIVED_DATA][JSON_SEED]
@@ -278,13 +278,15 @@
model_name: str = str(self.client_data[RECEIVED_DATA][JSON_BACKBONE])

self.c2i_queue: Queue = self.client_data[CORE_C2I_QUEUE]
self.i2c_event: synchronize.Event = self.client_data[CORE_I2C_EVENT]
self.c2i_brake_queue: Queue = self.client_data[CORE_C2I_BREAK_QUEUE]
self.c2i_event: Event_ = self.client_data[CORE_C2I_EVENT]
self.i2c_event: Event_ = self.client_data[CORE_I2C_EVENT]

self.put_data: Dict[str, Optional[Union[str, bool]]] = {
JSON_HASH: self.client_uuid,
JSON_CURRENT_ITER: None,
JSON_IMG_PATH: None,
JSON_MP4_PATH: None,
JSON_MP4_PATH: self.response_mp4_path,
JSON_COMPLETE: False,
JSON_MODEL_STATUS: False,
}
@@ -541,7 +543,7 @@ def save_image(self, epoch: int, iteration: int, img: Optional[torch.Tensor] = N
pil_img: Image = T.ToPILImage()(img.squeeze())
pil_img.save(save_filename)

self.writer.append_data(np.uint8(np.array(pil_img) * 255.))
self.writer.append_data(np.uint8(np.array(pil_img)))

def forward(self):
if exists(self.start_image):
@@ -614,10 +616,10 @@ def forward(self):
pass

self.put_data[JSON_IMG_PATH] = str(self.response_filename)
self.put_data[JSON_MP4_PATH] = self.response_mp4_path
self.put_data[JSON_COMPLETE] = True

self.c2i_queue.put_nowait(self.put_data)
self.c2i_brake_queue.put_nowait(self.put_data)
self.c2i_event.set()

torch.cuda.empty_cache()
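
Both models also drop the `* 255.` when appending frames to the time-lapse writer. A quick standalone check (not repo code) of why that scaling was wrong: converting a PIL RGB image with `np.array` already yields `uint8` values in 0–255, so multiplying by 255 again before the `uint8` cast overflows and garbles the frames.

```python
import numpy as np
from PIL import Image

# A PIL RGB image converts to a uint8 array that is already scaled to 0-255.
frame = np.array(Image.new("RGB", (2, 2), color=(10, 128, 255)))
print(frame.dtype, frame.max())  # uint8 255

# The removed code multiplied by 255. before casting back to uint8;
# the out-of-range values overflow the cast, so the written frame
# no longer matches the original pixels.
wrapped = np.uint8(frame * 255.0)
print(frame[0, 0], "->", wrapped[0, 0])
```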
