Commit

feat(doc): updated FAQ

beniz committed Jun 21, 2023
1 parent e692f78 commit 88b417c

Showing 4 changed files with 25 additions and 26 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -4,7 +4,7 @@

<h1 align="center">Generative AI Image Toolset with GANs and Diffusion for Real-World Applications</h1>

**JoliGEN** provides powerful training capabilities for generative AI image-to-image models
**JoliGEN** is an integrated framework for training custom generative AI image-to-image models

Main Features:

Expand Down
12 changes: 11 additions & 1 deletion docs/source/FAQ.rst
@@ -1,5 +1,15 @@
.. _qa:
.. _faq:

############################
Frequently Asked Questions
############################

- **What are the training times for the JoliGEN training examples?**

Expect to train for 10 to 15 days on 2 to 4 GPUs at 256x256 or 360x360 resolution. At 64x64 or 128x128, a couple of days may suffice, which makes these lower resolutions a good starting point (see the command sketch after the list below).

In general:

- With GANs, convergence can be visually assessed within 1 or 2 days, after which fine-grained details start to appear
- With diffusion models for object insertion, training is smoother thanks to direct supervision; good results are obtained within a couple of days, and 300 to 400 epochs are a reasonable target
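
As a concrete illustration, the sketch below shows what a lower-resolution GAN training launch might look like. It is only a sketch: the flag names (``--data_load_size``, ``--data_crop_size``, ``--train_batch_size``, ``--gpu_ids``) and the example config path are assumptions modelled on the JoliGEN examples, so check ``python3 train.py --help`` for the exact options.

.. code:: bash

   # Illustrative sketch: start at 128x128 on two GPUs to assess convergence
   # quickly, then relaunch at 256x256 for fine-grained details.
   python3 train.py \
     --dataroot /path/to/dataset \
     --checkpoints_dir /path/to/checkpoints \
     --name my_experiment \
     --config_json examples/example_gan_horse2zebra.json \
     --data_load_size 128 \
     --data_crop_size 128 \
     --train_batch_size 4 \
     --gpu_ids 0,1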

25 changes: 13 additions & 12 deletions docs/source/index.rst
@@ -5,24 +5,23 @@
.. image:: https://github.com/jolibrain/joliGEN/actions/workflows/github-actions-black-formatting.yml/badge.svg
:target: https://github.com/jolibrain/joliGEN/actions/workflows/github-actions-black-formatting.yml

`JoliGEN <https://github.com/jolibrain/joliGEN/>`_ provides easy-to-use
generative AI for image to image transformations.
`JoliGEN <https://github.com/jolibrain/joliGEN/>`_ is an integrated framework for training custom generative AI image-to-image models

***************
Main Features
***************

- JoliGEN support both GAN and Diffusion models for unpaired and paired
- JoliGEN supports both **GAN and Diffusion models** for unpaired and paired
image to image translation tasks, including domain and style
adaptation with conservation of semantics such as image and object
classes, masks, ...

- JoliGEN generative capabilities are targeted at real world
applications such as Augmented Reality, Dataset Smart Augmentation
and object insertion, Synthetic to real transforms.
applications such as **Controlled Image Generation**, **Augmented Reality**, **Dataset Smart Augmentation**
and object insertion, and **Synthetic to Real** transforms.

- JoliGEN allows for fast and stable training with astonishing results.
A server with REST API is provided that allows for simplified
A **server with REST API** is provided that allows for simplified
deployment and usage.

- JoliGEN has a large scope of options and parameters. To not get
@@ -34,16 +33,16 @@ generative AI for image to image transformations.
Use cases
***********

- AR and metaverse: replace any image element with super-realistic
- **AR and metaverse**: replace any image element with super-realistic
objects
- Smart data augmentation: test / train sets augmentation
- Image manipulation: seamlessly insert or remove objects/elements in
- **Smart data augmentation**: test / train sets augmentation
- **Image manipulation**: seamlessly insert or remove objects/elements in
images
- Image to image translation while preserving semantic, e.g. existing
- **Image to image translation** while preserving semantics, e.g. existing
source dataset annotations
- Simulation to reality translation while preserving elements, metrics,
...
- Image to image translation to cope with scarce data
- **Image generation to enrich datasets**, e.g. to counter dataset imbalance, increase test sets, ...

This is achieved by combining conditioned generator architectures for
fine-grained control, bags of discriminators, configurable neural
@@ -120,9 +119,11 @@ Code is making use of `pytorch-CycleGAN-and-pix2pix
`AttentionGAN <https://github.com/Ha0Tang/AttentionGAN>`_, `MoNCE
<https://github.com/fnzhan/MoNCE>`_ among others.

Some elements from JoliGEN are supported by the French National AI
Elements from JoliGEN are supported by the French National AI
program `"Confiance.AI" <https://www.confiance.ai/en/>`_

Contact: [email protected]

.. toctree::
:maxdepth: 4
:caption: Get Started
12 changes: 0 additions & 12 deletions docs/source/inference.rst
@@ -32,15 +32,3 @@ Using a pretrained glasses insertion model (see above):
python3 gen_single_image_diffusion.py --model-in-file /path/to/model/latest_net_G_A.pth --img-in /path/to/source.jpg --mask-in /path/to/mask.jpg --dir-out /path/to/target_dir/ --img-width 256 --img-height 256
The mask image contains 1 where the object should be inserted and 0 elsewhere.
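
Such a binary mask can be produced with any image tool; the ImageMagick command below is a purely illustrative example that paints a white (non-zero) region where the object should go on a black (zero) background, with made-up coordinates.

.. code:: bash

   # Illustrative only: a 256x256 mask, white where the object is inserted,
   # black everywhere else.
   convert -size 256x256 xc:black -fill white \
       -draw "rectangle 64,64 192,192" /path/to/mask.jpg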

**************
Export model
**************

.. code:: bash

   python3 -m scripts.export_jit_model --model-in-file "/path/to/model_checkpoint.pth" --model-out-file exported_model.pt --model-type mobile_resnet_9blocks --img-size 360
Then ``exported_model.pt`` can be reloaded without JoliGEN to perform
inference with external software, e.g. `DeepDetect
<https://github.com/jolibrain/deepdetect>`_ with the torch backend.
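
As a quick sanity check, and assuming (as the script name suggests) that ``export_jit_model`` produces a TorchScript module, the exported file can be loaded with plain ``torch.jit.load``, without any JoliGEN code:

.. code:: bash

   # Assumption: the export is a TorchScript module loadable by torch.jit.load.
   python3 -c "import torch; m = torch.jit.load('exported_model.pt'); print(m)"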
