This repo is the official PyTorch implementation of "DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Contrastive Prompt-Tuning" with Stable-Diffusion-webui.
Stable-Diffusion-webui extension version: DreamArtist-sd-webui-extension
Everyone is an artist. Rome wasn't built in a day, but your artist dreams can be!
With just one training image, DreamArtist learns its content and style, generating diverse, high-quality images with high controllability. DreamArtist embeddings can easily be combined with additional descriptions, and the two learned embeddings can also be combined with each other.
Clone this repo.

```bash
git clone https://github.com/7eu7d7/DreamArtist-stable-diffusion
```
Then follow the installation instructions of Stable-Diffusion-webui.
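As a rough sketch of that step (assuming the default clone directory name and the standard webui launch scripts are present in this fork), installation and first launch on Linux could look like this; on Windows, run `webui-user.bat` instead:

```bash
# Enter the cloned repo (directory name comes from the clone command above)
cd DreamArtist-stable-diffusion

# First launch creates the Python venv, installs dependencies, and starts the web UI.
./webui.sh
```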
First, create the positive and negative embeddings in the DreamArtist Create Embedding tab.
After that, the names of the positive and negative embeddings ({name} and {name}-neg) should be filled into the txt2img tab, together with some common descriptions. This ensures a correct preview image.
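For illustration, if the embedding pair were named `ani-face` (a placeholder name; the extra tags below are likewise only examples of common descriptions), the txt2img prompts might look like:

```
Prompt:          ani-face, masterpiece, best quality
Negative prompt: ani-face-neg, lowres, bad anatomy
```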
Then, select the positive embedding, set the parameters and image folder path in the Train tab, and start training.
The corresponding negative embedding is loaded automatically.
If your VRAM is low or you want to save time, you can uncheck reconstruction.
It is better to train without filewords.
Remember to check the option below; otherwise the preview will be wrong.
Fill the trained positive and negative embeddings into txt2img to generate images with the DreamArtist prompt.
Tested models:

- Stable Diffusion v1.5
- animefull-latest
- Anything v3.0
Embeddings can be transferred between different models trained on the same dataset.
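A minimal sketch of such a transfer, assuming the default webui layout where trained embeddings are stored as `.pt` files in the `embeddings/` folder (the embedding name `myart` and the target path are placeholders):

```bash
# Copy the positive and negative embedding files into another webui install,
# then select a compatible checkpoint in that UI before generating.
cp embeddings/myart.pt embeddings/myart-neg.pt /path/to/other-webui/embeddings/
```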