## Models

We currently offer the following text-to-image models. FLUX1.1 [pro] is our most capable model; it can generate images at up to 4MP while maintaining an impressive generation time of only 10 seconds per sample.

| Name | HuggingFace repo | License | sha256sum |
|------|------------------|---------|-----------|
| FLUX.1 [schnell] | https://huggingface.co/black-forest-labs/FLUX.1-schnell | apache-2.0 | 9403429e0052277ac2a87ad800adece5481eecefd9ed334e1f348723621d2a0a |
| FLUX.1 [dev] | https://huggingface.co/black-forest-labs/FLUX.1-dev | FLUX.1-dev Non-Commercial License | 4610115bb0c89560703c892c59ac2742fa821e60ef5871b33493ba544683abd7 |
| FLUX.1 [pro] | Available in our API. | | |
| FLUX1.1 [pro] | Available in our API. | | |
| FLUX1.1 [pro] Ultra/raw | Available in our API. | | |

## Open-weights usage

The weights will be downloaded automatically from HuggingFace once you start one of the demos. To download FLUX.1 [dev], you will need to be logged in (see here). If you have downloaded the model weights manually, you can specify the downloaded paths via environment variables:

```bash
export FLUX_SCHNELL=<path_to_flux_schnell_sft_file>
export FLUX_DEV=<path_to_flux_dev_sft_file>
export AE=<path_to_ae_sft_file>
```
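
As a minimal sketch of the login step, assuming you authenticate via the `huggingface_hub` CLI (an assumption; any HuggingFace login method works):

```bash
# Prompts for a HuggingFace access token and stores it locally,
# allowing gated repositories such as FLUX.1-dev to be downloaded.
huggingface-cli login
```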

For interactive sampling, run:

```bash
python -m flux --name <name> --loop
```
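
For example, to start an interactive loop with the FLUX.1 [schnell] weights (the model names match the `--name` options listed for the Gradio demo below):

```bash
python -m flux --name flux-schnell --loop
```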

Or, to generate a single sample, run:

```bash
python -m flux --name <name> \
  --height <height> --width <width> \
  --prompt "<prompt>"
```
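
A concrete invocation might look as follows; the resolution and prompt are illustrative values, not requirements from this document:

```bash
python -m flux --name flux-dev \
  --height 1024 --width 1024 \
  --prompt "a photo of a forest with mist swirling around the tree trunks"
```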

We also provide a Streamlit demo that does both text-to-image and image-to-image. The demo can be run via:

```bash
streamlit run demo_st.py
```

We also offer a Gradio-based demo for an interactive experience. To run the Gradio demo:

```bash
python demo_gr.py --name flux-schnell --device cuda
```

Options:

- `--name`: Choose the model to use (options: "flux-schnell", "flux-dev")
- `--device`: Specify the device to use (default: "cuda" if available, otherwise "cpu")
- `--offload`: Offload the model to CPU when not in use
- `--share`: Create a public link to your demo

To run the demo with the dev model and create a public link:

```bash
python demo_gr.py --name flux-dev --share
```

## Diffusers integration

FLUX.1 [schnell] and FLUX.1 [dev] are integrated with the 🧨 diffusers library. To use them with diffusers, first install it:

```bash
pip install git+https://github.com/huggingface/diffusers.git
```

Then you can use `FluxPipeline` to run the model:

```python
import torch
from diffusers import FluxPipeline

model_id = "black-forest-labs/FLUX.1-schnell"  # you can also use `black-forest-labs/FLUX.1-dev`

pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory

prompt = "A cat holding a sign that says hello world"
seed = 42
image = pipe(
    prompt,
    output_type="pil",
    num_inference_steps=4,  # use a larger number if you are using [dev]
    generator=torch.Generator("cpu").manual_seed(seed),
).images[0]
image.save("flux-schnell.png")
```
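
If you sample with FLUX.1 [dev] instead, more inference steps are needed (per the comment above). The sketch below is illustrative rather than a recommendation from this document: `guidance_scale` is a standard `FluxPipeline` argument, and the parameter values shown are assumptions.

```python
import torch
from diffusers import FluxPipeline

# FLUX.1 [dev] is gated: you must be logged in to HuggingFace with an account
# that has accepted the model license (see the open-weights section above).
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # optional VRAM saving, as above

image = pipe(
    "A cat holding a sign that says hello world",
    num_inference_steps=50,  # [dev] needs more steps than [schnell]'s 4; 50 is an assumed value
    guidance_scale=3.5,      # assumed illustrative value, not taken from this document
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("flux-dev.png")
```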

To learn more, check out the diffusers documentation.