PareDiffusers


parediffusers on PyPI

The library pares down the features of diffusers to the minimum needed to generate images without calling huggingface/diffusers functions, in order to understand the inner workings of the library.

Why PareDiffusers?

PareDiffusers was born out of curiosity and a desire to demystify how diffusion models generate images and how the diffusers library works.

I write blog-style notebooks that explain how it works using a top-down approach: first generate images with diffusers to understand the overall flow (a sketch of that starting point follows), then gradually replace diffusers code with plain PyTorch. In the end, PareDiffusers contains no diffusers code at all.
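
For reference, that diffusers starting point looks roughly like this, a minimal sketch using the public diffusers API (the model name matches the Usage section below):

import torch
from diffusers import StableDiffusionPipeline

# Load the reference pipeline from diffusers and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
image = pipe("painting depicting the sea, sunrise, ship").images[0]
image.save("sea.png")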

I hope that it helps others who share a similar interest in the inner workings of image generation.

Versions

  • v0.0.0: After Ch0.0.0, implement StableDiffusionPipeline.
  • v0.1.2: After Ch0.1.0, implement DDIMScheduler (see the sketch after this list).
  • v0.2.0: After Ch0.2.0, implement UNet2DConditionModel.
  • v0.3.1: After Ch0.3.0, implement AutoencoderKL.
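
Each version swaps one diffusers component for a from-scratch implementation. As a taste of what that looks like, here is a minimal sketch of a single deterministic DDIM step (eta = 0) in plain PyTorch; the variable names are illustrative, not the library's actual code:

import torch

def ddim_step(x_t, eps, alpha_t, alpha_prev):
    # Recover the predicted clean latent x0 from the noisy latent and the
    # UNet's noise prediction, then re-noise it to the previous timestep.
    pred_x0 = (x_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()
    return alpha_prev.sqrt() * pred_x0 + (1 - alpha_prev).sqrt() * eps

# Dummy tensors just to show shapes; in the real loop, eps comes from
# UNet2DConditionModel and the alphas from the noise schedule.
x_prev = ddim_step(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64),
                   torch.tensor(0.50), torch.tensor(0.55))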

Table of Contents

version: v0.0.0

  • Generate images using diffusers
  • Implement StableDiffusionPipeline
  • Implement DDIMScheduler
  • Implement UNet2DConditionModel
  • Implement AutoencoderKL
  • Test PareDiffusersPipeline via pip install parediffusers.
  • Play with prompt_embeds: make gradation images using two prompts (see the sketch after this list).
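
The gradation notebook's idea, as a minimal sketch: encode two prompts, interpolate between their embeddings, and generate an image at each blend. This assumes the pipeline can return text embeddings and accept them back via a prompt_embeds keyword, as diffusers' pipelines do; the encode_prompt helper and the keyword are assumptions here, not confirmed PareDiffusionPipeline API:

import torch

# encode_prompt and prompt_embeds are assumed names, not confirmed API.
emb_a = pipe.encode_prompt("a calm sea at sunrise")
emb_b = pipe.encode_prompt("a stormy sea at night")
images = []
for w in torch.linspace(0.0, 1.0, steps=5):
    blended = torch.lerp(emb_a, emb_b, w.item())
    images.append(pipe(prompt_embeds=blended))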

version: v0.1.3

  • Generate images using diffusers
  • Implement StableDiffusionPipeline
  • Implement DDIMScheduler
  • Implement UNet2DConditionModel
  • Implement AutoencoderKL
  • Test PareDiffusersPipeline via pip install parediffusers.

version: v0.2.0

  • Generate images using diffusers
  • Implement StableDiffusionPipeline
  • Implement DDIMScheduler
  • Implement UNet2DConditionModel
  • Implement AutoencoderKL
  • Test PareDiffusersPipeline via pip install parediffusers.

version: v0.3.1

  • Generate images using diffusers
  • Implement StableDiffusionPipeline
  • Implement DDIMScheduler
  • Implement UNet2DConditionModel
  • Implement AutoencoderKL
  • Test PareDiffusersPipeline via pip install parediffusers.

Usage

import torch
from parediffusers import PareDiffusionPipeline

device = torch.device("cuda")
dtype = torch.float16
model_name = "stabilityai/stable-diffusion-2"

# Load the pared-down pipeline and generate an image from a text prompt.
pipe = PareDiffusionPipeline.from_pretrained(model_name, device=device, dtype=dtype)
prompt = "painting depicting the sea, sunrise, ship, artstation, 4k, concept art"
image = pipe(prompt)
display(image)  # display() is available in Jupyter/IPython notebooks
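
In a plain Python script, save the result instead (assuming the pipeline returns a PIL image, as the display() call suggests):

image.save("sea_sunrise.png")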

Contribution

I started this project to help me understand the codebase so that I can contribute to diffusers' OSS development. There may be mistakes in my explanations, so if you find any, please feel free to correct them via an issue or pull request.
