Merge pull request #19 from gbtb/vNext

vNext
Showing 28 changed files with 2,013 additions and 298 deletions.
`.gitmodules` — deleted:

```ini
[submodule "InvokeAI"]
    path = InvokeAI
    url = ../../invoke-ai/InvokeAI
    branch = main
[submodule "stable-diffusion-webui"]
    path = stable-diffusion-webui
    url = ../../AUTOMATIC1111/stable-diffusion-webui
```

Submodule `InvokeAI` deleted from `93cdb4`.
`README.md` (as merged):
# Table of contents
- [nix-stable-diffusion](#nix-stable-diffusion)
  * [What's done](#whats-done)
- [How to use it?](#how-to-use-it)
  * [InvokeAI](#invokeai)
  * [stable-diffusion-webui aka 111AUTOMATIC111 fork](#stable-diffusion-webui-aka-111automatic111-fork)
  * [Hardware quirks](#hardware-quirks)
    + [AMD](#amd)
    + [Nvidia](#nvidia)
- [What's (probably) needed to be done](#whats-probably-needed-to-be-done)
- [Current versions](#current-versions)
- [Meta](#meta)
  * [Contributions](#contributions)
  * [Acknowledgements](#acknowledgements)
  * [Similar projects](#similar-projects)
# nix-stable-diffusion
Flake for running SD on NixOS
## What's done
* Nix flake capable of running both the InvokeAI and stable-diffusion-webui flavors of SD, with no need to reach for pip or conda (including AMD ROCm support)
* ...???
* PROFIT
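
You can list everything the flake provides without building anything (a sketch; it assumes the repo lives at `github:gbtb/nix-stable-diffusion`, and the output names follow the `.#invokeai.*` / `.#webui.*` packages described below):

```shell
# Enumerate the flake's outputs from a local checkout...
nix flake show

# ...or straight from GitHub, without cloning first
nix flake show github:gbtb/nix-stable-diffusion
```
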
# How to use it?
## InvokeAI
1. Clone repo
1. Run `nix run .#invokeai.{default,amd} -- --web --root_dir "folder for configs and models"`, wait for the package to build
    1. `.#invokeai.default` builds the package with the default torch-bin, which has CUDA support by default
    1. `.#invokeai.amd` builds the package with the torch packages overridden by their ROCm-enabled bin versions
1. Weights download
    1. **Built-in CLI way.** Upon first launch, InvokeAI will check its default config dir (`~/invokeai`) and suggest running the built-in TUI startup configuration script, which helps you download the default models or supply existing ones to InvokeAI. Follow the instructions and finish the configuration. Note: you can also pass the `--root_dir` option to pick another location for configs/models installation. More fine-grained directory setup options are also available; run `nix run .#invokeai.amd -- --help` for more info.
    2. **Built-in GUI way.** Recent versions of InvokeAI added a GUI for model management. See the upstream [docs](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/#installation-via-the-webui) on that matter.
1. CLI arguments for invokeai itself can be supplied after the `--` part of the `nix run` command
1. If you need to run additional scripts (like invokeai-merge or invokeai-ti), you can run `nix build .#invokeai.amd` and call those scripts manually, like this: `./result/bin/invokeai-ti` (see the sketch after this list).
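
A minimal end-to-end session might look like this (a sketch; `~/invokeai` as the root dir is just an illustrative choice):

```shell
# Grab the flake and launch the InvokeAI web UI with ROCm-enabled torch;
# configs and models will be kept under ~/invokeai
git clone https://github.com/gbtb/nix-stable-diffusion && cd nix-stable-diffusion
nix run .#invokeai.amd -- --web --root_dir ~/invokeai

# Build the package once, then call auxiliary scripts from ./result/bin
nix build .#invokeai.amd
./result/bin/invokeai-ti --help
```
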
## stable-diffusion-webui aka 111AUTOMATIC111 fork
1. Clone repo
1. Run `nix run .#webui.{default,amd} -- --data-dir "runtime folder for webui stuff" --ckpt-dir "folder with pre-downloaded main SD models"`, wait for the packages to build
    1. `.#webui.default` builds the package with the default torch-bin, which has CUDA support by default
    1. `.#webui.amd` builds the package with the torch packages overridden by their ROCm-enabled bin versions
1. Webui is not a proper python package by itself, so I had to make a multi-layered wrapper script which sets the required env and args. `bin/flake-launch` is the top-level wrapper, which sets default args and runs by default. `bin/launch.py` is a thin wrapper around the original launch.py which only sets PYTHONPATH with the required packages. Both wrappers pass additional args further down the pipeline. To list all available args, you may run `nix run .#webui.amd -- --help` (see the example after this list).
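
For instance (a sketch; both folder paths are illustrative assumptions):

```shell
# Launch webui with ROCm-enabled torch, keeping runtime state and
# pre-downloaded checkpoints in separate folders
nix run .#webui.amd -- --data-dir ~/sd-webui-data --ckpt-dir ~/sd-models

# List all args understood by the wrappers and by webui itself
nix run .#webui.amd -- --help
```
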
## Hardware quirks
### AMD
If you get the error `"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"`, your GPU is probably not fully supported by ROCm (only several GPUs are supported by default), and you have to set an env variable to trick ROCm into running: `export HSA_OVERRIDE_GFX_VERSION=10.3.0` (see the example below).
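
For example (a sketch; the `10.3.0` override targets gfx1030-compatible RDNA2 cards, so adjust it for your GPU):

```shell
# Make ROCm treat the GPU as gfx1030 so it finds a matching code object,
# then launch as usual
export HSA_OVERRIDE_GFX_VERSION=10.3.0
nix run .#invokeai.amd -- --web
```
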
### Nvidia
* **Please note that I don't have an Nvidia GPU, and therefore I can't test that the CUDA functionality actually works. If something is broken in that department, please open an issue, or even better, submit a PR with a proposed fix.**
* xformers for CUDA hasn't been tested. The python package was added to the flake, but it's missing the triton compiler. It might partially work, so please test it and report back :)
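
If you do have an Nvidia card, a quick smoke test for torch/CUDA outside the flake might look like this (an illustrative check, not part of the flake; the attribute may be `torch-bin` or `pytorch-bin` depending on your nixpkgs version):

```shell
# Confirm the driver sees the card at all
nvidia-smi

# Then check that a binary torch build can reach it; should print "True"
nix-shell -p 'python3.withPackages (ps: [ ps.torch-bin ])' \
  --run 'python -c "import torch; print(torch.cuda.is_available())"'
```
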
# What's (probably) needed to be done
- [ ] Definitions of the most popular missing packages should be submitted to Nixpkgs
- [ ] Try to make webui use the same paths and filenames for weights as InvokeAI (through patching/args/symlinks)
- [ ] Create a PR to pynixify adding a "skip-errors mode", so that no ugly patches would be necessary
- [ ] Increase reproducibility by replacing models downloaded at runtime with proper flake inputs

# Current versions
- InvokeAI 2.3.1.post2
- stable-diffusion-webui 12.03.2023
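
To see which upstream revisions your checkout actually pins, you can inspect the flake's lock data (illustrative; the input names depend on what flake.nix declares):

```shell
# Print the flake's inputs with their locked revisions and dates
nix flake metadata
```
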
# Meta
## Contributions
Contributions are welcome. I have no intention of keeping up with the development pace of these apps, especially Automatic's fork :).
However, I will occasionally update at least InvokeAI's flake. As for versioning, I will try to follow semver with respect to the packaged upstream apps as well, which means a major version bump for an upstream app = a major version bump for this flake.

## Acknowledgements
Many many thanks to https://github.com/cript0nauta/pynixify, which generated all the boilerplate for the missing python packages.
Also thanks to https://github.com/colemickens/stable-diffusion-flake and https://github.com/skogsbrus/stable-diffusion-nix-flake for inspiration and some useful code snippets.

## Similar projects
You may want to check out [Nixified-AI](https://github.com/nixified-ai/flake). It aims to support a broader range of AI models (e.g. text models) on NixOS.