Emulate NovelAI #2017
Replies: 88 comments · 306 replies
-
Is NovelAI aimed just at anime?
-
How do I apply the sd_hijack patch?
-
Hi, I figured out how to recreate the NAI result for this image. 4chan forgot to account for the "quality tags" that NAI prepends to the prompt. I found this post which clarifies how the quality tags work: it just adds "masterpiece, best quality" to the start of the prompt. So the prompt becomes "masterpiece, best quality, masterpiece, asuka langley sitting cross legged on a chair". Settings: 28 steps, Euler, CFG scale 12, seed 2870305590. Still slightly off, maybe the CFG scale needs a tweak or something, but it's very close. Anyway, someone should go tell 4chan; I can't be bothered to complete their godforsaken captcha. Also, shame on NovelAI for what they're doing to Auto.
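The prepend behavior described above can be sketched in a few lines (the helper name is invented for illustration; the tag string is the one quoted in this post):

```python
# Sketch of NAI's quality-tag behavior as described in this comment.
QUALITY_TAGS = "masterpiece, best quality"

def nai_prompt(user_prompt: str, add_quality_tags: bool = True) -> str:
    """Build the prompt NAI actually sends: quality tags first, then the user text."""
    if add_quality_tags:
        return f"{QUALITY_TAGS}, {user_prompt}"
    return user_prompt

print(nai_prompt("masterpiece, asuka langley sitting cross legged on a chair"))
# -> masterpiece, best quality, masterpiece, asuka langley sitting cross legged on a chair
```

Note the duplication: the user's own "masterpiece" is kept, so the tag can appear twice in the final prompt, exactly as in the example above.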
-
Sorry if it is a silly question. I got this error with the edited prompt_parser.py: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm). I'm not an expert with torch; any help would be much appreciated!
-
Could you give me a tutorial that even someone like me, who knows nothing about Python and the like, can understand?
-
I'm not sure where I should add/delete that, as I don't have those "-" lines in my sd_hijack.py file. Does someone have the edited sd_hijack.py file?
-
Also, where can I find this?
-
Kind of a side question, but how can we know if the VAE is working? If I named my model "AnimeFullLatest.ckpt", the correct name for the VAE file is "AnimeFullLatest.vae.pt", correct? There's nothing else I need to do regarding that?
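For reference, the naming rule in this question can be written down as a tiny check. This is illustrative only: webui looks for a `.vae.pt` file with the same base name sitting next to the checkpoint.

```python
from pathlib import Path

def expected_vae_name(ckpt_filename: str) -> str:
    """Name webui expects for the VAE next to a checkpoint:
    same base name, with .vae.pt in place of .ckpt."""
    return Path(ckpt_filename).stem + ".vae.pt"

assert expected_vae_name("AnimeFullLatest.ckpt") == "AnimeFullLatest.vae.pt"
```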
-
Seems like the latest changes broke it again. Any ideas?
-
what "Stable Diffusion finetune hypernetwork" actually do since it doesnt show anything even when i pasted the leaked modules folder to hypernetworks folder |
-
I got the error 'Options' object has no attribute 'CLIP_ignore_last_layers', I'm guessing from sd_hijack. Does anyone have a solution?
-
What do "Low Quality + Bad Anatomy" and "Low Quality" options in NovelAI website add to the negative prompt? |
-
Has anyone been able to replicate NovelAI multiprompts? I can't get the same results using the AND keyword.
-
... am I the only one who finds it a bit depressing that this repo's most upvoted discussion is about replicating anime, despite SD's capabilities being incredibly more powerful than that?
-
As far as training goes, what do I do with the VAE?
-
I need some help here: when I try to generate an image I get this really long error.
-
Hi! I followed the steps in the guide here: https://rentry.org/voldy#-novelai-setup- and then tried to replicate the "Hello world" image, but I always get the following output image. This is the exact prompt. The only different things in the prompt are the model name (I renamed the ckpt file based on the name in the guide at the provided link) and the hash of the model. What am I doing wrong?
-
This can be caused by compatibility settings, for example using a Mac, a GPU with low memory (and the settings needed to make it work), or maybe the new Arc GPUs.
SD is designed to run on an NVIDIA GPU with sufficient memory and nothing else. Someone made torch compatible with lots of other hardware, but it didn't come with any warranty.
In reply to AIA (Jan 27, 2023): "…can you share your commit hash and your set COMMANDLINE_ARGS=?"
-
What exactly is going on here?
-
Hello, thank you very much for the awesome guide!
-
While this is bumped, might as well share this: https://blog.novelai.net/introducing-nai-smea-higher-image-generation-resolutions-9b0034ffdc4b |
-
Disclaimer: I have no relationship with Automatic1111/Voldy; I take all responsibility for this discussion and everything shared here.
compiled, edited and tested by aiamiauthor
Newsfeed https://rentry.org/sdupdates
author: questianon !!YbTGdICxQOw (malt#6065)
Anything 3.0 VAE is just the NovelAI VAE but renamed
NAI's "Variations" feature does (by enhance anon): Alright, variations is really similar to enhance. It sends it to img2img with strength hardcoded @ 0.8, and then increments the seed by 1 for each variation given. Nothing super special.
NAI's "Enhance" feature does (by anon): It upscales the image with Lanczos (defaults to 1.5x, which is the max), and then sends it to img2img with [whatever sampler you specified] @ 50 steps, with the denoising strength ranging from 0.2 to 0.6 (this is the "Magnitude" value that NAI shows, ranging from 1 to 5). It's like a much more expensive version of SD Upscale, which does it as tiles to save VRAM, and instead this does it on the whole image at once, so it requires more VRAM.
List of image manipulation (img2img) methods and their implementation state #2940
No Man's Guide to GPUs
https://docs.google.com/document/u/0/d/1lF9_5MIhALo7xCxKpQCZNL_jrJdUHYgJ3prET5yC1rI/mobilebasic
https://rentry.org/stablediffgpubuy
Stable Diffusion benchmark - GPU - Spreadsheet
https://docs.google.com/spreadsheets/d/1Zlv4UFiciSgmJZncCujuXKHwc4BcxbjbSBg71-SdeNk/edit#gid=0
NAI Quick Start Guide
https://rentry.org/nai-speedrun
Every graphics card model has a slightly different result.
[Textual Inversion] [Dreambooth] [Hypernetworks][Aesthetic Gradients]
[Hypernetworks]
https://rentry.org/hypernetwork4dumdums
[Textual Inversion]
https://rentry.org/textard
https://pastebin.com/dqHZBpyA
https://github.com/nicolai256/Stable-textual-inversion_win
[Dreambooth]
https://rentry.org/simple-db-elinas
https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#training-on-a-8-gb-gpu
https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
Benchmark Xformers
Xformers .bat --xformers or --force-enable-xformers
NO --precision full --medvram --lowvram --no-half-vae --opt-split-attention --opt-split-attention-v1
NO Hypernetworks
NO v2.pt
XFORMERS Build Dev 10/13/2022
GPU ( ? )
CUDA 11.8 (mid)
PYTHON 3.10.6 (mid)
cuDNN 8.6.0 (minor)
NVCC 11.8 (minor)
Driver Version: 522.25 (minor)
Pytorch-cuda=11.8 (minor)
whl/cu118 (minor)
Regarding WHL: the wheel has to match your Python version (3.8/3.9/3.10) and your CUDA version (11.6/11.7/11.8); if either mismatches, it won't work.
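As a quick illustration of that matching rule: a wheel filename encodes a Python tag (cp38/cp39/cp310) and usually a CUDA build tag. The toy check below only shows what "matching" means; pip performs the real compatibility check, and the filename used here is hypothetical.

```python
def wheel_matches(wheel_name: str, py_version: tuple, cuda_tag: str) -> bool:
    """Toy check: a cp310 wheel only works on Python 3.10,
    and a +cu118 build expects CUDA 11.8."""
    py_tag = f"cp{py_version[0]}{py_version[1]}"
    return py_tag in wheel_name and cuda_tag in wheel_name

# Hypothetical filename, for illustration only:
name = "xformers-0.0.14.dev0+cu118-cp310-cp310-win_amd64.whl"
assert wheel_matches(name, (3, 10), "cu118")
assert not wheel_matches(name, (3, 9), "cu118")
```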
Regarding GPU: a theory I've had myself for a little while now. If that's the case, it would make sense that Ampere cards get closer to what NAI produces, as I imagine NAI is probably running their backend on Ampere-based cards, like a bunch of A100s or something like that.
Benchmark Vanilla
Vanilla .bat
NO --xformers --force-enable-xformers --precision full --medvram --lowvram --no-half-vae --opt-split-attention --opt-split-attention-v1
NO Hypernetworks
NO v2.pt
NAI - EULER - NSFW - FULL
NAI - EULER A - NSFW - FULL
NAI - EULER - SFW - CURATED
NAI - EULER A - SFW - CURATED
EULER - NSFW - FULL - PRUNED - VAE - EMA FALSE - ENSD NO - CLIP SKIP 2
EULER - NSFW - FULL - PRUNED - VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2
EULER A - NSFW - FULL - PRUNED - VAE - EMA FALSE - ENSD NO - CLIP SKIP 2
EULER A - NSFW - FULL - PRUNED - VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2
EULER - SFW - CURATED - PRUNED - VAE - EMA FALSE - ENSD NO - CLIP SKIP 2
EULER - SFW - CURATED - PRUNED - VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - VAE - EMA FALSE - ENSD NO - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2 ⚠️ERROR⚠️
EULER A - SFW - CURATED - PRUNED - VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2 - Enable quantization in K samplers
(Leg looks more like original)
EULER A - SFW - CURATED - PRUNED - VAE - EMA TRUE - ENSD 31337 - CLIP SKIP 2
(Re-pruned?)
EULER A - SFW - CURATED - PRUNED - VAE - EMA TRUE - ENSD 31337 - CLIP SKIP 2 - Enable quantization in K samplers
(Re-pruned?)
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA TRUE - ENSD NO - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA TRUE - ENSD 31337 - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA FALSE - ENSD NO - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA FALSE - ENSD NO - CLIP SKIP 2 - Enable quantization in K samplers
EULER A - SFW - CURATED - PRUNED - NO VAE - EMA FALSE - ENSD 31337 - CLIP SKIP 2 - Enable quantization in K samplers
Issues: https://rentry.org/sd-issues
Troubleshooting Euler : https://imgur.com/a/DCYJCSX
Troubleshooting Euler a: https://imgur.com/a/s3llTE5
Current State https://rentry.org/sd-tldr
FAQ : https://rentry.org/sdg_FAQ
AUTOMATIC1111 WebUI - NovelAI Emulation Setup
by anons and #2017 participants
EULER - NSFW (Full) working ✔️
EULER - SFW (Curated) - working partially, tweak? what's missing? ✔️🚧
EULER A - NSFW (Full) working ✔️ eta noise seed delta option added.
EULER A - SFW (Curated) - working partially, tweak? what's missing?syntax? ✔️🚧
NAI preset seed: 31337. Sample added.
Brackets in NAI = brackets + backslash in Automatic1111. Sample added.
If there are any brackets, you have to put a backslash before each one to get the same output.
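The backslash rule above can be automated in a couple of lines. A sketch: it escapes every bracket so Automatic1111's prompt parser treats them as literal text instead of attention syntax.

```python
import re

def escape_brackets(prompt: str) -> str:
    """Backslash-escape (), [], {} so Automatic1111 renders them literally."""
    return re.sub(r"([()\[\]{}])", r"\\\1", prompt)

assert escape_brackets("a {b} (c)") == r"a \{b\} \(c\)"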
PLMS - Testing
DDIM - Testing
(New User)
Step 0: https://rentry.org/voldy
(Old User)
Step 1: Back up your stable-diffusion-webui folder and create a new folder (restart from zero; some old pulled repos won't work, and git pull won't fix it in some cases). Copy or git clone it, then git init. Last commit: Oct 9, 2022.
Step 2: Get animefull-final-pruned.ckpt from the stableckpt folder (Model hash: 925997e9; source: novelaileak or novelaileakpt2) and move it to webui/models/Stable-diffusion. Feel free to rename it.
Step 3: Move stableckpt/animevae.pt to webui/models/Stable-diffusion and rename it to be the same name as your NovelAI checkpoint but with .vae.pt (ex: animefull-final-pruned.vae.pt)
Step 4: Move stableckpt/modules/modules/*.pt (anime.pt, anime_2.pt, etc) to webui/models/hypernetworks
Create the directory if it does not exist.
Step 5: Run webui-user.bat; git pull (or have the .bat auto git pull) if you want.
Step 6: Load your weights (ex: final-pruned.ckpt) [925997e9].
Step 7: Set "Ignore last layers of CLIP model" to 2.
After update
Step 8: Test.
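Steps 2–4 above boil down to a small file-placement map. The sketch below only computes destinations and does not touch the filesystem; the hypernetwork list is abbreviated to a single example module.

```python
from pathlib import Path

def placement_plan(webui_dir: str, ckpt_name: str = "animefull-final-pruned") -> dict:
    """Where each leaked file should land, per steps 2-4 above."""
    models = Path(webui_dir) / "models" / "Stable-diffusion"
    hyper = Path(webui_dir) / "models" / "hypernetworks"  # create if missing (step 4)
    return {
        "stableckpt/animefull-final-pruned.ckpt": models / f"{ckpt_name}.ckpt",
        # Step 3: the VAE must reuse the checkpoint's base name, with .vae.pt:
        "stableckpt/animevae.pt": models / f"{ckpt_name}.vae.pt",
        "stableckpt/modules/modules/anime.pt": hyper / "anime.pt",
    }

plan = placement_plan("stable-diffusion-webui")
assert plan["stableckpt/animevae.pt"].name == "animefull-final-pruned.vae.pt"
```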
Prompt by Anonymous 10/08/22(Sat)12:17:07 No.89073763
Novelai prompt:
Automatic prompt:
Prompt:
Add Quality Tags = masterpiece, best quality
Novelai : masterpiece, asuka langley sitting cross legged on a chair
Automatic: masterpiece, best quality, masterpiece, asuka langley sitting cross legged on a chair
Undesired Content = Negative prompt weights
low quality + bad anatomy = low quality, bad anatomy
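Following the equivalence above, the "Undesired Content" dropdown is just a lookup that fills the negative prompt. Only the "low quality + bad anatomy" pairing is stated in this thread, so treat the table shape and the other entries as illustrative.

```python
# Illustrative mapping; only the first entry's wording is confirmed above.
UC_PRESETS = {
    "Low Quality + Bad Anatomy": "low quality, bad anatomy",
    "Low Quality": "low quality",
    "None": "",
}

def negative_prompt(preset: str) -> str:
    """Text the selected preset contributes to the negative prompt."""
    return UC_PRESETS.get(preset, "")

assert negative_prompt("Low Quality + Bad Anatomy") == "low quality, bad anatomy"
```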
Negative prompt examples:
EULER / NSFW (Full) - Full prompt ✔️ :
EULER / NSFW (Full) - Alt prompt ✔️:
EULER / NSFW (Full) - Alt prompt 2 - Mix ✔️ :
EULER / NSFW (Full) - Bad prompt⚠️ :
EULER / NSFW (Full) - Prompt + V2.pt ✔️:
EULER / NSFW (Full) - Mismatch⚠️ : before eta noise seed delta option (ENSD: 31337)
EULER A / NSFW (Full) + ENSD: 31337 ✔️:
EULER A / NSFW (Full) + ENSD: 31337 ✔️: Brackets/NAI = Brackets + Backslash/Automatic1111
"If there's any brackets you gotta put a backslash before each one to get the same output for real"
Prompt by Anonymous 10/10/22(Mon)13:11:48 No.6894050 edited by aiamiauthor
Nai prompt:
Automatic1111 prompt:
EULER / SFW (Curated) ✔️🚧: working partially, tweak? what's missing?
Automatic1111 prompt:
Try 1
EULER A / SFW (Curated) - Mismatch⚠️ : before eta noise seed delta option (ENSD: 31337)
Automatic1111 prompt:
EULER A / SFW (Curated) + ENSD: 31337) ✔️🚧: working partially, tweak? what's missing? syntax?
Automatic1111 prompt:
Try 1
Try 2
Try 3: turning on "Use old emphasis implementation (can be useful to reproduce old seeds)" breaks it.
Syntax, logic, color variations?
Starting point:
Prompt by Anonymous 10/08/22(Sat)12:17:07 No.89073763
Can we actually implement this?
LEFT = original leak, no vae, no hypernetwork, full-pruned
MIDDLE = original leak, vae, no hypernetwork, latest, sd_hijack edits and parser (v2.pt) edits
RIGHT = NovelAI
NO longer needed. DON'T edit any files
sd_model.py to run the full model
vae.pt and .yaml ?
NO longer needed. DON'T edit any files
prompt_parser.py to run v2.pt - mod code by community, OSS license.
https://pastebin.com/5cftmKDm
NO longer needed. DON'T edit any files
sd_hijack.py for the clip-embed.patch - mod code by community, OSS license.
https://pastebin.com/SXzpTfDm