Issues: kohya-ss/sd-scripts
- #1937 · Bug❓: --cache_latents prevents regular checkpoint saving in train_network.py 😥 · opened Feb 15, 2025 by Jackiiiii
- #1928 · Can the algorithm for automatic scaling of ARB buckets be specified? · opened Feb 12, 2025 by TLFZ1
- #1923 · Why have the options to define the single and double blocks_to_swap been removed? · opened Feb 7, 2025 by Deathawaits4
- #1920 · FLUX-Fill-LoRa-Training and Clip-L training when doing Fine Tuning / DreamBooth with FLUX Dev · opened Feb 5, 2025 by FurkanGozukara
- #1919 · flux_minimal_inference: CUDA Out of Memory with LORA and negative_prompt · opened Feb 3, 2025 by mamawr
- #1915 · Hardcoded bucket_step overrides configurable parameter · [bug: Something isn't working] · opened Feb 1, 2025 by iqddd
- #1914 · Problems with output files when using the full_fp16 option in LoRA training · opened Jan 31, 2025 by ytoaa
- #1905 · Shouldn't fix_noise_scheduler_betas_for_zero_terminal_snr be applied BEFORE prepare_scheduler_for_custom_training? · [help wanted: Extra attention is needed] · opened Jan 27, 2025 by 67372a
- #1902 · Flux training, no matter the settings, fails with: returned non-zero exit status 3221225477 · opened Jan 27, 2025 by alexgilseg
- #1896 · Does the full fine-tuning in SD3.5 have a setting similar to --guidance_scale? · opened Jan 24, 2025 by raindrop313
- #1892 · LoRA extracted from a DreamBooth-trained model has poor compatibility with ControlNet · opened Jan 23, 2025 by Nomination-NRB