The syntax `<lora:loraname:unet weight>` is already supported, but it would be useful to also have the variant from a1111/forge, `<lora:loraname:unet weight:text encoder weight>`, for people who train LoRAs and test them in SwarmUI. Being able to set the two weights separately could help pinpoint whether a problem with a LoRA is related to inappropriate learning rates. This is not game-changing, so I can wait until it gets implemented, but it would be amazing to have.
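As a rough sketch of what parsing the extended tag could look like (the regex, function name, and the fallback of reusing the UNet weight when no text-encoder weight is given are all illustrative assumptions here, not SwarmUI's actual implementation):

```python
import re

# Matches <lora:name>, <lora:name:unet>, or <lora:name:unet:tenc>.
# Illustrative only -- not SwarmUI's real parser.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::(-?[\d.]+))?(?::(-?[\d.]+))?>")

def parse_lora_tag(tag):
    """Return (name, unet_weight, tenc_weight) or None if the tag is malformed."""
    m = LORA_TAG.fullmatch(tag)
    if not m:
        return None
    name, unet, tenc = m.groups()
    unet_weight = float(unet) if unet else 1.0
    # Assumption: when no text-encoder weight is given, fall back to the
    # UNet weight, mirroring how a1111/forge treats a two-part tag.
    tenc_weight = float(tenc) if tenc else unet_weight
    return name, unet_weight, tenc_weight
```

For example, `parse_lora_tag("<lora:mylora:0.8:0.5>")` would yield separate UNet and text-encoder weights, while `parse_lora_tag("<lora:mylora:0.8>")` would apply 0.8 to both, keeping the existing two-part syntax backward compatible.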