only use lora on unet or pixart? #7

Open
CS123n opened this issue Mar 26, 2024 · 1 comment

CS123n commented Mar 26, 2024

Hey, do you have any results from fine-tuning only the U-Net part (via LoRA) while keeping T5 or Llama fixed?

ShihaoZhaoZSH (Owner) commented

We have not yet conducted the experiment you mentioned. However, we can provide some insights:

  1. In our paper, specifically Section 4.4 (Experiments, Ablation Study), we trained LaVi-Bridge by fine-tuning only the adapter, without LoRA. This is close to your suggestion, with the difference that LoRA was disabled in both the U-Net and the LLM. Even when training only the adapter without injecting LoRA, we achieved reasonable results, although performance did decline.

  2. Additionally, at inference time you can disable LoRA in the U-Net or the LLM even when the model was trained with both LoRA and the adapter. This can be done by simply commenting out the call to the monkeypatch_or_replace_lora_extended function in the test scripts, as sketched below. Note that performance is negatively affected in this case as well.
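For concreteness, here is a minimal sketch of how such a switch could look in a lora_diffusion-based test script. The wrapper function, checkpoint paths, target module names, and rank below are illustrative assumptions, not the repository's exact code:

```python
import torch
from lora_diffusion import monkeypatch_or_replace_lora_extended

def maybe_inject_lora(model, lora_path, target_modules, rank=32, enable=True):
    """Inject trained LoRA weights into `model` unless disabled.

    With enable=False the model keeps its base weights, which reproduces
    the "no LoRA at inference" setting described in point 2.
    """
    if enable:
        monkeypatch_or_replace_lora_extended(
            model,                                  # U-Net / PixArt or the LLM
            torch.load(lora_path),                  # hypothetical checkpoint path
            target_replace_module=target_modules,   # assumed module class names
            r=rank,                                 # assumed LoRA rank
        )

# e.g. run the U-Net without LoRA while keeping it on the LLM:
# maybe_inject_lora(unet, "lora_unet.pt", {"Attention"}, enable=False)
# maybe_inject_lora(text_encoder, "lora_text.pt", {"T5Attention"}, enable=True)
```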
