
Add LoRA-FA support #859

Open

PeanutButterRat opened this issue Nov 25, 2024 · 0 comments · May be fixed by #860
Labels
enhancement New feature or request

Comments

@PeanutButterRat
Contributor

PeanutButterRat commented Nov 25, 2024

Is your feature request related to a problem? Please describe:
LoRA is designed to reduce the memory required to fine-tune LLMs and is already supported in fairseq2 to some extent. LoRA-FA reduces the memory overhead of vanilla LoRA even further by freezing the A matrix and training only the B matrix, so A needs no gradient computation or optimizer state.
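
For reference, here is a minimal sketch of the LoRA-FA idea in plain PyTorch (an illustration of the technique only, not fairseq2's actual LoRA code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAFALinear(nn.Module):
    """Linear layer with a LoRA-FA update: W x + (alpha / r) * B (A x)."""

    def __init__(self, in_features: int, out_features: int, r: int, alpha: float = 1.0):
        super().__init__()
        # Stands in for the frozen pretrained weight.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # A is randomly initialized and then frozen (the "FA" part), so it
        # accumulates no gradients or optimizer state during fine-tuning.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02, requires_grad=False)
        # B starts at zero (the update is initially a no-op) and is the only
        # trainable parameter.
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight) + self.scaling * F.linear(
            F.linear(x, self.lora_A), self.lora_B
        )
```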

Describe the solution you would like:
Because LoRA-FA is effectively an extension of LoRA, there should be an additional parameter in the LoRAConfig class to enable/disable LoRA-FA.
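
Roughly, the change could look like the sketch below (the field names, including the new flag, are hypothetical placeholders rather than fairseq2's exact API):

```python
from dataclasses import dataclass

import torch.nn as nn


@dataclass
class LoRAConfig:
    r: int = 8
    alpha: float = 16.0
    dropout_p: float = 0.0
    # Hypothetical new field: when True, freeze lora_A so only lora_B trains.
    freeze_a: bool = False


def maybe_freeze_a(lora_layer: nn.Module, config: LoRAConfig) -> None:
    # After wrapping a module with LoRA, disable gradients for A when
    # LoRA-FA is requested; B stays trainable.
    if config.freeze_a:
        lora_layer.lora_A.requires_grad_(False)
```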

Describe the alternatives you have considered:
None

Additional Context:
None

@PeanutButterRat PeanutButterRat added the enhancement New feature or request label Nov 25, 2024
@PeanutButterRat PeanutButterRat linked a pull request Nov 25, 2024 that will close this issue