Is your feature request related to a problem? Please describe:
LoRA is designed to reduce the memory requirements of fine-tuning LLMs and already exists in Fairseq2 to some extent. LoRA-FA reduces the memory overhead of vanilla LoRA even further by freezing the A matrix and leaving only the B matrix trainable.
Describe the solution you would like:
Because LoRA-FA is effectively an extension of LoRA, an additional parameter in the LoRAConfig class could be used to enable or disable LoRA-FA; see the sketch below.
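As a rough illustration of the proposed change, here is a minimal sketch of what the flag could look like. The field name `freeze_a`, the helper `apply_lora_fa`, and the `lora_A`/`lora_B` parameter naming are assumptions made for illustration, not Fairseq2's actual API:

```python
# Hypothetical sketch -- the names `freeze_a`, `apply_lora_fa`, `lora_A`,
# and `lora_B` are illustrative assumptions, not part of fairseq2's current API.
from dataclasses import dataclass

import torch.nn as nn


@dataclass
class LoRAConfig:
    r: int = 8
    alpha: float = 16.0
    dropout_p: float = 0.0
    freeze_a: bool = False  # proposed flag: enable LoRA-FA by freezing the A matrices


def apply_lora_fa(model: nn.Module, config: LoRAConfig) -> None:
    """Freeze every LoRA A matrix so only the B matrices receive gradients."""
    if not config.freeze_a:
        return

    for name, param in model.named_parameters():
        # Assumes wrapped layers expose their low-rank factors as
        # parameters named `lora_A` / `lora_B`.
        if "lora_A" in name:
            param.requires_grad_(False)
```

Since the A matrices are never updated, their optimizer state (e.g. Adam moments) is also never allocated, which is where the additional memory savings over vanilla LoRA come from.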
Describe the alternatives you have considered:
None
Additional Context:
None