QLoRA or LoRA implementation #50
Comments
We plan to host another repository to automate importing models. Currently we are working on burn-import to support more ONNX ops. We plan to start with a few popular examples and gradually move on to other pre-trained models. The idea is that once you import your model, you can, in addition to inference, finetune it or do additional complex training. That's the whole point of burn-import. At the beginning we will support ONNX, with other data formats to follow. If you wish to contribute to this effort, we will happily assist you. burn-import has been cleaned up and is ready for additional contributions. @nathanielsimard, if you have anything to add, please let us know.
Hi, just curious if there's any news, since the last update was a while ago. Thank you!
CCing @laggui
The plan was to add those after Llama was added, but some plans changed since we introduced CubeCL 🙂 I think it's safe to say that there will be a positive update sometime in ~3 months regarding this 👀 Stay tuned.
Feature description
For finetuning existing text generation models, LoRA and QLoRA are widely used. Can we create pipelines to download models from Hugging Face and then finetune them using LoRA or QLoRA?
Feature motivation
For text generation models, finetuning is preferred over training from scratch. LoRA and QLoRA reduce the time required for finetuning by an order of magnitude, since only small low-rank adapter matrices are trained while the pretrained weights stay frozen.
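To make the low-rank idea concrete, here is a minimal sketch of a LoRA forward pass on a single linear layer in plain Rust. All names (`lora_forward`, `matvec`) and the tiny dimensions are illustrative assumptions for this sketch; Burn's actual module API is not used, so the example stays self-contained.

```rust
/// Dense layer output: y = W x (bias omitted for brevity).
fn matvec(w: &[Vec<f32>], x: &[f32]) -> Vec<f32> {
    w.iter()
        .map(|row| row.iter().zip(x).map(|(a, b)| a * b).sum())
        .collect()
}

/// LoRA forward pass: y = W x + (alpha / r) * B (A x).
/// W is frozen; only A (r x d_in) and B (d_out x r) are trained.
fn lora_forward(
    w: &[Vec<f32>], // frozen pretrained weight, d_out x d_in
    a: &[Vec<f32>], // low-rank down-projection, r x d_in
    b: &[Vec<f32>], // low-rank up-projection, d_out x r
    alpha: f32,
    x: &[f32],
) -> Vec<f32> {
    let r = a.len() as f32;
    let ax = matvec(a, x);    // r-dimensional intermediate
    let bax = matvec(b, &ax); // back to d_out
    matvec(w, x)
        .iter()
        .zip(&bax)
        .map(|(y, d)| y + (alpha / r) * d)
        .collect()
}

fn main() {
    // d_in = 2, d_out = 2, rank r = 1
    let w = vec![vec![1.0, 0.0], vec![0.0, 1.0]]; // frozen identity weight
    let a = vec![vec![1.0, 1.0]];                 // 1 x 2
    let b = vec![vec![0.5], vec![0.5]];           // 2 x 1
    let x = vec![2.0, 3.0];
    let y = lora_forward(&w, &a, &b, 1.0, &x);
    // Ax = [5.0]; B(Ax) = [2.5, 2.5]; y = [2.0 + 2.5, 3.0 + 2.5]
    println!("{:?}", y); // [4.5, 5.5]
}
```

The speedup comes from the parameter count: for a `d_out x d_in` layer, training touches only `r * (d_in + d_out)` adapter values instead of `d_out * d_in`. QLoRA applies the same trick on top of a quantized frozen `W`.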