QLoRA: Efficient Finetuning of Quantized LLMs (Fork)
| Paper | Adapter Weights | Demo |
This repo supports the paper "QLoRA: Efficient Finetuning of Quantized LLMs", an effort to democratize access to LLM research.
QLoRA uses bitsandbytes for quantization and is integrated with Hugging Face's PEFT and transformers libraries. QLoRA was developed by members of the University of Washington's UW NLP group.
Changes in this fork:
- Training on raw text rather than input-output pairs (dataset_format=rawtext); see the sketch below.
- Stopping training once the loss reaches a target value (stop_at_loss=1.4); see the callback sketch below.
- Note: I am new to Python, and this fork is experimental.
This is a fork, so shoutout to the original author, artidoro/qlora.
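One plausible reading of dataset_format=rawtext is that each example is a single text field tokenized whole, with the loss computed over every token instead of masking out a prompt. Here is a minimal sketch under that assumption; the function and field names (format_rawtext, example["text"]) are illustrative, not the fork's actual code:

```python
def format_rawtext(example, tokenizer, max_length=512):
    # Tokenize the raw text as-is; there is no prompt/response split,
    # so the labels cover all tokens (assumed behaviour of rawtext mode).
    tokens = tokenizer(example["text"], truncation=True, max_length=max_length)
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens
```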
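The stop_at_loss option could be implemented with a Hugging Face transformers TrainerCallback. The sketch below is an assumption about how it might work, not the fork's implementation; the class name and default threshold are made up for illustration:

```python
from transformers import TrainerCallback

class StopAtLossCallback(TrainerCallback):
    """Stop training once the logged training loss drops to a target value.

    Hypothetical sketch of stop_at_loss=1.4; not the fork's actual code.
    """

    def __init__(self, stop_at_loss: float = 1.4):
        self.stop_at_loss = stop_at_loss

    def on_log(self, args, state, control, logs=None, **kwargs):
        # The Trainer reports the running training loss under the "loss" key.
        if logs is not None and logs.get("loss") is not None:
            if logs["loss"] <= self.stop_at_loss:
                control.should_training_stop = True
        return control
```

Such a callback would be registered with `trainer.add_callback(StopAtLossCallback(1.4))` before calling `trainer.train()`.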