QLoRA: Efficient Finetuning of Quantized LLMs (Fork)

| Paper | Adapter Weights | Demo |

This repo supports the paper "QLoRA: Efficient Finetuning of Quantized LLMs", an effort to democratize access to LLM research.

QLoRA uses bitsandbytes for quantization and is integrated with Hugging Face's PEFT and transformers libraries. QLoRA was developed by members of the University of Washington's UW NLP group.
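For illustration, a minimal sketch of the QLoRA recipe with those libraries: load the base model in 4-bit NF4 via bitsandbytes, then attach LoRA adapters with PEFT. This is not the repo's qlora.py (which has its own arguments and defaults); the model name and LoRA hyperparameters below are placeholder choices.

```python
# Illustrative QLoRA-style setup: 4-bit NF4 base model + LoRA adapters.
# Placeholder model and hyperparameters; not the repo's actual training script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantization from the paper
    bnb_4bit_use_double_quant=True,       # double quantization of the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # illustrative subset; the paper adapts all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```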

New Features

  • Training on raw, unstructured text rather than input-output pairs (dataset_format=rawtext)
  • Early stopping once training loss reaches a target value (e.g. stop_at_loss=1.4); see the sketch below
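
The stop_at_loss behaviour can be pictured as a Hugging Face Trainer callback that ends training once the logged loss falls below the threshold. The sketch below is an assumption about the mechanism, shown for illustration only; the class name and wiring are hypothetical, not this fork's actual implementation.

```python
# Illustrative only: one way a stop_at_loss threshold could be implemented
# with a transformers TrainerCallback. Not the fork's actual code.
from transformers import TrainerCallback

class StopAtLossCallback(TrainerCallback):
    """Stop training once the reported training loss drops below a threshold."""

    def __init__(self, stop_at_loss: float = 1.4):
        self.stop_at_loss = stop_at_loss

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` carries the running training loss under the "loss" key.
        if logs is not None and logs.get("loss", float("inf")) <= self.stop_at_loss:
            control.should_training_stop = True
        return control
```

Such a callback would be registered with `trainer.add_callback(StopAtLossCallback(stop_at_loss=1.4))` before calling `trainer.train()`.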

Why forked

  • I am new to Python
  • Experimenting

Acknowledgements

This is a fork, so all credit goes to the original project, artidoro/qlora.
