Fine-tuning LLMs for free in Google Colab

AI should be accessible to all, GPU-poor or rich. This repo contains well-explained code for fine-tuning smaller LLMs, such as Qwen1.5-0.5B and Mistral 7B, in a free Google Colab notebook.
Everything here can be replicated on a free Google Colab instance with a T4 GPU.
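The general recipe for fitting a fine-tune into a T4's 16 GB of VRAM is parameter-efficient training: freeze the base model and train small low-rank adapters on top of it. Below is a minimal sketch of that approach, assuming the Hugging Face transformers, peft, and datasets libraries; the model name, LoRA settings, and the tiny public dataset are illustrative placeholders, not necessarily what the notebooks in this repo use.

```python
# Hypothetical LoRA fine-tuning sketch for a small model on a free T4 GPU.
# Assumes: transformers, peft, datasets are installed (pip install transformers peft datasets).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen1.5-0.5B"  # small enough to train comfortably on a T4
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters so only a small fraction of weights are trained.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Any small text dataset works; this public one is used purely for illustration.
dataset = load_dataset("Abirate/english_quotes", split="train")
def tokenize(batch):
    return tokenizer(batch["quote"], truncation=True, max_length=256)
dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=4,
                           gradient_accumulation_steps=4,
                           num_train_epochs=1,
                           fp16=True,           # mixed precision keeps memory low on the T4
                           logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves only the small adapter weights
```

For a 7B model such as Mistral 7B, the same pattern applies, but the base weights would additionally need to be loaded in 4-bit (QLoRA-style) to fit the T4's memory.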

Contributing

Contributions to this repository are welcome. If you find any issues or have suggestions for improvements, please open an issue or submit a pull request.

License

This project is licensed under the Apache 2.0 License.
