Microllm

My own implementation of LLM inference: just the bare basics needed to run models on local hardware.

Currently working:

  • read_gguf.py, which reads GGUF model files. A refactor made it faster and more compact. (See the sketch after this list for the kind of header parsing involved.)
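
read_gguf.py itself isn't reproduced here. As a minimal sketch of where a GGUF reader starts, the snippet below parses the fixed header of a GGUF (v2/v3) file: magic bytes, format version, tensor count, and metadata key/value count. The function name `read_gguf_header` is a hypothetical stand-in, not microllm's actual API:

```python
import struct

GGUF_MAGIC = 0x46554747  # the bytes b"GGUF" read as a little-endian uint32

def read_gguf_header(path):
    """Sketch: read the fixed GGUF (v2/v3) header fields.

    Hypothetical helper for illustration, not microllm's actual code.
    """
    with open(path, "rb") as f:
        # <II: little-endian uint32 magic, uint32 version
        magic, version = struct.unpack("<II", f.read(8))
        if magic != GGUF_MAGIC:
            raise ValueError(f"not a GGUF file (magic=0x{magic:08x})")
        # <QQ: little-endian uint64 tensor count, uint64 metadata KV count
        tensor_count, metadata_kv_count = struct.unpack("<QQ", f.read(16))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": metadata_kv_count,
    }
```

The metadata key/value pairs and tensor descriptors follow this header; parsing those means walking length-prefixed strings and typed values in order.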

TODO: fix token generation so it produces sensible tokens.
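
For reference, the simplest sensible decoding loop is greedy sampling: run the context through the model, take the argmax of the final logits, append that token, and repeat. The sketch below assumes hypothetical `model` and `tokenizer` interfaces and is not microllm's current code:

```python
import numpy as np

def generate_greedy(model, tokenizer, prompt, max_new_tokens=32):
    """Greedy decoding sketch. Assumed interfaces (not microllm's API):
    model(ids) returns a (vocab_size,) array of logits for the next token;
    tokenizer.encode/.decode map between text and token ids."""
    ids = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        logits = model(ids)               # logits over the vocabulary
        next_id = int(np.argmax(logits))  # greedy: most likely next token
        ids.append(next_id)
    return tokenizer.decode(ids)
```

If greedy output degenerates into repetition, the usual next step is sampling from softmax(logits) with a temperature, or restricting to the top-k candidates.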
