kth8/llama-server-vulkan


The llama.cpp server and the Llama 3.2 3B model bundled together in a Docker image, compiled with Vulkan support and without AVX so it can run on old hardware. Tested on an i3-3220 CPU with an RX 470 GPU.

docker run -d --device /dev/kfd --device /dev/dri --init -p 8001:8080 ghcr.io/kth8/llama-server-vulkan
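The --device flags expose the host GPU nodes to the container (/dev/dri for Vulkan, /dev/kfd for AMD compute), and -p maps the server's internal port 8080 to 8001 on the host. If you want to watch the model load, running a named container makes the logs easy to follow; the name llama-vulkan below is just an illustration:

docker run -d --name llama-vulkan --device /dev/kfd --device /dev/dri --init -p 8001:8080 ghcr.io/kth8/llama-server-vulkan
docker logs -f llama-vulkan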

Verify that the server is running by opening http://127.0.0.1:8001 in your web browser or from the terminal:

curl http://127.0.0.1:8001/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Hello"}]}'
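The server exposes an OpenAI-compatible chat completions endpoint, so the usual sampling fields should also be accepted; the prompt and parameter values below are only illustrative:

curl http://127.0.0.1:8001/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Write one sentence about Vulkan"}],"temperature":0.7,"max_tokens":128}'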
