Alpaca-Turbo is a language model chat tool that can be run locally with minimal setup. It is a user-friendly web UI for alpaca.cpp, a LLaMA-based language model, with unique features that set it apart from other implementations. The goal is to provide a seamless chat experience that is easy to configure and use, without sacrificing speed or functionality.
Note: the Docker container currently works on Linux but not on Windows.
Docker must be installed on your system
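Before starting, you can sanity-check that the required tools are on your PATH. This is only a sketch; the `have_tool` helper is an illustration, not part of Alpaca-Turbo:

```shell
# have_tool: succeed if the named command is available on PATH.
# (Hypothetical helper for a quick preflight; not part of Alpaca-Turbo.)
have_tool() {
  command -v "$1" >/dev/null 2>&1
}

# Check the tools the Docker setup below relies on.
for tool in docker docker-compose; do
  if have_tool "$tool"; then
    echo "$tool: found"
  else
    echo "$tool: missing - install it before continuing"
  fi
done
```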
- Download the latest alpaca-turbo.zip from the releases page.
- Extract the contents of the zip file into a directory named alpaca-turbo.
- Copy your alpaca models to the alpaca-turbo/models/ directory.
- Run the following command to set everything up:

      docker-compose up
- Visit http://localhost:5000 to use the chat interface.
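For reference, `docker-compose up` reads the `docker-compose.yml` shipped with the repository. A minimal sketch of what such a file does is below; the service name, build context, and in-container volume path here are assumptions, not the project's actual file:

```yaml
# Sketch only - the repository ships its own docker-compose.yml.
version: "3"
services:
  alpaca-turbo:            # assumed service name
    build: .               # build the image from the repo's Dockerfile
    ports:
      - "5000:5000"        # expose the web UI on http://localhost:5000
    volumes:
      - ./models:/app/models   # assumed in-container models path
```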
- Install Miniconda:
  - Install for all users.
  - Add `c:\ProgramData\miniconda3\condabin` to your PATH environment variable.
- Download the latest alpaca-turbo.zip from the releases page.
- Extract Alpaca-Turbo.zip to a directory named Alpaca-Turbo. Make sure there is enough space for the models in the extracted location.
- Copy your alpaca models to the Alpaca-Turbo/models/ directory.
- Open cmd as Administrator and run:

      conda init
- Close that window.
- Open a new cmd window in your Alpaca-Turbo directory and run:

      conda create -n alpaca_turbo python=3.8 -y
      conda activate alpaca_turbo
      pip install -r requirements.txt
      python api.py
- Visit http://localhost:5000, select your model, click Change, and wait for the model to load.
- You are ready to interact.
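Once the models are copied in, a quick check can confirm they are where the server expects them. `check_models` is a hypothetical helper, and the `.bin` extension is an assumption about how the alpaca.cpp weight files are named:

```shell
# check_models: print the number of *.bin files in a directory.
# (Hypothetical helper; .bin naming is an assumption about alpaca weights.)
check_models() {
  set -- "$1"/*.bin
  if [ -e "$1" ]; then
    echo "$#"
  else
    echo "0"
  fi
}

# Example: count the models copied into the models directory.
check_models models
```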
- ggerganov/llama.cpp for their amazing C++ library
- antimatter15/alpaca.cpp for the initial versions of their chat app
- cocktailpeanut/dalai for the inspiration
- Meta AI for the LLaMA models
- Stanford for the Alpaca models