# Llamacpp_Python Model Server
The llamacpp_python model server images are based on the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) project, which provides Python bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp). This gives us a Python-based, OpenAI-API-compatible model server that can run LLMs of various sizes locally on Linux, Windows, or macOS.
This model server requires models to be converted from their original format, typically a set of `*.bin` or `*.safetensor` files, into a single GGUF-formatted file. Many models are already available in GGUF format on [huggingface.co](https://huggingface.co). You can also use the [model converter utility](../../convert_models/) available in this repo to convert models yourself.
## Image Options
We currently provide 3 options for the llamacpp_python model server:
* [Base](#base)
* [Cuda](#cuda)
* [Vulkan (experimental)](#vulkan-experimental)
### Base
The [base image](../llamacpp_python/base/Containerfile) is the standard image that works in both arm64 and amd64 environments. However, it does not include any hardware acceleration and runs on CPU only. If you use the base image, make sure that your container runtime has sufficient resources to run the desired model(s).
To build the base model service image:
```bash
make -f Makefile build
```
To pull the base model service image:
```bash
podman pull quay.io/ai-lab/llamacpp_python
```
### Cuda
The [Cuda image](../llamacpp_python/cuda/Containerfile) includes all the extra drivers necessary to run our model server with Nvidia GPUs. This significantly speeds up model response time over CPU-only deployments.
To build the Cuda variant image:
```bash
make -f Makefile build-cuda
```
To pull the Cuda model service image:
```bash
podman pull quay.io/ai-lab/llamacpp_python_cuda
```
**IMPORTANT!**
To run the Cuda image with GPU acceleration, you need to install the correct [Cuda drivers](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#driver-installation) for your system along with the [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#). Please use the links provided to find installation instructions for your system.
Once those are installed, you can use the container toolkit CLI to discover your Nvidia device(s).
```bash
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```
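If you want to confirm that your GPU(s) were discovered, you can list the generated CDI devices (a quick check, assuming the command above completed without errors):
```bash
nvidia-ctk cdi list
```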
Finally, you will also need to add `--device nvidia.com/gpu=all` to your `podman run` command so your container can access the GPU.
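Before starting the model server, you can sanity-check GPU access with a throwaway container; a minimal sketch, assuming the CDI spec was generated as shown above:
```bash
# Should print the GPU(s) visible inside the container
podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi -L
```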
### Vulkan (experimental)
The [Vulkan image](../llamacpp_python/vulkan/Containerfile) is experimental, but it can be used to gain partial GPU access on an M-series Mac, significantly speeding up model response time over a CPU-only deployment. This image requires that your podman machine provider is "applehv" and that you use krunkit instead of vfkit. Since these tools are not currently supported by podman desktop, this image will remain "experimental".
To build the Vulkan model service variant image:
| System Architecture | Command |
|---|---|
| amd64 | `make -f Makefile build-vulkan-amd64` |
| arm64 | `make -f Makefile build-vulkan-arm64` |
To pull the Vulkan model service image:
```bash
podman pull quay.io/ai-lab/llamacpp_python_vulkan
```
## Download Model(s)
There are many models to choose from these days, most of which can be found on [huggingface.co](https://huggingface.co). In order to use a model with the llamacpp_python model server, it must be in GGUF format. You can either download pre-converted GGUF models directly or convert them yourself with the [model converter utility](../../convert_models/) available in this repo.
A well-performing, Apache-2.0 licensed model that we recommend using if you are just getting started is
`granite-7b-lab`. You can use the link below to quickly download a quantized (smaller) GGUF version of this model for use with the llamacpp_python model server.
Download URL: [https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf](https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf)
Place all models in the [models](../../models/) directory.
You can use this snippet below to download the default model:
```bash
make -f Makefile download-model-granite
```
Or you can use the generic `download-model` target from the `/models` directory to download any model file from huggingface.co:
```bash
cd ../../models
make MODEL_NAME=<model_name> MODEL_URL=<model_url> -f Makefile download-model
# EX: make MODEL_NAME=granite-7b-lab-Q4_K_M.gguf MODEL_URL=https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf -f Makefile download-model
```
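If you prefer not to use make, a plain curl download also works; a sketch using the granite URL from above, run from the repository root:
```bash
curl -L -o models/granite-7b-lab-Q4_K_M.gguf \
  https://huggingface.co/instructlab/granite-7b-lab-GGUF/resolve/main/granite-7b-lab-Q4_K_M.gguf
```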
## Deploy Model Service
### Single Model Service:
To deploy the LLM server, you must specify a volume mount `-v` pointing to where your models are stored on the host machine, and the `MODEL_PATH` for your model of choice. The model server is most easily deployed by calling the make command: `make -f Makefile run`. As with all our make calls, you can pass any number of the following variables: `REGISTRY`, `IMAGE_NAME`, `MODEL_NAME`, `MODEL_PATH`, and `PORT`.
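For example, a make-based launch that overrides a couple of those variables might look like this (the values shown are illustrative):
```bash
make -f Makefile run MODEL_NAME=granite-7b-lab-Q4_K_M.gguf PORT=8001
```
To run the container manually instead, invoke `podman run` directly: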
```bash
podman run --rm -it \
  -p 8001:8001 \
  -v Local/path/to/locallm/models:/locallm/models:ro \
  -e MODEL_PATH=models/granite-7b-lab-Q4_K_M.gguf \
  -e HOST=0.0.0.0 \
  -e PORT=8001 \
  -e MODEL_CHAT_FORMAT=openchat \
  llamacpp_python
```
Or with the Cuda image:
```bash
podman run --rm -it \
  --device nvidia.com/gpu=all \
  -p 8001:8001 \
  -v Local/path/to/locallm/models:/locallm/models:ro \
  -e MODEL_PATH=models/granite-7b-lab-Q4_K_M.gguf \
  -e HOST=0.0.0.0 \
  -e PORT=8001 \
  -e MODEL_CHAT_FORMAT=openchat \
  llamacpp_python
```
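Once the server is up, you can exercise its OpenAI-compatible API from the host; a minimal sketch with curl, assuming the server is listening on port 8001 as configured above:
```bash
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```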
### Multiple Model Service:
To enable dynamic loading and unloading of different models present on your machine, you can start the model service with a `CONFIG_PATH` instead of a `MODEL_PATH`.
Here is an example `models_config.json` with two model options.
```json
{
  "host": "0.0.0.0",
  "port": 8001,
  "models": [
    {
      "model": "models/granite-7b-lab-Q4_K_M.gguf",
      "model_alias": "granite",
      "chat_format": "openchat"
    },
    {
      "model": "models/merlinite-7b-lab-Q4_K_M.gguf",
      "model_alias": "merlinite",
      "chat_format": "openchat"
    }
  ]
}
```
Now run the container with the specified config file.
```bash
podman run --rm -it -d \
-p 8001:8001 \
-v Local/path/to/locallm/models:/locallm/models:ro \
-e CONFIG_PATH=models/<config-filename> \
llamacpp_python
```
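With multiple models configured, you can list the registered model aliases and address a specific one in your requests; a sketch, assuming the `models_config.json` shown above:
```bash
# List the models the service knows about (aliases come from the config file)
curl http://localhost:8001/v1/models

# Route a request to a specific model by its alias
curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "granite", "messages": [{"role": "user", "content": "Hello"}]}'
```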
### DEV environment
The development environment is implemented with devcontainer technology.
To run the tests:
```bash
make -f Makefile test
```