Revert "Bump llama-cpp-python to 0.2.18 (#4611)"
This reverts commit 923c8e2.
oobabooga committed Nov 17, 2023
1 parent e0a7cc5 commit 9d6f79d
Showing 17 changed files with 174 additions and 92 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -325,6 +325,7 @@ Optionally, you can use the following command-line flags:
| `--mlock` | Force the system to keep the model in RAM. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. |
| `--tensor_split TENSOR_SPLIT` | Split the model across multiple GPUs. Comma-separated list of proportions. Example: 18,17. |
+ | `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default is 0 (random). |
| `--numa` | Activate NUMA task allocation for llama.cpp. |
| `--logits_all`| Needs to be set for perplexity evaluation to work. Otherwise, ignore it, as it makes prompt processing slower. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity (llama-cpp-python). Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
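For reference, `--cache-capacity` accepts a size with `MiB`/`GiB` units or a plain byte count, as described in the restored table row. A rough standalone sketch of that parsing; the function name and exact unit rounding are illustrative, not the project's API:

```python
import re
from typing import Optional


def parse_cache_capacity(value: Optional[str]) -> int:
    """Illustrative parser for --cache-capacity values such as '2000MiB' or '2GiB'."""
    if value is None:
        return 0
    if 'GiB' in value:
        return int(re.sub('[a-zA-Z]', '', value)) * 1024 ** 3
    if 'MiB' in value:
        return int(re.sub('[a-zA-Z]', '', value)) * 1024 ** 2
    return int(value)  # no unit given: bytes are assumed


print(parse_cache_capacity('2GiB'))    # 2147483648
print(parse_cache_capacity('500000'))  # 500000
```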
3 changes: 2 additions & 1 deletion docs/04 - Model Tab.md
@@ -21,7 +21,7 @@ Options:
* **alpha_value**: Used to extend the context length of a model with a minor loss in quality. I have measured 1.75 to be optimal for 1.5x context, and 2.5 for 2x context. That is, with alpha = 2.5 you can make a model with 4096 context length go to 8192 context length.
* **rope_freq_base**: Originally another way to write "alpha_value", it ended up becoming a necessary parameter for some models like CodeLlama, which was fine-tuned with this set to 1000000 and hence needs to be loaded with it set to 1000000 as well.
* **compress_pos_emb**: The first and original context-length extension method, discovered by [kaiokendev](https://kaiokendev.github.io/til). When set to 2, the context length is doubled, 3 and it's tripled, etc. It should only be used for models that have been fine-tuned with this parameter set to different than 1. For models that have not been tuned to have greater context length, alpha_value will lead to a smaller accuracy loss.
- * **cpu**: Loads the model in CPU mode using Pytorch. The model will be loaded in 32-bit precision, so a lot of RAM will be used. CPU inference with transformers is older than llama.cpp and it works, but it's a lot slower.
+ * **cpu**: Loads the model in CPU mode using Pytorch. The model will be loaded in 32-bit precision, so a lot of RAM will be used. CPU inference with transformers is older than llama.cpp and it works, but it's a lot slower. Note: this parameter has a different interpretation in the llama.cpp loader (see below).
* **load-in-8bit**: Load the model in 8-bit precision using bitsandbytes. The 8-bit kernel in that library has been optimized for training and not inference, so load-in-8bit is slower than load-in-4bit (but more accurate).
* **bf16**: Use bfloat16 precision instead of float16 (the default). Only applies when quantization is not used.
* **auto-devices**: When checked, the backend will try to guess a reasonable value for "gpu-memory" to allow you to load a model with CPU offloading. I recommend just setting "gpu-memory" manually instead. This parameter is also needed for loading GPTQ models, in which case it needs to be checked before loading the model.
@@ -97,6 +97,7 @@ Example: https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF
* **no-mmap**: Loads the model into memory at once, possibly preventing I/O operations later on at the cost of a longer load time.
* **mlock**: Force the system to keep the model in RAM rather than swapping or compressing (no idea what this means, never used it).
* **numa**: May improve performance on certain multi-cpu systems.
+ * **cpu**: Force a version of llama.cpp compiled without GPU acceleration to be used. Can usually be ignored. Only set this if you want to use CPU only and llama.cpp doesn't work otherwise.
* **tensor_split**: For multi-gpu only. Sets the amount of memory to allocate per GPU.
* **Seed**: The seed for the llama.cpp random number generator. Not very useful as it can only be set once (that I'm aware).

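As background for the alpha_value / rope_freq_base relationship described above, the two are interchangeable up to a power law. A hedged sketch of that conversion; the 64/63 exponent matches the commonly used NTK-aware RoPE scaling formula for Llama-family head dimensions, and the function name is illustrative:

```python
def rope_freq_base_from_alpha(alpha_value: float, rope_freq_base: float = 0) -> float:
    # An explicitly supplied rope_freq_base (e.g. 1000000 for CodeLlama) takes
    # precedence; otherwise derive it from alpha_value on top of the default
    # base of 10000 used by Llama-family models.
    if rope_freq_base > 0:
        return rope_freq_base
    return 10000 * alpha_value ** (64 / 63)


print(rope_freq_base_from_alpha(2.5))         # ~25000, i.e. roughly 2x context
print(rope_freq_base_from_alpha(0, 1000000))  # 1000000 (CodeLlama-style explicit base)
```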
35 changes: 27 additions & 8 deletions modules/llamacpp_hf.py
@@ -2,7 +2,6 @@
from pathlib import Path
from typing import Any, Dict, Optional, Union

- import llama_cpp
import torch
from torch.nn import CrossEntropyLoss
from transformers import GenerationConfig, PretrainedConfig, PreTrainedModel
@@ -11,6 +10,23 @@
from modules import RoPE, shared
from modules.logging_colors import logger

+ try:
+ import llama_cpp
+ except:
+ llama_cpp = None
+
+ try:
+ import llama_cpp_cuda
+ except:
+ llama_cpp_cuda = None
+
+
+ def llama_cpp_lib():
+ if (shared.args.cpu and llama_cpp is not None) or llama_cpp_cuda is None:
+ return llama_cpp
+ else:
+ return llama_cpp_cuda


class LlamacppHF(PreTrainedModel):
def __init__(self, model, path):
@@ -23,7 +39,7 @@ def __init__(self, model, path):
'n_tokens': self.model.n_tokens,
'input_ids': self.model.input_ids,
'scores': self.model.scores,
- 'ctx': self.model._ctx.ctx
+ 'ctx': self.model.ctx
}

if shared.args.cfg_cache:
@@ -32,7 +48,7 @@ def __init__(self, model, path):
'n_tokens': self.model.n_tokens,
'input_ids': self.model.input_ids.copy(),
'scores': self.model.scores.copy(),
- 'ctx': llama_cpp.llama_new_context_with_model(model.model, model.context_params)
+ 'ctx': llama_cpp_lib().llama_new_context_with_model(model.model, model.context_params)
}

def _validate_model_class(self):
@@ -49,28 +65,28 @@ def save_cache(self):
'n_tokens': self.model.n_tokens,
'input_ids': self.model.input_ids,
'scores': self.model.scores,
- 'ctx': self.model._ctx.ctx
+ 'ctx': self.model.ctx
})

def save_negative_cache(self):
self.llamacpp_cache_negative.update({
'n_tokens': self.model.n_tokens,
'input_ids': self.model.input_ids,
'scores': self.model.scores,
- 'ctx': self.model._ctx.ctx
+ 'ctx': self.model.ctx
})

def load_cache(self):
self.model.n_tokens = self.llamacpp_cache['n_tokens']
self.model.input_ids = self.llamacpp_cache['input_ids']
self.model.scores = self.llamacpp_cache['scores']
- self.model._ctx.ctx = self.llamacpp_cache['ctx']
+ self.model.ctx = self.llamacpp_cache['ctx']

def load_negative_cache(self):
self.model.n_tokens = self.llamacpp_cache_negative['n_tokens']
self.model.input_ids = self.llamacpp_cache_negative['input_ids']
self.model.scores = self.llamacpp_cache_negative['scores']
- self.model._ctx.ctx = self.llamacpp_cache_negative['ctx']
+ self.model.ctx = self.llamacpp_cache_negative['ctx']

@property
def device(self) -> torch.device:
@@ -176,6 +192,7 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
params = {
'model_path': str(model_file),
'n_ctx': shared.args.n_ctx,
+ 'seed': int(shared.args.llama_cpp_seed),
'n_threads': shared.args.threads or None,
'n_threads_batch': shared.args.threads_batch or None,
'n_batch': shared.args.n_batch,
@@ -190,5 +207,7 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
'logits_all': shared.args.logits_all,
}

- model = llama_cpp.Llama(**params)
+ Llama = llama_cpp_lib().Llama
+ model = Llama(**params)

return LlamacppHF(model, model_file)
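The re-added llama_cpp_lib() helper is the core of this revert: both the CPU-only wheel (llama_cpp) and the CUDA wheel (llama_cpp_cuda) may be installed side by side, and the cpu flag decides which one is used, falling back to the CPU build when no CUDA wheel exists. A standalone sketch of the same selection logic, with the import results reduced to booleans (names here are illustrative):

```python
def resolve_backend(cpu_flag: bool, have_cpu_wheel: bool, have_cuda_wheel: bool) -> str:
    """Mirror of the llama_cpp_lib() branch above, for illustration only."""
    if (cpu_flag and have_cpu_wheel) or not have_cuda_wheel:
        return 'llama_cpp'        # CPU-only build
    return 'llama_cpp_cuda'       # CUDA-accelerated build


assert resolve_backend(False, True, True) == 'llama_cpp_cuda'   # default: prefer the CUDA wheel
assert resolve_backend(True, True, True) == 'llama_cpp'         # cpu flag forces the CPU wheel
assert resolve_backend(False, True, False) == 'llama_cpp'       # no CUDA wheel installed
```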
40 changes: 30 additions & 10 deletions modules/llamacpp_model.py
@@ -1,7 +1,6 @@
import re
from functools import partial

- import llama_cpp
import numpy as np
import torch

@@ -10,6 +9,23 @@
from modules.logging_colors import logger
from modules.text_generation import get_max_prompt_length

+ try:
+ import llama_cpp
+ except:
+ llama_cpp = None
+
+ try:
+ import llama_cpp_cuda
+ except:
+ llama_cpp_cuda = None
+
+
+ def llama_cpp_lib():
+ if (shared.args.cpu and llama_cpp is not None) or llama_cpp_cuda is None:
+ return llama_cpp
+ else:
+ return llama_cpp_cuda


def ban_eos_logits_processor(eos_token, input_ids, logits):
logits[eos_token] = -float('inf')
@@ -34,6 +50,10 @@ def __del__(self):

@classmethod
def from_pretrained(self, path):

+ Llama = llama_cpp_lib().Llama
+ LlamaCache = llama_cpp_lib().LlamaCache

result = self()
cache_capacity = 0
if shared.args.cache_capacity is not None:
@@ -54,6 +74,7 @@ def from_pretrained(self, path):
params = {
'model_path': str(path),
'n_ctx': shared.args.n_ctx,
+ 'seed': int(shared.args.llama_cpp_seed),
'n_threads': shared.args.threads or None,
'n_threads_batch': shared.args.threads_batch or None,
'n_batch': shared.args.n_batch,
@@ -67,9 +88,9 @@ def from_pretrained(self, path):
'rope_freq_scale': 1.0 / shared.args.compress_pos_emb,
}

- result.model = llama_cpp.Llama(**params)
+ result.model = Llama(**params)
if cache_capacity > 0:
- result.model.set_cache(llama_cpp.LlamaCache(capacity_bytes=cache_capacity))
+ result.model.set_cache(LlamaCache(capacity_bytes=cache_capacity))

# This is ugly, but the model and the tokenizer are the same object in this library.
return result, result
@@ -93,13 +114,13 @@ def load_grammar(self, string):
if string != self.grammar_string:
self.grammar_string = string
if string.strip() != '':
- self.grammar = llama_cpp.LlamaGrammar.from_string(string)
+ self.grammar = llama_cpp_lib().LlamaGrammar.from_string(string)
else:
self.grammar = None

def generate(self, prompt, state, callback=None):

- LogitsProcessorList = llama_cpp.LogitsProcessorList
+ LogitsProcessorList = llama_cpp_lib().LogitsProcessorList

prompt = prompt if type(prompt) is str else prompt.decode()

@@ -123,16 +144,15 @@ def generate(self, prompt, state, callback=None):
max_tokens=state['max_new_tokens'],
temperature=state['temperature'],
top_p=state['top_p'],
- frequency_penalty=state['frequency_penalty'],
- presence_penalty=state['presence_penalty'],
- repeat_penalty=state['repetition_penalty'],
top_k=state['top_k'],
- stream=True,
- seed=int(state['seed']) if state['seed'] != -1 else None,
+ repeat_penalty=state['repetition_penalty'],
+ presence_penalty=state['presence_penalty'],
+ frequency_penalty=state['frequency_penalty'],
tfs_z=state['tfs'],
mirostat_mode=int(state['mirostat_mode']),
mirostat_tau=state['mirostat_tau'],
mirostat_eta=state['mirostat_eta'],
+ stream=True,
logits_processor=logit_processors,
grammar=self.grammar
)
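For context, a hedged usage sketch of the code path this file restores: build a llama-cpp-python Llama with the re-added seed constructor argument, optionally attach a LlamaCache, and stream a completion. The model path is a placeholder and the keyword set is trimmed down from the full params dict above:

```python
import llama_cpp  # or llama_cpp_cuda, as chosen by llama_cpp_lib()

model = llama_cpp.Llama(
    model_path='models/llama-2-7b-chat.Q4_K_M.gguf',  # placeholder path
    n_ctx=2048,
    seed=0,            # documented by the --llama_cpp_seed flag above as "random"
    n_gpu_layers=0,
)
model.set_cache(llama_cpp.LlamaCache(capacity_bytes=2 * 1024 ** 3))  # optional ~2 GiB prompt cache

for chunk in model.create_completion('Q: What is a GGUF file? A:',
                                     max_tokens=64, temperature=0.7, stream=True):
    print(chunk['choices'][0]['text'], end='', flush=True)
```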
4 changes: 3 additions & 1 deletion modules/loaders.py
@@ -99,9 +99,11 @@
'no_mmap',
'mlock',
'no_mul_mat_q',
+ 'llama_cpp_seed',
'alpha_value',
'rope_freq_base',
'compress_pos_emb',
+ 'cpu',
'numa',
],
'llamacpp_HF': [
@@ -117,6 +119,7 @@
'alpha_value',
'rope_freq_base',
'compress_pos_emb',
+ 'cpu',
'numa',
'cfg_cache',
'no_use_fast',
@@ -363,7 +366,6 @@
'repetition_penalty',
'presence_penalty',
'frequency_penalty',
- 'seed',
'mirostat_mode',
'mirostat_tau',
'mirostat_eta',
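These per-loader parameter lists are what the UI consults to decide which widgets to show for each loader, so adding 'llama_cpp_seed' and 'cpu' back to the llama.cpp entries is what makes those controls reappear. A toy illustration of that lookup; the dictionary content below is abbreviated and hypothetical, not the project's actual mapping:

```python
# Hypothetical, abbreviated loader -> visible-parameter mapping.
loaders_and_params = {
    'llama.cpp': ['n_gpu_layers', 'n_ctx', 'llama_cpp_seed', 'cpu', 'numa'],
    'llamacpp_HF': ['n_gpu_layers', 'n_ctx', 'cpu', 'numa', 'cfg_cache'],
}


def visible_params(loader: str) -> set:
    return set(loaders_and_params.get(loader, []))


print('llama_cpp_seed' in visible_params('llama.cpp'))    # True
print('llama_cpp_seed' in visible_params('llamacpp_HF'))  # False
```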
2 changes: 1 addition & 1 deletion modules/shared.py
@@ -112,6 +112,7 @@
parser.add_argument('--mlock', action='store_true', help='Force the system to keep the model in RAM.')
parser.add_argument('--n-gpu-layers', type=int, default=0, help='Number of layers to offload to the GPU.')
parser.add_argument('--tensor_split', type=str, default=None, help='Split the model across multiple GPUs. Comma-separated list of proportions. Example: 18,17.')
+ parser.add_argument('--llama_cpp_seed', type=int, default=0, help='Seed for llama-cpp models. Default is 0 (random).')
parser.add_argument('--numa', action='store_true', help='Activate NUMA task allocation for llama.cpp.')
parser.add_argument('--logits_all', action='store_true', help='Needs to be set for perplexity evaluation to work. Otherwise, ignore it, as it makes prompt processing slower.')
parser.add_argument('--cache-capacity', type=str, help='Maximum cache capacity (llama-cpp-python). Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed.')
@@ -181,7 +182,6 @@
parser.add_argument('--mul_mat_q', action='store_true', help='DEPRECATED')
parser.add_argument('--api-blocking-port', type=int, default=5000, help='DEPRECATED')
parser.add_argument('--api-streaming-port', type=int, default=5005, help='DEPRECATED')
- parser.add_argument('--llama_cpp_seed', type=int, default=0, help='DEPRECATED')
parser.add_argument('--use_fast', action='store_true', help='DEPRECATED')

args = parser.parse_args()
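The shared.py change moves --llama_cpp_seed out of the DEPRECATED block and back into the llama.cpp group. A standalone sketch of just that flag, showing the default and an explicit value (only this one argument is defined here):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--llama_cpp_seed', type=int, default=0,
                    help='Seed for llama-cpp models. Default is 0 (random).')

print(parser.parse_args([]).llama_cpp_seed)                          # 0  -> random seed
print(parser.parse_args(['--llama_cpp_seed', '42']).llama_cpp_seed)  # 42 -> reproducible runs
```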
1 change: 1 addition & 0 deletions modules/ui.py
@@ -80,6 +80,7 @@ def list_model_elements():
'n_gpu_layers',
'tensor_split',
'n_ctx',
+ 'llama_cpp_seed',
'gpu_split',
'max_seq_len',
'compress_pos_emb',
1 change: 1 addition & 0 deletions modules/ui_model_menu.py
@@ -120,6 +120,7 @@ def create_ui():
shared.gradio['load_in_4bit'] = gr.Checkbox(label="load-in-4bit", value=shared.args.load_in_4bit)
shared.gradio['use_double_quant'] = gr.Checkbox(label="use_double_quant", value=shared.args.use_double_quant)
shared.gradio['tensor_split'] = gr.Textbox(label='tensor_split', info='Split the model across multiple GPUs, comma-separated list of proportions, e.g. 18,17')
+ shared.gradio['llama_cpp_seed'] = gr.Number(label='Seed (0 for random)', value=shared.args.llama_cpp_seed)
shared.gradio['trust_remote_code'] = gr.Checkbox(label="trust-remote-code", value=shared.args.trust_remote_code, info='To enable this option, start the web UI with the --trust-remote-code flag. It is necessary for some models.', interactive=shared.args.trust_remote_code)
shared.gradio['cfg_cache'] = gr.Checkbox(label="cfg-cache", value=shared.args.cfg_cache, info='Create an additional cache for CFG negative prompts.')
shared.gradio['logits_all'] = gr.Checkbox(label="logits_all", value=shared.args.logits_all, info='Needs to be set for perplexity evaluation to work. Otherwise, ignore it, as it makes prompt processing slower.')
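The ui_model_menu.py line restores the seed input in the Model tab. A reduced sketch of that widget in isolation; it requires gradio, and the real UI embeds the component in a much larger Blocks layout:

```python
import gradio as gr

llama_cpp_seed_default = 0  # stand-in for shared.args.llama_cpp_seed

with gr.Blocks() as demo:
    seed = gr.Number(label='Seed (0 for random)', value=llama_cpp_seed_default)
    chosen = gr.Textbox(label='Value passed to the loader')
    seed.change(lambda s: f"llama_cpp_seed = {int(s)}", seed, chosen)

# demo.launch()  # uncomment to try it locally
```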
3 changes: 0 additions & 3 deletions one_click.py
@@ -316,9 +316,6 @@ def update_requirements(initial_installation=False):
run_cmd("python -m pip uninstall -y " + package_name, environment=True)
print(f"Uninstalled {package_name}")

- # Uninstall previous llama-cpp-python versions
- run_cmd("python -m pip uninstall -y llama-cpp-python-cuda", environment=True)
-
# Make sure that API requirements are installed (temporary)
extension_req_path = os.path.join("extensions", "openai", "requirements.txt")
if os.path.exists(extension_req_path):
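The lines removed from one_click.py were added by the original bump to force-uninstall the separate CUDA wheel; with the revert, that cleanup is no longer wanted. For reference, a generic standalone equivalent of such a cleanup step using subprocess instead of the installer's run_cmd helper:

```python
import subprocess
import sys


def pip_uninstall(package: str) -> bool:
    """Uninstall a package with the current interpreter's pip; returns True on success."""
    result = subprocess.run(
        [sys.executable, '-m', 'pip', 'uninstall', '-y', package],
        capture_output=True, text=True,
    )
    return result.returncode == 0


# Example (a no-op if the package is not installed):
# pip_uninstall('llama-cpp-python-cuda')
```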