
Merge qwen to llama cpp #4281

Merged: 3 commits, Dec 1, 2023

Conversation

@simonJJJ (Contributor) commented Dec 1, 2023

Hi, @ggerganov

As more and more people begin to use Qwen's open-source models, the influence of Qwen models is growing, especially in China. Many community members are interested in adding support for Qwen models to llama.cpp. To do this, we need to make some changes, which we hope can be merged into the main branch of llama.cpp. In the future, we would be happy to help maintain support for Qwen models in llama.cpp. We sincerely hope that our pull request can be accepted. Thank you.

This PR contains the following features:

  • a script to convert Qwen models to GGUF format
  • meta info for Qwen models
  • inference logic for Qwen models in llama.cpp
# convert Qwen HF models to gguf fp16 format
python convert-hf-to-gguf.py --outfile qwen7b-chat-f16.gguf --outtype f16 Qwen-7B-Chat

# quantize the model to 4-bits (using q4_0 method)
./build/bin/quantize qwen7b-chat-f16.gguf qwen7b-chat-q4_0.gguf q4_0

# chat with Qwen models
./build/bin/main -m qwen7b-chat-q4_0.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt

edits by gg

@ggerganov added the "high priority", "model", and "need feedback" labels on Dec 1, 2023
@ggerganov (Owner)

Thank you! I've just finished downloading the models and will now test this PR

@ggerganov (Owner) commented Dec 1, 2023

Any ideas how to fix this?

$ ▶ python3 convert-hf-to-gguf.py models/qwen-1.8b --outfile models/qwen-1.8b/ggml-model-f16.gguf --outtype f16
Loading model: qwen-1.8b
gguf: This GGUF file is for Little Endian only
Set model parameters
Set model tokenizer
Traceback (most recent call last):
  File "/Users/ggerganov/development/github/llama.cpp/convert-hf-to-gguf.py", line 1020, in <module>
    model_instance.set_vocab()
  File "/Users/ggerganov/development/github/llama.cpp/convert-hf-to-gguf.py", line 871, in set_vocab
    tokenizer = AutoTokenizer.from_pretrained(dir_model, trust_remote_code=True)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 751, in from_pretrained
    tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 499, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module.replace(".py", ""))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 199, in get_class_in_module
    module = importlib.import_module(module_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers_modules.qwen-1'

I downloaded the Qwen 1.8B model from here: https://huggingface.co/Qwen/Qwen-1_8B

Edit: Qwen 72B is converting successfully

@sakura-umi (Contributor) commented Dec 1, 2023

ModuleNotFoundError: No module named 'transformers_modules.qwen-1'

The problem is perhaps the model directory name, which should not contain "_"
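For anyone hitting this, a minimal workaround sketch (an assumption, not part of the PR: the comment above blames "_", while the traceback suggests the "." in "qwen-1.8b", so the sketch simply renames the directory to a name with neither before loading):

# Hypothetical workaround: transformers' trust_remote_code loader derives a dynamic
# module name from the model directory, so import-unsafe characters in the directory
# name can break the import. Rename the directory before converting.
from pathlib import Path
from transformers import AutoTokenizer

src = Path("models/qwen-1.8b")   # original directory name
dst = Path("models/qwen-1-8b")   # import-safe name: no dots, no underscores
if src.exists() and not dst.exists():
    src.rename(dst)

tokenizer = AutoTokenizer.from_pretrained(str(dst), trust_remote_code=True)
print(tokenizer.encode("hello"))

After renaming, point convert-hf-to-gguf.py at the new directory.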

@riverzhou commented Dec 1, 2023


Tried to load the 14B Q8_0 model to the GPU; it failed.

river@drfxi:~/LLM/llama.cpp$ ./build/bin/main -m ../Qwen-14B-Chat/ggml-Q8_0.gguf -f prompts/chat-with-baichuan.txt -i --color -ngl 999
Log start
main: build = 1578 (60d8085)
main: built with AMD clang version 17.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-5.7.0 23352 d1e13c532a947d0cbfc94759c00dcf152294aa13) for x86_64-unknown-linux-gnu
main: seed  = 1701427105
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 7800 XT, compute capability 11.0
llama_model_loader: loaded meta data with 19 key-value pairs and 323 tensors from ../Qwen-14B-Chat/ggml-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: - tensor    0:              blk.0.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor    1:            blk.0.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor    2:         blk.0.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor    3:           blk.0.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor    4:            blk.0.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor    5:              blk.0.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor    6:            blk.0.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor    7:                token_embd.weight q8_0     [  5120, 152064,     1,     1 ]
llama_model_loader: - tensor    8:            blk.0.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor    9:              blk.1.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   10:            blk.1.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   11:         blk.1.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   12:           blk.1.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   13:            blk.1.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   16:            blk.1.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   17:              blk.2.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   18:            blk.2.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   19:         blk.2.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   20:           blk.2.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   21:            blk.2.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   22:            blk.2.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   23:              blk.2.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   24:            blk.2.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   25:              blk.3.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   26:            blk.3.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   27:         blk.3.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   28:           blk.3.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   29:            blk.3.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   30:            blk.3.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   31:              blk.3.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   32:            blk.3.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   33:           blk.4.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   34:              blk.4.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   35:            blk.4.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   36:         blk.4.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   37:            blk.4.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   38:            blk.4.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   39:              blk.4.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   40:            blk.4.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   41:              blk.5.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   42:            blk.5.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   43:         blk.5.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   44:           blk.5.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   45:            blk.5.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   46:            blk.5.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   47:              blk.5.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   48:            blk.5.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   49:              blk.6.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   50:            blk.6.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   51:         blk.6.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   52:           blk.6.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   53:            blk.6.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   54:            blk.6.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   55:              blk.6.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   56:            blk.6.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   57:           blk.7.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   58:          blk.10.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   59:              blk.7.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   60:            blk.7.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   61:         blk.7.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   62:            blk.7.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   63:            blk.7.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   64:              blk.7.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   65:            blk.7.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   66:              blk.8.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   67:            blk.8.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   68:         blk.8.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   69:           blk.8.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   70:            blk.8.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   71:            blk.8.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   72:              blk.8.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   73:            blk.8.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   74:              blk.9.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   75:            blk.9.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   76:         blk.9.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   77:           blk.9.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   78:            blk.9.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   79:            blk.9.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   80:              blk.9.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   81:            blk.9.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   82:             blk.10.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   83:           blk.10.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   84:        blk.10.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   85:           blk.10.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   86:           blk.10.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   87:             blk.10.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   88:           blk.10.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   89:             blk.11.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   90:           blk.11.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   91:        blk.11.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor   92:          blk.11.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   93:           blk.11.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor   94:           blk.11.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor   95:             blk.11.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   96:           blk.11.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor   97:             blk.12.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor   98:           blk.12.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor   99:        blk.12.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  100:          blk.12.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  101:           blk.12.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  102:           blk.12.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  103:             blk.12.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  104:           blk.12.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  105:          blk.13.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  106:             blk.13.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  107:           blk.13.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  108:        blk.13.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  109:           blk.13.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  110:           blk.13.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  111:             blk.13.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  112:           blk.13.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  113:             blk.14.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  114:           blk.14.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  115:        blk.14.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  116:          blk.14.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  117:           blk.14.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  118:           blk.14.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  119:             blk.14.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  120:           blk.14.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  121:             blk.15.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  122:           blk.15.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  123:        blk.15.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  124:          blk.15.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  125:           blk.15.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  126:           blk.15.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  127:             blk.15.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  128:           blk.15.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  129:          blk.16.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  130:             blk.16.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  131:           blk.16.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  132:        blk.16.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  133:           blk.16.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  134:           blk.16.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  135:             blk.16.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  136:           blk.16.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  137:             blk.17.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  138:           blk.17.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  139:        blk.17.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  140:          blk.17.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  141:           blk.17.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  142:           blk.17.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  143:             blk.17.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  144:           blk.17.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  145:             blk.18.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  146:           blk.18.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  147:        blk.18.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  148:          blk.18.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  149:           blk.18.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  150:           blk.18.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  151:             blk.18.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  152:           blk.18.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  153:          blk.19.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  154:             blk.19.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  155:           blk.19.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  156:        blk.19.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  157:           blk.19.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  158:           blk.19.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  159:             blk.19.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  160:           blk.19.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  161:             blk.20.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  162:           blk.20.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  163:        blk.20.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  164:          blk.20.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  165:           blk.20.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  166:           blk.20.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  167:             blk.20.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  168:           blk.20.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  169:             blk.21.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  170:           blk.21.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  171:        blk.21.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  172:          blk.21.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  173:           blk.21.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  174:           blk.21.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  175:             blk.21.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  176:           blk.21.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  177:          blk.22.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  178:             blk.22.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  179:           blk.22.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  180:        blk.22.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  181:           blk.22.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  182:           blk.22.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  183:             blk.22.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  184:           blk.22.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  185:             blk.23.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  186:           blk.23.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  187:        blk.23.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  188:          blk.23.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  189:           blk.23.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  190:           blk.23.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  191:             blk.23.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  192:           blk.23.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  193:             blk.24.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  194:           blk.24.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  195:        blk.24.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  196:          blk.24.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  197:           blk.24.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  198:           blk.24.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  199:             blk.24.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  200:           blk.24.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  201:          blk.25.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  202:             blk.25.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  203:           blk.25.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  204:        blk.25.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  205:           blk.25.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  206:           blk.25.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  207:             blk.25.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  208:           blk.25.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  209:             blk.26.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  210:           blk.26.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  211:        blk.26.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  212:          blk.26.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  213:           blk.26.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  214:           blk.26.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  215:             blk.26.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  216:           blk.26.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  217:             blk.27.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  218:           blk.27.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  219:        blk.27.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  220:          blk.27.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  221:           blk.27.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  222:           blk.27.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  223:             blk.27.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  224:           blk.27.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  225:          blk.28.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  226:             blk.28.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  227:           blk.28.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  228:        blk.28.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  229:           blk.28.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  230:           blk.28.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  231:             blk.28.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  232:           blk.28.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  233:             blk.29.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  234:           blk.29.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  235:        blk.29.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  236:          blk.29.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  237:           blk.29.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  238:           blk.29.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  239:             blk.29.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  240:           blk.29.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  241:             blk.30.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  242:           blk.30.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  243:        blk.30.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  244:          blk.30.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  245:           blk.30.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  246:           blk.30.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  247:             blk.30.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  248:           blk.30.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  249:          blk.31.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  250:             blk.31.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  251:           blk.31.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  252:        blk.31.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  253:           blk.31.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  254:           blk.31.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  255:             blk.31.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  256:           blk.31.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  257:             blk.32.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  258:           blk.32.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  259:        blk.32.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  260:          blk.32.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  261:           blk.32.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  262:           blk.32.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  263:             blk.32.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  264:           blk.32.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  265:             blk.33.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  266:           blk.33.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  267:        blk.33.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  268:          blk.33.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  269:           blk.33.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  270:           blk.33.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  271:             blk.33.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  272:           blk.33.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  273:          blk.34.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  274:             blk.34.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  275:           blk.34.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  276:        blk.34.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  277:           blk.34.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  278:           blk.34.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  279:             blk.34.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  280:           blk.34.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  281:             blk.35.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  282:           blk.35.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  283:        blk.35.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  284:          blk.35.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  285:           blk.35.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  286:           blk.35.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  287:             blk.35.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  288:           blk.35.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  289:             blk.36.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  290:           blk.36.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  291:        blk.36.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  292:          blk.36.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  293:           blk.36.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  294:           blk.36.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  295:             blk.36.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  296:           blk.36.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  297:          blk.37.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  298:             blk.37.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  299:           blk.37.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  300:        blk.37.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  301:           blk.37.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  302:           blk.37.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  303:             blk.37.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  304:           blk.37.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  305:             blk.38.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  306:           blk.38.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  307:        blk.38.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  308:          blk.38.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  309:           blk.38.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  310:           blk.38.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  311:             blk.38.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  312:           blk.38.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  313:             blk.39.attn_qkv.bias f32      [ 15360,     1,     1,     1 ]
llama_model_loader: - tensor  314:           blk.39.attn_qkv.weight q8_0     [  5120, 15360,     1,     1 ]
llama_model_loader: - tensor  315:        blk.39.attn_output.weight q8_0     [  5120,  5120,     1,     1 ]
llama_model_loader: - tensor  316:          blk.39.attn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  317:           blk.39.ffn_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  318:           blk.39.ffn_down.weight q8_0     [ 13696,  5120,     1,     1 ]
llama_model_loader: - tensor  319:             blk.39.ffn_up.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  320:           blk.39.ffn_gate.weight q8_0     [  5120, 13696,     1,     1 ]
llama_model_loader: - tensor  321:               output_norm.weight f32      [  5120,     1,     1,     1 ]
llama_model_loader: - tensor  322:                    output.weight q8_0     [  5120, 152064,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str              = qwen
llama_model_loader: - kv   1:                               general.name str              = Qwen
llama_model_loader: - kv   2:                        qwen.context_length u32              = 8192
llama_model_loader: - kv   3:                           qwen.block_count u32              = 40
llama_model_loader: - kv   4:                      qwen.embedding_length u32              = 5120
llama_model_loader: - kv   5:                   qwen.feed_forward_length u32              = 27392
llama_model_loader: - kv   6:                        qwen.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv   7:                  qwen.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                  qwen.attention.head_count u32              = 40
llama_model_loader: - kv   9:      qwen.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  11:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  12:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  13:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  14:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32              = 151643
llama_model_loader: - kv  16:            tokenizer.ggml.unknown_token_id u32              = 151643
llama_model_loader: - kv  17:               general.quantization_version u32              = 2
llama_model_loader: - kv  18:                          general.file_type u32              = 7
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q8_0:  202 tensors
llm_load_vocab: special tokens definition check successful ( 421/152064 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 5120
llm_load_print_meta: n_head           = 40
llm_load_print_meta: n_head_kv        = 40
llm_load_print_meta: n_layer          = 40
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 27392
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 13B
llm_load_print_meta: model ftype      = mostly Q8_0
llm_load_print_meta: model params     = 14.17 B
llm_load_print_meta: model size       = 14.02 GiB (8.50 BPW)
llm_load_print_meta: general.name   = Qwen
llm_load_print_meta: BOS token = 151643 '[PAD151643]'
llm_load_print_meta: EOS token = 151643 '[PAD151643]'
llm_load_print_meta: UNK token = 151643 '[PAD151643]'
llm_load_print_meta: LF token  = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size =    0.12 MiB
llm_load_tensors: using ROCm for GPU acceleration
error loading model: create_tensor: 1-dimensional tensor 'blk.0.attn_qkv.bias' cannot be split on the GPU
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../Qwen-14B-Chat/ggml-Q8_0.gguf'
main: error: unable to load model

@ggerganov (Owner) commented Dec 1, 2023

Qwen 72B F16 seems to be working fine:

make -j && ./main -m ./models/qwen-72b-fast/ggml-model-f16.gguf -p "I believe the meaning of life is" -ngl 1 -s 1 -n 64 --verbose-prompt
output on M2 Ultra
llm_load_vocab: special tokens definition check successful ( 421/152064 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 64
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 49152
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = mostly F16 (guessed)
llm_load_print_meta: model params     = 72.29 B
llm_load_print_meta: model size       = 134.65 GiB (16.00 BPW) 
llm_load_print_meta: general.name   = Qwen
llm_load_print_meta: BOS token = 151643 '[PAD151643]'
llm_load_print_meta: EOS token = 151643 '[PAD151643]'
llm_load_print_meta: UNK token = 151643 '[PAD151643]'
llm_load_print_meta: LF token  = 148848 'ÄĬ'
llm_load_tensors: ggml ctx size =    0.24 MiB
llm_load_tensors: mem required  = 137884.77 MiB
...................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  = 1280.00 MiB
llama_build_graph: non-view tensors processed: 2004/2004
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Ultra
ggml_metal_init: picking default device: Apple M2 Ultra
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading '/Users/ggerganov/development/github/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name:   Apple M2 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple8 (1008)
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 147456.00 MiB
ggml_metal_init: maxTransferRate               = built-in GPU
llama_new_context_with_model: compute buffer total size = 316.06 MiB
llama_new_context_with_model: max tensor size =  2376.00 MiB
ggml_metal_add_buffer: allocated 'data            ' buffer, size = 110592.00 MiB, offs =            0
ggml_metal_add_buffer: allocated 'data            ' buffer, size = 29674.25 MiB, offs = 113472684032, (140266.88 / 147456.00)
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =  1280.02 MiB, (141546.89 / 147456.00)
ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =   313.02 MiB, (141859.91 / 147456.00)

system_info: n_threads = 16 / 24 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | 

main: prompt: 'I believe the meaning of life is'
main: number of tokens in prompt = 7
    40 -> 'I'
  4411 -> ' believe'
   279 -> ' the'
  7290 -> ' meaning'
   315 -> ' of'
  2272 -> ' life'
   374 -> ' is'

sampling: 
	repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
generate: n_ctx = 512, n_batch = 512, n_predict = 64, n_keep = 0


I believe the meaning of life is to live it for others. I’ve tried and failed at many things in my life, but what’s made me happy are those moments when I have helped someone else.
My first wife was a wonderful woman who loved children. She had worked as a nanny(女佣)and always said she wanted a child of her
llama_print_timings:        load time =   18242.98 ms
llama_print_timings:      sample time =      87.74 ms /    64 runs   (    1.37 ms per token,   729.40 tokens per second)
llama_print_timings: prompt eval time =     315.70 ms /     7 tokens (   45.10 ms per token,    22.17 tokens per second)
llama_print_timings:        eval time =   13321.29 ms /    63 runs   (  211.45 ms per token,     4.73 tokens per second)
llama_print_timings:       total time =   13777.97 ms
ggml_metal_free: deallocating
Log end

Edit: Unfortunately, none of the quantized 72B models seem to work, even on the CPU.
For example, 72B Q8_0:

make -j && ./main -m ./models/qwen-72b-fast/ggml-model-q8_0.gguf -p "I believe the meaning of life is" -ngl 0 -s 1 -n 64 --verbose-prompt

system_info: n_threads = 16 / 24 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | 

main: prompt: 'I believe the meaning of life is'
main: number of tokens in prompt = 7
    40 -> 'I'
  4411 -> ' believe'
   279 -> ' the'
  7290 -> ' meaning'
   315 -> ' of'
  2272 -> ' life'
   374 -> ' is'

sampling: 
	repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
generate: n_ctx = 512, n_batch = 512, n_predict = 64, n_keep = 0


I believe the meaning of life is⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑⯑

No asserts in Debug. Weird 🤔

Edit 2: fixed - see below

@CausalLM (Contributor) commented Dec 1, 2023

Tried the LLaMAfied Qwen 72B: it fails with all quantized variants but works well with F16.
#4283
Using the LLaMAfied model: https://huggingface.co/CausalLM/72B-preview
I will try InternLM (LLaMA arch with bias on QKV) later.

@ggerganov (Owner)

Tried LLaMAfied Qwen 72B, failed on all quantized, but works well with F16.

Yes, it's exactly the same with Qwen 72B Chat: F16 works and none of the quantized variants work.

I checked that LLaMA v2 70B, which is of similar size, works with quantization.
Maybe it is related to Qwen's larger vocabulary, but I don't understand why it fails only when quantized.

@ggerganov (Owner) commented Dec 1, 2023

If the token_embd and output tensors are not quantized (i.e. left in F16), it works.

Looking into this more, it seems the original token_embd and output tensors have both NaN and Inf values among their weights.
This is strange. Is this an error? Can someone else confirm?

Edit: see next comment
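
As a rough way to confirm, one could scan the checkpoint tensors directly (a sketch, not from this PR; it assumes the checkpoint ships safetensors shards, and the glob pattern below is a placeholder):

# Hypothetical NaN/Inf scan of the original HF weights.
# Assumes safetensors shards; adjust for .bin checkpoints (torch.load works similarly).
import glob
import torch
from safetensors.torch import load_file

for shard in sorted(glob.glob("Qwen-72B/*.safetensors")):
    for name, t in load_file(shard).items():
        t = t.float()
        if torch.isnan(t).any() or torch.isinf(t).any():
            print(f"{shard}: {name} contains NaN/Inf")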

@arivero commented Dec 1, 2023

position embedding failing, perhaps?

@ggerganov (Owner) commented Dec 1, 2023

Wow, we have a bug in the quantization code - integer overflow when using multiple threads:

llama.cpp, lines 7657 to 7671 at commit 8d6d9f0:

auto block_size = tensor->type == GGML_TYPE_F16 ? 1 : (size_t)ggml_blck_size(tensor->type);
auto block_size_bytes = ggml_type_size(tensor->type);
GGML_ASSERT(nelements % block_size == 0);
auto nblocks = nelements / block_size;
auto blocks_per_thread = nblocks / nthread;
auto spare_blocks = nblocks - (blocks_per_thread * nthread); // if blocks aren't divisible by thread count
for (auto tnum = 0, in_buff_offs = 0, out_buff_offs = 0; tnum < nthread; tnum++) {
    auto thr_blocks = blocks_per_thread + (tnum == nthread - 1 ? spare_blocks : 0); // num blocks for this thread
    auto thr_elems = thr_blocks * block_size; // number of elements for this thread
    auto thr_block_bytes = thr_blocks * block_size_bytes; // number of input bytes for this thread
    auto compute = [qtype] (ggml_type typ, uint8_t * inbuf, float * outbuf, int nels) {

Some of these auto variables end up as 32-bit integers instead of 64-bit.
It took me quite some time to find this!

Will push a fix shortly

Fix: #4284
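
For a sense of scale, a quick back-of-the-envelope check using the 72B dimensions from the F16 log above (F16 = 2 bytes per element) shows why a 32-bit byte offset wraps for the token_embd/output tensors:

# Illustrative arithmetic only (dimensions taken from the 72B log above).
n_embd  = 8192
n_vocab = 152064
n_bytes = n_embd * n_vocab * 2   # size of one F16 embedding/output tensor in bytes
print(n_bytes)                   # 2491416576
print(2**31 - 1)                 # 2147483647 -> a 32-bit signed byte offset overflows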

@ggerganov (Owner)

We just need to fix Qwen-1.8B conversion now and we can merge.

Any ideas what this error means?
I'm very interested in trying this model.

@ggerganov merged commit 37c746d into ggerganov:master on Dec 1, 2023. 35 checks passed.
@zhangp365

From the log, I can find some chat template information, but not all. I also searched on Hugging Face and GitHub, but I couldn't find the complete chat template. Could you please provide the chat template?

Reverse prompt: '<|im_start|>user
'
sampling:
        repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
generate: n_ctx = 512, n_batch = 512, n_predict = 512, n_keep = 19


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

<|im_start|>system
You are a helpful assistant.<|im_end|>

@aisensiy commented Dec 4, 2023

Just uploaded the q4_0 gguf to hf: https://huggingface.co/aisensiy/Qwen-72B-Chat-GGUF

@simonJJJ (Contributor, Author) commented Dec 4, 2023

@ggerganov thank you so much. I have had a very bad fever since Friday night.

@simonJJJ (Contributor, Author) commented Dec 4, 2023

From the log, I can find some chat template information, but not all. [...] Could you please provide the chat template?

See make_context link.
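
For reference, a minimal sketch of the ChatML-style prompt that the <|im_start|>/<|im_end|> markers above imply (the authoritative template is Qwen's make_context helper, so treat this only as an approximation):

# Approximate ChatML-style prompt builder for Qwen chat models (sketch, not the canonical make_context).
def build_prompt(system, history, user_msg):
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for prev_user, prev_assistant in history:
        parts.append(f"<|im_start|>user\n{prev_user}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{prev_assistant}<|im_end|>")
    parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(build_prompt("You are a helpful assistant.", [], "Hello!"))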

@zhangp365

See make_context link.

I got it, thank you very much.

@svcvit commented Dec 5, 2023

Qwen-VL-Chat is not supported: python convert-hf-to-gguf.py --outfile qwenvl-chat-f16.gguf --outtype f16 Qwen-VL-Chat fails with:

………………
blk.31.attn_qkv.bias, n_dims = 1, torch.bfloat16 --> float32
blk.31.attn_output.weight, n_dims = 2, torch.bfloat16 --> float32
blk.31.ffn_norm.weight, n_dims = 1, torch.bfloat16 --> float32
blk.31.ffn_up.weight, n_dims = 2, torch.bfloat16 --> float32
blk.31.ffn_gate.weight, n_dims = 2, torch.bfloat16 --> float32
blk.31.ffn_down.weight, n_dims = 2, torch.bfloat16 --> float32
output_norm.weight, n_dims = 1, torch.bfloat16 --> float32
Can not map tensor 'transformer.visual.positional_embedding'

@cmp-nct
Copy link
Contributor

cmp-nct commented Dec 7, 2023

I just dug a bit into Qwen-VL.
llava-1.5 is, thankfully, very simple: it comes with just two mm_projector tensors, which act as the binder between the ViT and llama2.
Qwen-VL is more complex; it comes with something like a 48-layer visual transformer in addition to a projector:
https://huggingface.co/Qwen/Qwen-VL-Chat/blob/main/pytorch_model.bin.index.json

I was able to convert the visual encoder to gguf (a fine-tuned ViT Big 2B) with a small hack, but it needs an entire (small) new architecture to run.
And given that, I'm not sure how much more work CogVLM, which is far superior, would be.
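
To see concretely what the converter runs into, here is a minimal sketch (not part of llama.cpp; it assumes a local copy of the index file linked above) that groups the checkpoint's tensor names by prefix. Everything under transformer.visual is the visual tower plus projector that currently has no GGUF mapping.

```python
# Group Qwen-VL-Chat tensor names by prefix to see how much of the checkpoint
# belongs to the visual tower. Assumes a local copy of the index JSON.
import json
from collections import Counter
from pathlib import Path

index_path = Path("Qwen-VL-Chat/pytorch_model.bin.index.json")
weight_map = json.loads(index_path.read_text())["weight_map"]

def group(name: str) -> str:
    parts = name.split(".")
    return ".".join(parts[:2]) if name.startswith("transformer.") else parts[0]

counts = Counter(group(name) for name in weight_map)
for prefix, n in counts.most_common():
    print(f"{prefix:25s} {n:5d} tensors")  # 'transformer.visual' is the ViT + projector
```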

@dgo2dance
Copy link

$ ▶ python3 convert-hf-to-gguf.py models/qwen-1.8b --outfile models/qwen-1.8b/ggml-model-f16.gguf --outtype f

So, can Qwen-1.8B be successfully converted now?

@hobbytp
Copy link

hobbytp commented Dec 10, 2023

@simonJJJ please also update the "Supported models" list in the llama.cpp README on the first page.

@simonJJJ
Copy link
Contributor Author

@simonJJJ please also update the "Supported models" list in the llama.cpp README on the first page.

will do it

YellowRoseCx added a commit to YellowRoseCx/koboldcpp-rocm that referenced this pull request Dec 12, 2023
commit 53b5ae02cb1b533b78302422951bcfdeca6e2738
Author: YellowRoseCx <[email protected]>
Date:   Tue Dec 12 12:08:29 2023 -0600

    mixtral fan service

commit 168b1d74e26d0321e2e89358303b6c33e8d7d33e
Merge: f13295b de15d4a6
Author: YellowRoseCx <[email protected]>
Date:   Tue Dec 12 12:00:52 2023 -0600

    Merge branch 'kcpp-rocm-mixtral2' into main2

commit de15d4a632939a685ec12fa17355298542facf15
Merge: 74acc54 ea4402b
Author: YellowRoseCx <[email protected]>
Date:   Tue Dec 12 11:45:19 2023 -0600

    Merge branch 'mixtral' into kcpp-rocm-mixtral

commit ea4402b
Author: Georgi Gerganov <[email protected]>
Date:   Tue Dec 12 17:03:38 2023 +0200

    test-backend-ops : add one more sum_rows test

commit a51bc0c
Author: Georgi Gerganov <[email protected]>
Date:   Tue Dec 12 15:55:42 2023 +0200

    metal : fix binary ops for ne10 % 4 != 0

commit 08eb991
Author: Georgi Gerganov <[email protected]>
Date:   Tue Dec 12 14:14:15 2023 +0200

    metal : add cpy f16 -> f32 kernel

commit a742d9f
Author: slaren <[email protected]>
Date:   Tue Dec 12 12:46:33 2023 +0100

    gguf-py : bump version

commit 6a419f4
Author: Georgi Gerganov <[email protected]>
Date:   Tue Dec 12 13:04:33 2023 +0200

    convert : support safetensors format

commit 74acc54
Author: Concedo <[email protected]>
Date:   Tue Dec 12 10:53:34 2023 +0800

    Revert "Hide hipBLAS (ROCm) if CuBLAS exists - vice versa"

    This reverts commit 4b854d4.

commit f1cbfab
Author: slaren <[email protected]>
Date:   Mon Dec 11 20:02:55 2023 +0100

    convert : fix style

commit 7dc75e3
Author: slaren <[email protected]>
Date:   Mon Dec 11 20:00:28 2023 +0100

    convert : use 1e6 rope_freq_base for mixtral

commit 296c945
Author: slaren <[email protected]>
Date:   Mon Dec 11 16:53:25 2023 +0100

    cuda : fix mul_mat_id with multi gpu

commit 33e50f1
Author: slaren <[email protected]>
Date:   Mon Dec 11 12:27:48 2023 +0100

    test-backend-ops : disable MOE test with thread sanitizer

commit ffda94c
Author: slaren <[email protected]>
Date:   Mon Dec 11 12:15:31 2023 +0100

    test-backend-ops : simplify and disable slow tests to avoid CI timeout

commit 06581f2
Author: Concedo <[email protected]>
Date:   Mon Dec 11 16:54:42 2023 +0800

    perf endpoint lets you monitor if the embedded horde worker has issues

commit fce971d
Author: Concedo <[email protected]>
Date:   Mon Dec 11 16:17:10 2023 +0800

    do not build the clblast noavx2 binary if not on windows

commit 8cbaed1
Author: Georgi Gerganov <[email protected]>
Date:   Mon Dec 11 08:55:16 2023 +0200

    llama : fix hard-coded number of experts

commit 4b854d4
Author: YellowRoseCx <[email protected]>
Date:   Sun Dec 10 22:49:35 2023 -0600

    Hide hipBLAS (ROCm) if CuBLAS exists - vice versa

commit b002981
Author: slaren <[email protected]>
Date:   Mon Dec 11 02:43:52 2023 +0100

    test-backend-ops : fix dequantize block offset

commit f1380d7
Author: slaren <[email protected]>
Date:   Sun Dec 10 22:58:31 2023 +0100

    test-backend-ops : add cpy from f32 -> all types test

commit 54d254b
Author: slaren <[email protected]>
Date:   Sun Dec 10 21:52:11 2023 +0100

    test-backend-ops : cleanup, add moe test for batches

commit e2cf3b7
Author: henk717 <[email protected]>
Date:   Sun Dec 10 14:30:17 2023 +0100

    koboldcpp.sh - The Mamba Multitool (LostRuins#554)

    * .sh script V1

    * koboldcpp.sh polish

    * koboldcpp.sh dist generator

    * Include html's in dist

    * RWKV in Linux Dist

    * Lower dependency requirements

    * Eliminate wget dependency

    * More distinct binary name

    I know its technically amd64, but I don't want to cause confusion among nvidia users.

    * Use System OpenCL

    Unsure how this will behave in the pyinstaller build, but pocl ended up CPU only. With a bit of luck the pyinstaller uses the one from the actual system if compiled in a system without opencl, while conda now includes it for that specific system.

    * Add cblas dependency

    Missing this causes compile failures on some system's

    * ICD workaround

    Ideally we find a better solution, but conda forces ICD and needs this for the successful compile. However, pyinstaller then embeds the ICD causing it to be limited to the system it was compiled for. By temporarily removing the ICD pyinstaller can't find it and everything remains functional. Ideally we do this on a pyinstaller level, but I could not find any good options to do so yet.

    ---------

    Co-authored-by: root <root@DESKTOP-DQ1QRAG>

commit 54ba263
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 15:27:41 2023 +0200

    test-backend-ops : make experts more evenly probable (test_moe)

commit b0b83dd
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 14:30:38 2023 +0200

    metal : fix ggml_mul_mat_id for F32

commit 65923a8
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 14:17:46 2023 +0200

    convert : determine n_ctx correctly

commit 8614aa7
Author: slaren <[email protected]>
Date:   Sun Dec 10 13:12:11 2023 +0100

    cuda : fix get_rows when ncols is odd

commit cefebb3
Author: slaren <[email protected]>
Date:   Sun Dec 10 13:11:39 2023 +0100

    test-backend-ops : add moe test

commit e640cbe
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 13:57:54 2023 +0200

    llama : add n_expert and n_expert_used to hparams + change quants

commit d1259b7
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 13:00:13 2023 +0200

    llama : do not quantize expert gating tensors

commit 6cfb31f
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 10:59:13 2023 +0200

    metal : add indirect mat-vec kernels for all quantization types

commit 016f9bb
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 10 09:38:21 2023 +0200

    metal : fix ggml_get_rows to work with non-cont src1

commit 0710b0f
Author: slaren <[email protected]>
Date:   Sat Dec 9 23:29:47 2023 +0100

    llama : offload missing ffn_moe_silu

commit 62b95f9
Author: slaren <[email protected]>
Date:   Sat Dec 9 22:39:34 2023 +0100

    cuda : support non-contiguous src1 in get_rows

commit 2e4db48
Author: slaren <[email protected]>
Date:   Sat Dec 9 22:38:22 2023 +0100

    ggml : update get_rows f16 and q

commit ac3f7d8
Author: slaren <[email protected]>
Date:   Sat Dec 9 19:19:03 2023 +0100

    ggml : get_rows : support non-contiguos tensors with gaps, generalize up to 3D

commit 8c5b66e
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 15:30:34 2023 +0200

    metal : reduce the kernel launches for ggml_mul_mat_id

commit 7e2006b
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 14:24:58 2023 +0200

    metal : add/mul/div use general kernel when src1 not cont

commit 06dfde3
Author: slaren <[email protected]>
Date:   Sat Dec 9 13:21:09 2023 +0100

    llama : add basic support for offloading moe with CUDA

commit 2cbcba8
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 14:18:42 2023 +0200

    metal : add more general support for ggml_get_rows + tests

commit 9064b1c
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 14:04:54 2023 +0200

    ggml : fix ggml_get_rows to take into account ne02 / ne11

commit ee8fb39
Author: slaren <[email protected]>
Date:   Sat Dec 9 12:42:25 2023 +0100

    ggml : add n_as argument to ggml_mul_mat_id

commit 7372b62
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 13:18:58 2023 +0200

    ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

commit 8b185b7
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 13:01:42 2023 +0200

    llama : fix expert weighting in the FFN

commit 7ea3695
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 12:45:15 2023 +0200

    llama : first working version

commit af1a096
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 12:07:39 2023 +0200

    llama : fix cur -> cur_expert

commit aedfad1
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 11:47:40 2023 +0200

    llama : update graph to support MoE

commit 861cd67
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 11:19:46 2023 +0200

    ggml : sync latest ggml_mul_mat_id

commit a3eefe9
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 11:14:03 2023 +0200

    llama : model loading

commit d38e41e
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 10:59:37 2023 +0200

    convert : fix n_ff typo

commit dff8cbe
Author: Georgi Gerganov <[email protected]>
Date:   Sat Dec 9 10:51:58 2023 +0200

    convert : support Mixtral as LLAMA arch

commit 7a69152
Author: Concedo <[email protected]>
Date:   Fri Dec 8 21:06:32 2023 +0800

    lowvram var defaults

commit 7418bca
Author: Concedo <[email protected]>
Date:   Fri Dec 8 19:20:30 2023 +0800

    up ver

commit c47bc28
Author: Concedo <[email protected]>
Date:   Fri Dec 8 18:35:45 2023 +0800

    slight refactor for noscript ui

commit 7469f20
Author: Concedo <[email protected]>
Date:   Fri Dec 8 18:16:14 2023 +0800

    use lowvram flag for offload qkv

commit ec21fa7
Merge: 930cdfb fe680e3
Author: Concedo <[email protected]>
Date:   Fri Dec 8 17:42:26 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	.github/workflows/build.yml
    #	.gitignore
    #	CMakeLists.txt
    #	Makefile
    #	Package.swift
    #	README.md
    #	ggml-cuda.cu
    #	llama.cpp
    #	llama.h
    #	scripts/sync-ggml.sh
    #	tests/CMakeLists.txt

commit 930cdfb
Author: Concedo <[email protected]>
Date:   Fri Dec 8 16:53:30 2023 +0800

    updated lite, added patch that links to noscript mode

commit fe680e3
Author: Georgi Gerganov <[email protected]>
Date:   Thu Dec 7 22:26:54 2023 +0200

    sync : ggml (new ops, tests, backend, etc.) (ggerganov#4359)

    * sync : ggml (part 1)

    * sync : ggml (part 2, CUDA)

    * sync : ggml (part 3, Metal)

    * ggml : build fixes

    ggml-ci

    * cuda : restore lost changes

    * cuda : restore lost changes (StableLM rope)

    * cmake : enable separable compilation for CUDA

    ggml-ci

    * ggml-cuda : remove device side dequantize

    * Revert "cmake : enable separable compilation for CUDA"

    This reverts commit 09e35d0.

    * cuda : remove assert for rope

    * tests : add test-backend-ops

    * ggml : fix bug in ggml_concat

    * ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`

    * ci : try to fix macOS

    * ggml-backend : remove backend self-registration

    * ci : disable Metal for macOS cmake build

    ggml-ci

    * metal : fix "supports family" call

    * metal : fix assert

    * metal : print resource path

    ggml-ci

    ---------

    Co-authored-by: slaren <[email protected]>

commit bcc0eb4
Author: Georgi Gerganov <[email protected]>
Date:   Thu Dec 7 13:03:17 2023 +0200

    llama : per-layer KV cache + quantum K cache (ggerganov#4309)

    * per-layer KV

    * remove unnecessary copies

    * less code duplication, offload k and v separately

    * llama : offload KV cache per-layer

    * llama : offload K shift tensors

    * llama : offload for rest of the model arches

    * llama : enable offload debug temporarily

    * llama : keep the KV related layers on the device

    * llama : remove mirrors, perform Device -> Host when partial offload

    * common : add command-line arg to disable KV cache offloading

    * llama : update session save/load

    * llama : support quantum K cache (ggerganov#4312)

    * llama : support quantum K cache (wip)

    * metal : add F32 -> Q8_0 copy kernel

    * cuda : add F32 -> Q8_0 copy kernel

    ggml-ci

    * cuda : use mmv kernel for quantum cache ops

    * llama : pass KV cache type through API

    * llama : fix build

    ggml-ci

    * metal : add F32 -> Q4_0 copy kernel

    * metal : add F32 -> Q4_1 copy kernel

    * cuda : wip

    * cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels

    * llama-bench : support type_k/type_v

    * metal : use mm kernel only for quantum KV cache

    * cuda : add comment

    * llama : remove memory_f16 and kv_f16 flags

    ---------

    Co-authored-by: slaren <[email protected]>

    * readme : add API change notice

    ---------

    Co-authored-by: slaren <[email protected]>

commit 81bc921
Author: Hongyu Ouyang <[email protected]>
Date:   Thu Dec 7 02:25:22 2023 -0800

    train : fix ggerganov#4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (ggerganov#4351)

    On commit b1108 (44c117f) xaedes added

        ggml_allocr * alloc = NULL;

        ... (many lines in between)

        if (alloc) {
            ggml_allocr_free(alloc);
        }

    Which is correct, but it's easy to lose context after many lines in between.

    On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.

        alloc = ggml_allocr_new(...)
        ... (short lines of code)
        ggml_allocr_free(alloc)

    This happens a few times, but alloc is never set to NULL, and many lines below,
    we still have

        if (alloc) {
            ggml_allocr_free(alloc);
        }

    which causes a double-free.

commit 05cd6e5
Author: Georgi Gerganov <[email protected]>
Date:   Wed Dec 6 20:21:59 2023 +0200

    server : recognize cache_prompt parameter in OAI API (ggerganov#4347)

commit c751152
Author: Concedo <[email protected]>
Date:   Thu Dec 7 00:52:25 2023 +0800

    noscript mode is done

commit 12002d8
Author: Concedo <[email protected]>
Date:   Wed Dec 6 17:51:08 2023 +0800

    very basic noscript mode

commit caa9249
Author: Georgi Gerganov <[email protected]>
Date:   Wed Dec 6 10:41:03 2023 +0200

    common : fix compile warning

commit da5eaef
Author: stduhpf <[email protected]>
Date:   Wed Dec 6 09:08:17 2023 +0100

    speculative : support `--color` (ggerganov#4343)

    * speculative: add some colors

    * minor : add braces

    ---------

    Co-authored-by: Georgi Gerganov <[email protected]>

commit 5f6e0c0
Author: Marcus Dunn <[email protected]>
Date:   Tue Dec 5 10:55:12 2023 -1000

    grammar : pre-computed pieces + reserve mem + less string copies (ggerganov#4330)

    * reserve space for codepoints

    * improvement for the appended 0

    * used precomputed token text for grammar sample

    * reserve canidates_decoded

    * reserve canidates_grammar

    * remove candidates_decoded

    * Revert "remove candidates_decoded"

    This reverts commit 3773328.

    * changed decode_utf8 to take src by ref

commit 5aa365d
Author: Kerfuffle <[email protected]>
Date:   Tue Dec 5 10:19:18 2023 -0700

    llama : allow overriding GGUF metadata when loading model (ggerganov#4092)

    * feat: Allow overriding GGUF metadata when loading model

    * Fix the one time GCC is stricter than clang about something

    * Step1

    * Refactor... basically everything!

    * Nuke obsolete GetArrayLen struct

    * simplify std::string specialization

    * Various cleanups

    Add informational output when overrides are applied

    Warn user when an override with the wrong type is specified

    * Fix broken logic for parsing bool KV overrides
    Fix issue where overrides didn't apply when key missing in GGUF metadata
    Resolve merge changes

    * llama : rearrange model params

    * Update new GET_KEY call

    Add note that metadata KV overrides aren't reflected in initial metadata KV info dump

    ---------

    Co-authored-by: cebtenzzre <[email protected]>
    Co-authored-by: Georgi Gerganov <[email protected]>

commit b6f952f
Author: Concedo <[email protected]>
Date:   Tue Dec 5 21:08:10 2023 +0800

    improved exit logic

commit 52c8bc3
Author: MaggotHATE <[email protected]>
Date:   Tue Dec 5 15:05:51 2023 +0500

    sampling : custom samplers order (ggerganov#4285)

    * Samplers sequence order w parameter

    * Cleaned commented code

    * Fixed formatting

    * Rewrote with unordered_map

    * Revert and rewrite, too many problems and safeguards would be needed

    * Fixed code style

    * Code style fixes according to review

    * More readable samplers input string, fixed help

    * Style fix in sampler_queue

    * Formatting fixes

    * Fixing whitespaces

commit e4b76bb
Author: kchro3 <[email protected]>
Date:   Mon Dec 4 23:29:46 2023 -0800

    swift : revert compiler checks for swift package (ggerganov#4332)

commit 23b5e12
Author: Daniel Bevenius <[email protected]>
Date:   Mon Dec 4 17:04:21 2023 +0100

    simple : update error message for KV cache check (ggerganov#4324)

    This commit updates the error message that is printed when the
    KV cache is not big enough to hold all the prompt and generated
    tokens. Specifically it removes the reference to n_parallel and
    replaces it with n_len.

    Signed-off-by: Daniel Bevenius <[email protected]>

commit d208995
Author: Miwa / Ensan <[email protected]>
Date:   Tue Dec 5 01:03:49 2023 +0900

    swift : fix concatenation method to avoid invalid UTF8 stringfication (ggerganov#4325)

commit 5c9f90c
Author: Miwa / Ensan <[email protected]>
Date:   Mon Dec 4 22:43:45 2023 +0900

    swift : fix prompt tokenization logic (ggerganov#4321)

commit a5a5839
Author: Concedo <[email protected]>
Date:   Mon Dec 4 21:10:42 2023 +0800

    handle accidentally selecting a kcpps file as model instead

commit 4fa44e8
Author: Ikko Eltociear Ashimine <[email protected]>
Date:   Mon Dec 4 16:57:35 2023 +0900

    grammar-parser : fix typo (ggerganov#4318)

    preceeding -> preceding

commit 8602f5a
Merge: ac36aee fbbc428
Author: Concedo <[email protected]>
Date:   Sun Dec 3 22:00:14 2023 +0800

    Merge branch 'master' into concedo_experimental

commit fbbc428
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 3 15:56:35 2023 +0200

    ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (ggerganov#4308)

    * ggml : fix soft max out-of-bounds access

    ggml-ci

    * ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()

    ggml-ci

commit ac36aee
Merge: 48544cd 33e171d
Author: Concedo <[email protected]>
Date:   Sun Dec 3 21:56:29 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	CMakeLists.txt
    #	Makefile

commit adf3de4
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 3 15:56:22 2023 +0200

    ggml : fix soft max out-of-bounds access (ggerganov#4307)

    ggml-ci

commit 48544cd
Author: Concedo <[email protected]>
Date:   Sun Dec 3 21:46:50 2023 +0800

    Revert "Revert "ggml : add ggml_soft_max_ext (ggerganov#4256)""

    This reverts commit a8e66ef.

commit 33e171d
Author: Ed Lee <[email protected]>
Date:   Sun Dec 3 01:10:43 2023 -0800

    server : fix OpenAI API `stop` field to be optional (ggerganov#4299)

    (cherry picked from commit Mozilla-Ocho/llamafile@e8c92bc)

commit 6949b50
Author: Rickard Edén <[email protected]>
Date:   Sun Dec 3 10:03:25 2023 +0100

    py : add grammar to oai like api (ggerganov#4294)

commit d7b800b
Author: Georgi Gerganov <[email protected]>
Date:   Sun Dec 3 10:58:16 2023 +0200

    llama : pad KV cache size (ggerganov#4280)

    * llama : pad KV cache size to 32

    * metal : try to improve batched decoding

commit 6570a20
Author: Concedo <[email protected]>
Date:   Sun Dec 3 15:44:53 2023 +0800

    token count includes ids

commit 5a7d312
Author: Georgi Gerganov <[email protected]>
Date:   Fri Dec 1 20:39:12 2023 +0200

    llama : avoid using "optional" keyword (ggerganov#4283)

commit d5a1cbd
Author: Georgi Gerganov <[email protected]>
Date:   Fri Dec 1 20:35:03 2023 +0200

    llama : support optional tensors (ggerganov#4283)

commit b220222
Author: Miwa / Ensan <[email protected]>
Date:   Sat Dec 2 03:19:45 2023 +0900

    swift : fix token_to_piece implementation (ggerganov#4278)

    * Fix token_to_piece implementation in Swift

    * Fix errors

commit 511f52c
Author: Jared Van Bortel <[email protected]>
Date:   Fri Dec 1 13:18:35 2023 -0500

    build : enable libstdc++ assertions for debug builds (ggerganov#4275)

commit 03562f3
Author: CausalLM <[email protected]>
Date:   Sat Dec 2 02:17:06 2023 +0800

    llama : support attention bias on LLaMA architecture (ggerganov#4283)

    * Support attention_bias on LLaMA architecture

    QKVO bias, should fix InternLM (ggerganov#3133) and works for LLaMAfied Qwen models (ggerganov#3743 (comment)).

    * check existence of qkvo bias while loading llama models

    Tested on LLaMA2, CUDA and CPU.

    * Update llama.cpp

commit 37c746d
Author: Shijie <[email protected]>
Date:   Sat Dec 2 02:16:31 2023 +0800

    llama : add Qwen support (ggerganov#4281)

    * enable qwen to llama.cpp

    * llama : do not GPU split bias tensors

    ---------

    Co-authored-by: Georgi Gerganov <[email protected]>

commit 880f579
Author: Georgi Gerganov <[email protected]>
Date:   Fri Dec 1 18:42:11 2023 +0200

    llama : fix integer overflow during quantization (ggerganov#4284)

    happens with multi-threaded quantization of Qwen-72B

    ggml-ci
@zhangjiekui
Copy link

zhangjiekui commented Dec 18, 2023

Yes, it can run, but the generated result doesn't seem very good. I tried the 14B gguf and got a lot of repetition, even runs of '\n\n\n' repeated dozens of times, so something must need to be corrected or improved. Thanks. Below is an example result:

Hello! I'm very glad to chat with you. Do you have any questions you'd like me to answer?
Thank you very much for the introduction! As an AI assistant, my main task is to help users answer questions, provide information, and complete various instruction tasks. If you have any questions or need help, please let me know at any time and I will do my best to support you. Is there anything you would like to know about, or any topic you would like to discuss? Thank you very much for the introduction! As an AI assistant, my main task is to help users answer questions, provide information, and complete various instruction tasks. If you have any questions or need help, please let me know at any time and I will do my best to support you. Is there anything you would like to know about, or any topic you would like to discuss?

@nlp4whp
Copy link

nlp4whp commented Dec 18, 2023

Yes, it can run, but the generated result doesn't seem very good. I tried the 14B gguf and got a lot of repetition, even runs of '\n\n\n' repeated dozens of times, so something must need to be corrected or improved. Thanks

If I add a suffix like \nAI: to the query (User:{query}\nAI:), it returns a normal result most of the time; I tried qwen-7B-chat + q4_0.
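
A tiny sketch of that workaround (an informal "User:/AI:" frame, not Qwen's official ChatML template):

```python
# Informal "User:/AI:" framing as described above -- not the official Qwen template.
def wrap_query(query: str) -> str:
    return f"User:{query}\nAI:"

print(wrap_query("What is the capital of France?"))
# User:What is the capital of France?
# AI:
```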

@riverzhou
Copy link

Yes, it can run, but the generated result doesn't seem very good. I tried the 14B gguf and got a lot of repetition, even runs of '\n\n\n' repeated dozens of times, so something must need to be corrected or improved. Thanks

Same problem.

@AnLoffredo
Copy link

I actually have a similar problem. I have \n set to return control, but the model will generate !!! constantly, even when not appropriate. It's weird because all three have the same logit value, and the correct punctuation usually follows.

Hey!!!, how are you!!!?

It's always in a set of three. I assumed it was something I was doing wrong but now I'm not entirely sure.

@hariji814
Copy link

python3 convert-hf-to-gguf.py --outfile qwen14b-chat-f16.gguf --outtype f16 ../../LLM/Qwen-14B-Chat/
Loading model: Qwen-14B-Chat
gguf: This GGUF file is for Little Endian only
Set model parameters
Set model tokenizer
gguf: Adding 151387 merge(s).
gguf: Setting special token type bos to 151643
gguf: Setting special token type eos to 151643
gguf: Setting special token type unk to 151643
Exporting model to 'qwen14b-chat-f16.gguf'
gguf: loading model part 'model-00001-of-00015.safetensors'
Traceback (most recent call last):
  File "/mnt/e/LLMInf/llama.cpp/convert-hf-to-gguf.py", line 1027, in <module>
    model_instance.write()
  File "/mnt/e/LLMInf/llama.cpp/convert-hf-to-gguf.py", line 126, in write
    self.write_tensors()
  File "/mnt/e/LLMInf/llama.cpp/convert-hf-to-gguf.py", line 925, in write_tensors
    model_kv = dict(self.get_tensors())
  File "/mnt/e/LLMInf/llama.cpp/convert-hf-to-gguf.py", line 60, in get_tensors
    ctx = cast(ContextManager[Any], safe_open(self.dir_model / part_name, framework="pt", device="cpu"))
RuntimeError: unable to mmap 2047396712 bytes from file <../../LLM/Qwen-14B-Chat/model-00001-of-00015.safetensors>: Cannot allocate memory (12)

@simonJJJ what can I do about this? My PC has 64 GB of RAM, which I think should be enough.

@hariji814
Copy link

python3 convert-hf-to-gguf.py --outfile qwen14b-chat-f16.gguf --outtype f16 ../../LLM/Qwen-14B-Chat/ ... RuntimeError: unable to mmap 2047396712 bytes from file <../../LLM/Qwen-14B-Chat/model-00001-of-00015.safetensors>: Cannot allocate memory (12). What can I do about this? My PC has 64 GB of RAM, which I think should be enough.

The Qwen-14B-Chat model comes from the ModelScope platform.
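
On the mmap error: a small diagnostic sketch, under the assumption that the failure comes from the OS rather than from the converter. It tries to memory-map the first shard directly, roughly the way safetensors' safe_open does; if this also fails with "Cannot allocate memory", the limit is likely a WSL2 memory cap or overcommit setting rather than a bug in convert-hf-to-gguf.py.

```python
# Diagnostic sketch only -- not a fix. Path copied from the traceback above.
import mmap
from pathlib import Path

shard = Path("../../LLM/Qwen-14B-Chat/model-00001-of-00015.safetensors")
with shard.open("rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)  # map the whole file read-only
    print(f"mapped {mm.size() / 1e9:.2f} GB without error")
    mm.close()
```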

mounta11n pushed a commit to mounta11n/plusplus-camall that referenced this pull request Dec 19, 2023
* enable qwen to llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov <[email protected]>
@yanshuaibupt
Copy link

I generated qwen14b-chat-f16.gguf with the script, but when I set Qwen as the backend in LocalAI, I got the following problem:
[image]

@renraeldab
Copy link

It can run without errors, but I think something is wrong with the chat completion: it always outputs a lot of "\n" no matter how I change the history messages. transformers.AutoModelForCausalLM.chat() for Qwen parses messages in its own way, and I am not sure whether Llama.create_chat_completion() handles this correctly.

llama_print_timings:        load time =     396.49 ms
llama_print_timings:      sample time =     548.76 ms /   717 runs   (    0.77 ms per token,  1306.59 tokens per second)
llama_print_timings: prompt eval time =     396.43 ms /    15 tokens (   26.43 ms per token,    37.84 tokens per second)
llama_print_timings:        eval time =   39471.61 ms /   716 runs   (   55.13 ms per token,    18.14 tokens per second)
llama_print_timings:       total time =   49647.80 ms /   731 tokens
INFO:     ::1:54012 - "POST /v1/chat/completions HTTP/1.1" 200 OK
{'id': 'chatcmpl-9cc8ef09-405a-4ac9-b1bc-4bf3eca02f16', 'object': 'chat.completion', 'created': 1711962634, 'model': 'Qwen-1_8B_Chat/ggml-model-Q4_K_M.gguf', 'choices': [{'index': 0, 'message': {'content': '    <br><br>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', 'role': 'assistant'}, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 15, 'completion_tokens': 716, 'total_tokens': 731}}

For comparison, this is the result of plain completion (the /v1/completions endpoint).

llama_print_timings:        load time =     396.49 ms
llama_print_timings:      sample time =      13.93 ms /    16 runs   (    0.87 ms per token,  1148.60 tokens per second)
llama_print_timings: prompt eval time =     160.45 ms /     4 tokens (   40.11 ms per token,    24.93 tokens per second)
llama_print_timings:        eval time =     761.05 ms /    15 runs   (   50.74 ms per token,    19.71 tokens per second)
llama_print_timings:       total time =    1149.10 ms /    19 tokens
INFO:     ::1:54086 - "POST /v1/completions HTTP/1.1" 200 OK
{'id': 'cmpl-c9a4c974-d8dc-48b3-82cf-8fd73e9fb1a2', 'object': 'text_completion', 'created': 1711963517, 'model': 'Qwen-1_8B_Chat/ggml-model-Q4_K_M.gguf', 'choices': [{'text': '\n\n\nHello! How may I assist you today?', 'index': 0, 'logprobs': None, 'finish_reason': 'length'}], 'usage': {'prompt_tokens': 5, 'completion_tokens': 16, 'total_tokens': 21}}
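
For what it's worth, here is a hedged sketch of calling the model through llama-cpp-python with the ChatML template forced. It assumes a llama-cpp-python build that accepts the chat_format argument and is only an illustration of the framing, not the poster's exact setup; without ChatML framing and the <|im_end|> stop token, Qwen chat models may well degenerate into the "\n" runs shown above.

```python
# Hedged sketch: force the ChatML chat format and stop on "<|im_end|>".
# Assumes a llama-cpp-python version that supports chat_format; the model path
# is taken from the log above.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen-1_8B_Chat/ggml-model-Q4_K_M.gguf",
    chat_format="chatml",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    stop=["<|im_end|>"],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```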

hodlen added a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
llama : restore prefix space in llama tokenizer (ggerganov#4081)

gguf : fix potential infinite loops while parsing (ggerganov#4100)

Co-authored-by: Bernhard Gstrein <[email protected]>

Respect tokenizer.ggml.add_bos_token value when tokenizing (ggerganov#4040)

* gguf-py: gguf-dump: Respect --no-tensor flag in JSON mode.

* Respect add_bos_token GGUF metadata value

* gguf-py: Try to fix SpecialVocab giving up too easily for the Nth time

llama : fix data units (ggerganov#4101)

* llama : fix data units

ggml-ci

* Revert "llama : fix data units"

This reverts commit f5feac8.

* llama : disambiguate data units

ggml-ci

cuda : get_row_rounding F32 (ggerganov#4095)

* Fix ggerganov#4017

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <[email protected]>

* Update ggml-cuda.cu

Co-authored-by: Jared Van Bortel <[email protected]>

---------

Co-authored-by: Jared Van Bortel <[email protected]>

finetune : zero the loraB initial vectors (ggerganov#4082)

* finetune : zero the loraB initial vectors

Without this, the first iteration is starting out far from the base model, instead of exactly on it.
Zeroing loraB is what the paper recommends. loralib also zeroes at least one of the init vector pairs
(though it departs from the paper in using a different distribution for the other vector, in some cases).

* tabs to spaces

* Use ggml_set_zero instead of adding a new function

finetune : speed-up ggml_compute_forward_out_prod_f32 via BLAS (ggerganov#4079)

* Remove logically superfluous assertions and order by dimension

* Use cblas_sgemm() to implement ggml_compute_forward_out_prod()

* Remove ggml_compute_forward_out_prod_use_blas(), fix compiling errors on cmake/zig, remove trailing whitespace

* Add openBLAS support for sgemm() in compute_forward_out_prod()

llama : add functions to get the model's metadata (ggerganov#4013)

* llama : add functions to get the model's metadata

* format -> std::to_string

* better documentation

train : move number of gpu layers argument parsing to common/train.cpp (ggerganov#4074)

- introduces help entry for the argument
 - cuts '--gpu-layers' form in order to simplify usage and documentation.

Signed-off-by: Jiri Podivin <[email protected]>
Co-authored-by: Jiri Podivin <[email protected]>

py : remove superfluous import statements (ggerganov#4076)

Signed-off-by: Jiri Podivin <[email protected]>
Co-authored-by: Jiri Podivin <[email protected]>

llava : fix compilation warning that fread return value is not used (ggerganov#4069)

common : improve yaml log escaping (ggerganov#4080)

* logging: improve escaping in yaml output

* logging: include review feedback

py : Falcon HF compatibility (ggerganov#4104)

Falcon HF compatibility

convert : use 'model' value if it exists. This allows karpathy/tinyllamas to load (ggerganov#4089)

Co-authored-by: Don Mahurin <@>

examples : add tokenize (ggerganov#4039)

tokenize : fix trailing whitespace

build : support ppc64le build for make and CMake (ggerganov#3963)

* build: support ppc64le build for make and CMake

* build: keep __POWER9_VECTOR__ ifdef and extend with __powerpc64__

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>

llama : increase max nodes (ggerganov#4115)

Clean up ggml-cuda.cu warnings when compiling with clang (for ROCM) (ggerganov#4124)

* ggml-cuda.cu: Clean up warnings when compiling with clang

* ggml-cuda.cu: Move static items into anonymous namespace

* ggml-cuda.cu: Fix use of namespace start macro

* Revert "ggml-cuda.cu: Fix use of namespace start macro"

This reverts commit 26c1149.

* Revert "ggml-cuda.cu: Move static items into anonymous namespace"

This reverts commit e29757e.

scripts : Remove missed baichuan convert script (ggerganov#4127)

tokenize example: Respect normal add BOS token behavior (ggerganov#4126)

Allow building with Makefile

gguf-py : export chat templates (ggerganov#4125)

* gguf-py : export chat templates

* llama.cpp : escape new lines in gguf kv info prints

* gguf-py : bump version

* gguf-py : check chat_template type

* gguf-py : initialize chat_template

gitignore : tokenize

common : comma should be semicolon (ggerganov#4137)

server : relay error messages (ggerganov#4131)

finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)

Revert "finetune : add --n-gpu-layers flag info to --help (ggerganov#4128)"

This reverts commit 05e8301.

speculative : fix prompt tokenization in speculative example (ggerganov#4025)

* Support special tokens and not adding BOS to prompt in speculative

* Adapt to new should_add_bos function

* Ensure tgt and dft have same add_bos setting

ci : add flake8 to github actions (python linting) (ggerganov#4129)

Disabled rules:

* E203 Whitespace before ':' - disabled because we often use 'C' Style where values are aligned

* E211 Whitespace before '(' (E211) - disabled because we often use 'C' Style where values are aligned

* E221 Multiple spaces before operator - disabled because we often use 'C' Style where values are aligned

* E225 Missing whitespace around operator - disabled because it's broken so often it seems like a standard

* E231 Missing whitespace after ',', ';', or ':' - disabled because we often use 'C' Style where values are aligned

* E241 Multiple spaces after ',' - disabled because we often use 'C' Style where values are aligned

* E251 Unexpected spaces around keyword / parameter equals - disabled because it's broken so often it seems like a standard

* E261 At least two spaces before inline comment - disabled because it's broken so often it seems like a standard

* E266 Too many leading '#' for block comment - sometimes used as "section" separator

* E501 Line too long - disabled because it's broken so often it seems like a standard

* E701 Multiple statements on one line (colon) - broken only in convert.py when defining abstract methods (we can use# noqa instead)

* E704 Multiple statements on one line - broken only in convert.py when defining abstract methods (we can use# noqa instead)

main : Add ChatML functionality to main example (ggerganov#4046)

Co-authored-by: Sebastian Cramond <[email protected]>

readme : update ROCm Windows instructions (ggerganov#4122)

* Update README.md

* Update README.md

Co-authored-by: Jared Van Bortel <[email protected]>

---------

Co-authored-by: Jared Van Bortel <[email protected]>

finetune - update readme to mention llama support only (ggerganov#4148)

stablelm : simplify + speedup generation (ggerganov#4153)

docs : add llama-star arch idea

examples : fix typo in parallel example doc comment (ggerganov#4181)

Signed-off-by: Daniel Bevenius <[email protected]>

readme : update hot topics

llama : KV cache view API + better KV cache management (ggerganov#4170)

* llama : keep track of used KV cells + better KV cache management

* llama : zero KV cache used upon clear

ggml-ci

* llama : allow exporting a view of the KV cache (ggerganov#4180)

* Allow exporting a view of the KV cache

* Allow dumping the sequences per cell in common

* Track max contiguous cells value and position as well

* Fix max contiguous empty cells index calculation

Make dump functions deal with lengths or sequences counts > 10 better

* Fix off by one error in dump_kv_cache_view

* Add doc comments for KV cache view functions

Eliminate cell sequence struct; use llama_seq_id directly

Minor cleanups

* common : add -dkvc arg for enabling kv cache dumps

---------

Co-authored-by: Kerfuffle <[email protected]>

Fix incorrect format strings and uninitialized variables. (ggerganov#4133)

* Fix incorrect format strings and uninitialized variables.

* Address comments

* Add the missing include statement

readme : use PATH for Windows ROCm (ggerganov#4195)

* Update README.md to use PATH for Windows ROCm

* Update README.md

* Update README.md

main.swift : fix eos checking (ggerganov#4197)

llama_token_eos(const struct llama_model *) is currently getting struct llama_context type variable context as a parameter.

convert : fix tensors using grad in some models (ggerganov#4173)

ggml-cuda : support stablelm rope (ggerganov#4156)

* ggml-cuda : support stablelm rope

* remove unused freq_base kernel parameter

* add n_dims parameter to llm_build_k_shift, default to n_rot via overload

* llama : fix llm_build_k_shift args

---------

Co-authored-by: Georgi Gerganov <[email protected]>

llama : set metal log callback correctly (ggerganov#4204)

server : OAI API compatibility (ggerganov#4198)

* Add openai-compatible POST /v1/chat/completions API endpoint to server example

* fix code style

* Update server README.md

* Improve server README.md

* Fix server.cpp code style according to review

* server : some style changes

* server : indentation

* server : enable special tokens during tokenization by default

* server : minor code style

* server : change random string generator

* straightforward /v1/models endpoint

---------

Co-authored-by: kir-gadjello <[email protected]>
Co-authored-by: Tobi Lütke <[email protected]>

readme : update hot topics

Update docs for yarn_ext_factor <0.0 as unspecified instead of NaN (ggerganov#4189)

llama : grammar `reserve` space in `decode_utf8` (ggerganov#4210)

* reserve space for codepoints

* improvement for the appended 0

scripts : Use mmap in torch load (ggerganov#4202)

* Use mmap in torch load, prefer .bin files when loading

* Revert .bin > .safetensors preference

metal : fix yarn (ggerganov#4220)

get the correct n_orig_ctx in metal

lookahead : add example for lookahead decoding (ggerganov#4207)

* lookahead : init

* lookahead : generate and store n-grams

* lookahead : use loop instead recursion to generate n-grams

* lookahead : initial working implementation

* lookahead : filter repeating n-grams

* lookahead : use deterministic init

* lookahead : add to Makefile

* lookahead : fix a bug in the seq_id of the lookahead tokens

* lookahead : add comments

---------

Co-authored-by: slaren <[email protected]>

readme : update hot topics

lookahead : support `-n -1` infinite generation

ggml : fix -Warray-bounds warning with gcc (ggerganov#4231)

examples : iOS example with swift ui (ggerganov#4159)

* copy to llama.cpp as subdir

* attempt enabling metal, fails

* ggml metal compiles!

* Update README.md

* initial conversion to new format, utf8 errors?

* bug fixes, but now has an invalid memory access :(

* added O3, now has insufficient memory access

* begin sync with master

* update to match latest code, new errors

* fixed it!

* fix for loop conditionals, increase result size

* fix current workflow errors

* attempt a llama.swiftui workflow

* Update .github/workflows/build.yml

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>

readme : add Amica to UI list (ggerganov#4230)

cmake : fix issue with version info not getting baked into LlamaConfig.cmake (ggerganov#3970)

* Split CPP generation from build-info query

* Remove blank lines

* Add BUILD_SHARED_LIBS option

ggml : re-enable BLAS for CPU when src0 != F32 + remove redundant full offload checks in llama.cpp (ggerganov#4240)

* ggml : use blas even if src0 is not F32

* llama : use n_threads_batch only when n_tokens >= 32

ggml-ci

* llama : revert n_threads_batch logic

ggml-ci

ggml : restore abort() in GGML_ASSERT (ggerganov#4242)

readme : add FreeChat (ggerganov#4248)

examples : add readme files

py : fix oai proxy (ggerganov#3972)

* fix oai proxy

fix generation not stoped while bot stop talking in chat mode

fix possible `slot_id` not exist

response for cors (and pre flight)

* oai proxy: workaround for some client (such as Chatbox)

* use stop as separator to replace hardcoded `\n`

llama : fix typical sampling (ggerganov#4261)

Typical sampling was broken because after copying new_candidates into canditates, the "sorted" bool is left at "true", but the new data is no longer sorted according to probability. Patch to set "sorted" to false.

Test: Generating with temp=0.0001 (approx. argmax)  should generate the same sequence at typical>=1.0 and typical=0.9999 (approx. disabled, but enters the typical sampling codepath).

convert.py : fix llama/llama2 conversion due to vocab_size=-1 (ggerganov#4258)

llama : fix alignment of general.name in print meta (ggerganov#4254)

* llama: fix alignment of general.name in print meta

This commit fixes the alignment of the general.name field in the
llm_load_print_meta function.

Currently the output looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name   = LLaMA v2
```
And with this commit it looks like this:
```console
llm_load_print_meta: model ftype      = mostly Q4_0
llm_load_print_meta: model params     = 13.02 B
llm_load_print_meta: model size       = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name     = LLaMA v2
```

Signed-off-by: Daniel Bevenius <[email protected]>

* llama: fix alignment of special tokens

Signed-off-by: Daniel Bevenius <[email protected]>

---------

Signed-off-by: Daniel Bevenius <[email protected]>

readme : fix typo (ggerganov#4253)

llama.cpp uses GitHub Actions, not Gitlab Actions.

cmake : fix the metal file foder path (ggerganov#4217)

batched.swift : update README.md (ggerganov#4214)

docs: update how to run

docker : add finetune option (ggerganov#4211)

readme : fix (ggerganov#4135)

* fix: readme

* chore: resolve comments

* chore: resolve comments

main : pass LOG_TEE callback to llama.cpp log (ggerganov#4033)

* main : Call llama_log_set to use LOG_TEE

* tabs to spaces

llava : ShareGPT4V compatibility (vision encoder only loading) (ggerganov#4172)

* ShareGPT4 compatibility (vision encoder only loading)

Load only a CLIP vision encoder (as supplied by ShareGPT finetunes)
Corrects the argument parsing for --img_mean and --img_std (which were previously not parsed but attempted to access)
Defines defaults for img_mean and img_std which are equal to the llava 1.5 CLIP encoder, so you do not have to provide them

* Update convert-image-encoder-to-gguf.py

build : fix build info generation and cleanup Makefile (ggerganov#3920)

* cmake : fix joining of REAL_GIT_DIR

* fix includes with help from include-what-you-use

* make : remove unneeded deps and add test-rope target

* fix C includes in C++ source files

* Revert "fix includes with help from include-what-you-use"

This reverts commit 635e9fa.

make : fix Apple clang determination bug (ggerganov#4272)

Co-authored-by: Will Findley <[email protected]>

server : add single-client multi-prompt support (ggerganov#4232)

* * add multiprompt support

* * cleanup

* * more cleanup

* * remove atomicity of id_gen, and change lock_guard to unique_lock on completion requests

* * remove all references to mutex_multitasks

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <[email protected]>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <[email protected]>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <[email protected]>

* Update examples/server/server.cpp

Co-authored-by: Jared Van Bortel <[email protected]>

* * change to set

---------

Co-authored-by: Jared Van Bortel <[email protected]>

server : add --log-disable to disable logging to file (ggerganov#4260)

* * add --log-disable to disable logging to file in the server example

* * typo fix

ggml : add ggml_soft_max_ext (ggerganov#4256)

* metal : implement soft_max_ext

* cuda : implement soft_max_ext

* ggml : implement soft_max_ext (CPU)

* batched-bench : print threads

ggml-ci

* metal : simplify soft_max encoding

ggml-ci

* cuda : use 512 threads for soft_max instead of 32

* ggml : update soft max cpu

* cuda : do warp-based block reduce

* cuda : increase max block size to 1024

* cuda : fix warp reduction initialization of shared mem

* metal : warp-based reduction for soft max kernel

* metal : warp-based reduce for rms_norm

* metal : simplify soft max kernel

ggml-ci

* alloc : fix build with debug

py : add requirements file for convert-hf-to-gguf.py (ggerganov#4277)

This commit adds a requirements file for the convert-hf-to-gguf.py
script, and also add the torch and transformers packages to it.

The motivation for this is that currently running convert-hf-to-gguf.py
will produce the following error:
```console
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt
Collecting numpy==1.24.4
Collecting sentencepiece==0.1.98
Collecting gguf>=0.1.0
Installing collected packages: sentencepiece, numpy, gguf
Successfully installed gguf-0.5.1 numpy-1.24.4 sentencepiece-0.1.98

(venv) $ python convert-hf-to-gguf.py --help
Traceback (most recent call last):
  File "llama.cpp/convert-hf-to-gguf.py", line 16, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
```
With this commit, and using requirements-hf-to-gguf.txt instead of
requirements.txt, the script can be run and shows the help output.

Signed-off-by: Daniel Bevenius <[email protected]>

llama : fix integer overflow during quantization (ggerganov#4284)

happens with multi-threaded quantization of Qwen-72B

ggml-ci

llama : add Qwen support (ggerganov#4281)

* enable qwen to llama.cpp

* llama : do not GPU split bias tensors

---------

Co-authored-by: Georgi Gerganov <[email protected]>

llama : support attention bias on LLaMA architecture (ggerganov#4283)

* Support attention_bias on LLaMA architecture

QKVO bias, should fix InternLM (ggerganov#3133) and works for LLaMAfied Qwen models (ggerganov#3743 (comment)).

* check existence of qkvo bias while loading llama models

Tested on LLaMA2, CUDA and CPU.

* Update llama.cpp

build : enable libstdc++ assertions for debug builds (ggerganov#4275)

swift : fix token_to_piece implementation (ggerganov#4278)

* Fix token_to_piece implementation in Swift

* Fix errors

llama : support optional tensors (ggerganov#4283)

llama : avoid using "optional" keyword (ggerganov#4283)

llama : pad KV cache size (ggerganov#4280)

* llama : pad KV cache size to 32

* metal : try to improve batched decoding

py : add grammar to oai like api (ggerganov#4294)

server : fix OpenAI API `stop` field to be optional (ggerganov#4299)

(cherry picked from commit Mozilla-Ocho/llamafile@e8c92bc)

ggml : fix soft max out-of-bounds access (ggerganov#4307)

ggml-ci

ggml : reuse ggml_get_n_tasks() in ggml_graph_plan() (ggerganov#4308)

* ggml : fix soft max out-of-bounds access

ggml-ci

* ggml : reuse ggml_get_n_tasks() in ggml_graph_plan()

ggml-ci

grammar-parser : fix typo (ggerganov#4318)

preceeding -> preceding

swift : fix prompt tokenization logic (ggerganov#4321)

swift : fix concatenation method to avoid invalid UTF8 stringfication (ggerganov#4325)

simple : update error message for KV cache check (ggerganov#4324)

This commit updates the error message that is printed when the
KV cache is not big enough to hold all the prompt and generated
tokens. Specifically it removes the reference to n_parallel and
replaces it with n_len.

Signed-off-by: Daniel Bevenius <[email protected]>

swift : revert compiler checks for swift package (ggerganov#4332)

sampling : custom samplers order (ggerganov#4285)

* Samplers sequence order w parameter

* Cleaned commented code

* Fixed formatting

* Rewrote with unordered_map

* Revert and rewrite, too many problems and safeguards would be needed

* Fixed code style

* Code style fixes according to review

* More readable samplers input string, fixed help

* Style fix in sampler_queue

* Formatting fixes

* Fixing whitespaces

llama : allow overriding GGUF metadata when loading model (ggerganov#4092)

* feat: Allow overriding GGUF metadata when loading model

* Fix the one time GCC is stricter than clang about something

* Step1

* Refactor... basically everything!

* Nuke obsolete GetArrayLen struct

* simplify std::string specialization

* Various cleanups

Add informational output when overrides are applied

Warn user when an override with the wrong type is specified

* Fix broken logic for parsing bool KV overrides
Fix issue where overrides didn't apply when key missing in GGUF metadata
Resolve merge changes

* llama : rearrange model params

* Update new GET_KEY call

Add note that metadata KV overrides aren't reflected in initial metadata KV info dump

---------

Co-authored-by: cebtenzzre <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>

grammar : pre-computed pieces + reserve mem + less string copies (ggerganov#4330)

* reserve space for codepoints

* improvement for the appended 0

* used precomputed token text for grammar sample

* reserve canidates_decoded

* reserve canidates_grammar

* remove candidates_decoded

* Revert "remove candidates_decoded"

This reverts commit 3773328.

* changed decode_utf8 to take src by ref

speculative : support `--color` (ggerganov#4343)

* speculative: add some colors

* minor : add braces

---------

Co-authored-by: Georgi Gerganov <[email protected]>

common : fix compile warning

server : recognize cache_prompt parameter in OAI API (ggerganov#4347)

train : fix ggerganov#4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (ggerganov#4351)

On commit b1108 (44c117f) xaedes added

    ggml_allocr * alloc = NULL;

    ... (many lines in between)

    if (alloc) {
        ggml_allocr_free(alloc);
    }

Which is correct, but it's easy to lose context after many lines in between.

On commit b1287 (0e76a899) xaedes made a big change. From here on, alloc is freed eagerly.

    alloc = ggml_allocr_new(...)
    ... (short lines of code)
    ggml_allocr_free(alloc)

This happens a few times, but alloc is never set to NULL, and many lines below,
we still have

    if (alloc) {
        ggml_allocr_free(alloc);
    }

which causes a double-free.

llama : per-layer KV cache + quantum K cache (ggerganov#4309)

* per-layer KV

* remove unnecessary copies

* less code duplication, offload k and v separately

* llama : offload KV cache per-layer

* llama : offload K shift tensors

* llama : offload for rest of the model arches

* llama : enable offload debug temporarily

* llama : keep the KV related layers on the device

* llama : remove mirrors, perform Device -> Host when partial offload

* common : add command-line arg to disable KV cache offloading

* llama : update session save/load

* llama : support quantum K cache (ggerganov#4312)

* llama : support quantum K cache (wip)

* metal : add F32 -> Q8_0 copy kernel

* cuda : add F32 -> Q8_0 copy kernel

ggml-ci

* cuda : use mmv kernel for quantum cache ops

* llama : pass KV cache type through API

* llama : fix build

ggml-ci

* metal : add F32 -> Q4_0 copy kernel

* metal : add F32 -> Q4_1 copy kernel

* cuda : wip

* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels

* llama-bench : support type_k/type_v

* metal : use mm kernel only for quantum KV cache

* cuda : add comment

* llama : remove memory_f16 and kv_f16 flags

---------

Co-authored-by: slaren <[email protected]>

* readme : add API change notice

---------

Co-authored-by: slaren <[email protected]>

sync : ggml (new ops, tests, backend, etc.) (ggerganov#4359)

* sync : ggml (part 1)

* sync : ggml (part 2, CUDA)

* sync : ggml (part 3, Metal)

* ggml : build fixes

ggml-ci

* cuda : restore lost changes

* cuda : restore lost changes (StableLM rope)

* cmake : enable separable compilation for CUDA

ggml-ci

* ggml-cuda : remove device side dequantize

* Revert "cmake : enable separable compilation for CUDA"

This reverts commit 09e35d0.

* cuda : remove assert for rope

* tests : add test-backend-ops

* ggml : fix bug in ggml_concat

* ggml : restore `ggml_get_n_tasks()` logic in `ggml_graph_plan()`

* ci : try to fix macOS

* ggml-backend : remove backend self-registration

* ci : disable Metal for macOS cmake build

ggml-ci

* metal : fix "supports family" call

* metal : fix assert

* metal : print resource path

ggml-ci

---------

Co-authored-by: slaren <[email protected]>

grammar : revert the replacement of llama_token_to_piece with id_to_token (ggerganov#4396)

Update README.md (ggerganov#4388)

Fix small typo.

ggml : increased GGML_MAX_PARAMS to allow finetuning of 70b models (ggerganov#4424)

server : fix local model name in server (ggerganov#4420)

llama : document logits_all deprecation (ggerganov#4418)

llama_context_params.logits_all is a parameter for controlling
llama_eval. This documents that logits_all should not be used with
llama_decode and llama_batch.

build : target Windows 8 for standard mingw-w64 (ggerganov#4405)

* build : target Windows 8 for standard mingw-w64

* make : fix missing console.o deps

This was causing a link error with `make all` on Windows.

english : use `typos` to fix comments and logs (ggerganov#4354)

server : tweak default sampling parameters (ggerganov#4367)

* Set a more typical Top P setting as the default

* Update temp max

llama : add Mixtral support (ggerganov#4406)

* convert : support Mixtral as LLAMA arch

* convert : fix n_ff typo

* llama : model loading

* ggml : sync latest ggml_mul_mat_id

* llama : update graph to support MoE

* llama : fix cur -> cur_expert

* llama : first working version

* llama : fix expert weighting in the FFN

* ggml : ggml_get_rows support 2D indexing [n_tokens, n_experts] (cpu only)

* ggml : add n_as argument to ggml_mul_mat_id

* ggml : fix ggml_get_rows to take into account ne02 / ne11

* metal : add more general support for ggml_get_rows + tests

* llama : add basic support for offloading moe with CUDA

* metal : add/mul/div use general kernel when src1 not cont

* metal : reduce the kernel launches for ggml_mul_mat_id

* ggml : get_rows : support non-contiguos tensors with gaps, generalize up to 3D

* ggml : update get_rows f16 and q

* cuda : support non-contiguous src1 in get_rows

* llama : offload missing ffn_moe_silu

* metal : fix ggml_get_rows to work with non-cont src1

* metal : add indirect mat-vec kernels for all quantization types

* llama : do not quantize expert gating tensors

* llama : add n_expert and n_expert_used to hparams + change quants

* test-backend-ops : add moe test

* cuda : fix get_rows when ncols is odd

* convert : determine n_ctx correctly

* metal : fix ggml_mul_mat_id for F32

* test-backend-ops : make experts more evenly probable (test_moe)

* test-backend-ops : cleanup, add moe test for batches

* test-backend-ops : add cpy from f32 -> all types test

* test-backend-ops : fix dequantize block offset

* llama : fix hard-coded number of experts

* test-backend-ops : simplify and disable slow tests to avoid CI timeout

* test-backend-ops : disable MOE test with thread sanitizer

* cuda : fix mul_mat_id with multi gpu

* convert : use 1e6 rope_freq_base for mixtral

* convert : fix style

* convert : support safetensors format

* gguf-py : bump version

* metal : add cpy f16 -> f32 kernel

* metal : fix binary ops for ne10 % 4 != 0

* test-backend-ops : add one more sum_rows test

* ggml : do not use BLAS with ggml_mul_mat_id

* convert-hf : support for mixtral-instruct (ggerganov#4428)

* convert : typo fix, add additional hyperparameters, use LLaMA arch for Mixtral-instruct

* convert : use sentencepiece tokenizer for Mixtral-instruct

* convert : make flake8 happy

* metal : fix soft_max kernels

ref: ggerganov/ggml@1914017

* metal : limit kernels to not use more than the allowed threads

---------

Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: Radek Pilar <[email protected]>