
Merge pull request #6260 from oobabooga/dev
Merge dev branch
oobabooga authored Jul 23, 2024
2 parents 0315122 + 3ee6822 commit d1115f1
Showing 32 changed files with 602 additions and 321 deletions.
44 changes: 22 additions & 22 deletions README.md
@@ -204,16 +204,16 @@ List of command-line flags
usage: server.py [-h] [--multi-user] [--character CHARACTER] [--model MODEL] [--lora LORA [LORA ...]] [--model-dir MODEL_DIR] [--lora-dir LORA_DIR] [--model-menu] [--settings SETTINGS]
[--extensions EXTENSIONS [EXTENSIONS ...]] [--verbose] [--chat-buttons] [--idle-timeout IDLE_TIMEOUT] [--loader LOADER] [--cpu] [--auto-devices]
[--gpu-memory GPU_MEMORY [GPU_MEMORY ...]] [--cpu-memory CPU_MEMORY] [--disk] [--disk-cache-dir DISK_CACHE_DIR] [--load-in-8bit] [--bf16] [--no-cache] [--trust-remote-code]
- [--force-safetensors] [--no_use_fast] [--use_flash_attention_2] [--load-in-4bit] [--use_double_quant] [--compute_dtype COMPUTE_DTYPE] [--quant_type QUANT_TYPE] [--flash-attn]
- [--tensorcores] [--n_ctx N_CTX] [--threads THREADS] [--threads-batch THREADS_BATCH] [--no_mul_mat_q] [--n_batch N_BATCH] [--no-mmap] [--mlock] [--n-gpu-layers N_GPU_LAYERS]
- [--tensor_split TENSOR_SPLIT] [--numa] [--logits_all] [--no_offload_kqv] [--cache-capacity CACHE_CAPACITY] [--row_split] [--streaming-llm] [--attention-sink-size ATTENTION_SINK_SIZE]
- [--gpu-split GPU_SPLIT] [--autosplit] [--max_seq_len MAX_SEQ_LEN] [--cfg-cache] [--no_flash_attn] [--cache_8bit] [--cache_4bit] [--num_experts_per_token NUM_EXPERTS_PER_TOKEN]
- [--triton] [--no_inject_fused_mlp] [--no_use_cuda_fp16] [--desc_act] [--disable_exllama] [--disable_exllamav2] [--wbits WBITS] [--groupsize GROUPSIZE] [--no_inject_fused_attention]
- [--hqq-backend HQQ_BACKEND] [--deepspeed] [--nvme-offload-dir NVME_OFFLOAD_DIR] [--local_rank LOCAL_RANK] [--alpha_value ALPHA_VALUE] [--rope_freq_base ROPE_FREQ_BASE]
- [--compress_pos_emb COMPRESS_POS_EMB] [--listen] [--listen-port LISTEN_PORT] [--listen-host LISTEN_HOST] [--share] [--auto-launch] [--gradio-auth GRADIO_AUTH]
- [--gradio-auth-path GRADIO_AUTH_PATH] [--ssl-keyfile SSL_KEYFILE] [--ssl-certfile SSL_CERTFILE] [--api] [--public-api] [--public-api-id PUBLIC_API_ID] [--api-port API_PORT]
- [--api-key API_KEY] [--admin-key ADMIN_KEY] [--nowebui] [--multimodal-pipeline MULTIMODAL_PIPELINE] [--model_type MODEL_TYPE] [--pre_layer PRE_LAYER [PRE_LAYER ...]]
- [--checkpoint CHECKPOINT] [--monkey-patch]
+ [--force-safetensors] [--no_use_fast] [--use_flash_attention_2] [--use_eager_attention] [--load-in-4bit] [--use_double_quant] [--compute_dtype COMPUTE_DTYPE] [--quant_type QUANT_TYPE]
+ [--flash-attn] [--tensorcores] [--n_ctx N_CTX] [--threads THREADS] [--threads-batch THREADS_BATCH] [--no_mul_mat_q] [--n_batch N_BATCH] [--no-mmap] [--mlock]
+ [--n-gpu-layers N_GPU_LAYERS] [--tensor_split TENSOR_SPLIT] [--numa] [--logits_all] [--no_offload_kqv] [--cache-capacity CACHE_CAPACITY] [--row_split] [--streaming-llm]
+ [--attention-sink-size ATTENTION_SINK_SIZE] [--gpu-split GPU_SPLIT] [--autosplit] [--max_seq_len MAX_SEQ_LEN] [--cfg-cache] [--no_flash_attn] [--no_xformers] [--no_sdpa]
+ [--cache_8bit] [--cache_4bit] [--num_experts_per_token NUM_EXPERTS_PER_TOKEN] [--triton] [--no_inject_fused_mlp] [--no_use_cuda_fp16] [--desc_act] [--disable_exllama]
+ [--disable_exllamav2] [--wbits WBITS] [--groupsize GROUPSIZE] [--no_inject_fused_attention] [--hqq-backend HQQ_BACKEND] [--cpp-runner] [--deepspeed]
+ [--nvme-offload-dir NVME_OFFLOAD_DIR] [--local_rank LOCAL_RANK] [--alpha_value ALPHA_VALUE] [--rope_freq_base ROPE_FREQ_BASE] [--compress_pos_emb COMPRESS_POS_EMB] [--listen]
+ [--listen-port LISTEN_PORT] [--listen-host LISTEN_HOST] [--share] [--auto-launch] [--gradio-auth GRADIO_AUTH] [--gradio-auth-path GRADIO_AUTH_PATH] [--ssl-keyfile SSL_KEYFILE]
+ [--ssl-certfile SSL_CERTFILE] [--subpath SUBPATH] [--api] [--public-api] [--public-api-id PUBLIC_API_ID] [--api-port API_PORT] [--api-key API_KEY] [--admin-key ADMIN_KEY] [--nowebui]
+ [--multimodal-pipeline MULTIMODAL_PIPELINE] [--model_type MODEL_TYPE] [--pre_layer PRE_LAYER [PRE_LAYER ...]] [--checkpoint CHECKPOINT] [--monkey-patch]
Text generation web UI
@@ -254,6 +254,7 @@ Transformers/Accelerate:
--force-safetensors Set use_safetensors=True while loading the model. This prevents arbitrary code execution.
--no_use_fast Set use_fast=False while loading the tokenizer (it's True by default). Use this if you have any problems related to use_fast.
--use_flash_attention_2 Set use_flash_attention_2=True while loading the model.
+ --use_eager_attention Set attn_implementation=eager while loading the model.
bitsandbytes 4-bit:
--load-in-4bit Load the model with 4-bit precision (using bitsandbytes).
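
For context on the new --use_eager_attention flag above: it sets attn_implementation=eager at model load time. A minimal Transformers sketch of what that corresponds to, with a placeholder model name that is not part of this diff:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # placeholder model, for illustration only

# --use_eager_attention: attn_implementation="eager" selects the plain
# PyTorch attention path instead of fused kernels such as SDPA or FlashAttention.
model = AutoModelForCausalLM.from_pretrained(model_name, attn_implementation="eager")

# --no_use_fast (documented above) similarly maps to use_fast=False on the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)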
@@ -263,7 +264,7 @@ bitsandbytes 4-bit:
llama.cpp:
--flash-attn Use flash-attention.
- --tensorcores Use llama-cpp-python compiled with tensor cores support. This increases performance on RTX cards. NVIDIA only.
+ --tensorcores NVIDIA only: use llama-cpp-python compiled with tensor cores support. This may increase performance on newer cards.
--n_ctx N_CTX Size of the prompt context.
--threads THREADS Number of threads to use.
--threads-batch THREADS_BATCH Number of threads to use for batches/prompt processing.
@@ -272,7 +273,7 @@ llama.cpp:
--no-mmap Prevent mmap from being used.
--mlock Force the system to keep the model in RAM.
--n-gpu-layers N_GPU_LAYERS Number of layers to offload to the GPU.
- --tensor_split TENSOR_SPLIT Split the model across multiple GPUs. Comma-separated list of proportions. Example: 18,17.
+ --tensor_split TENSOR_SPLIT Split the model across multiple GPUs. Comma-separated list of proportions. Example: 60,40.
--numa Activate NUMA task allocation for llama.cpp.
--logits_all Needs to be set for perplexity evaluation to work. Otherwise, ignore it, as it makes prompt processing slower.
--no_offload_kqv Do not offload the K, Q, V to the GPU. This saves VRAM but reduces the performance.
@@ -287,6 +288,8 @@ ExLlamaV2:
--max_seq_len MAX_SEQ_LEN Maximum sequence length.
--cfg-cache ExLlamav2_HF: Create an additional cache for CFG negative prompts. Necessary to use CFG with that loader.
--no_flash_attn Force flash-attention to not be used.
+ --no_xformers Force xformers to not be used.
+ --no_sdpa Force Torch SDPA to not be used.
--cache_8bit Use 8-bit cache to save VRAM.
--cache_4bit Use Q4 cache to save VRAM.
--num_experts_per_token NUM_EXPERTS_PER_TOKEN Number of experts to use for generation. Applies to MoE models like Mixtral.
@@ -307,6 +310,9 @@ AutoAWQ:
HQQ:
--hqq-backend HQQ_BACKEND Backend for the HQQ loader. Valid options: PYTORCH, PYTORCH_COMPILE, ATEN.
+ TensorRT-LLM:
+ --cpp-runner Use the ModelRunnerCpp runner, which is faster than the default ModelRunner but doesn't support streaming yet.
DeepSpeed:
--deepspeed Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.
--nvme-offload-dir NVME_OFFLOAD_DIR DeepSpeed: Directory to use for ZeRO-3 NVME offloading.
@@ -327,6 +333,7 @@ Gradio:
--gradio-auth-path GRADIO_AUTH_PATH Set the Gradio authentication file path. The file should contain one or more user:password pairs in the same format as above.
--ssl-keyfile SSL_KEYFILE The path to the SSL certificate key file.
--ssl-certfile SSL_CERTFILE The path to the SSL certificate cert file.
+ --subpath SUBPATH Customize the subpath for Gradio; use with a reverse proxy.
API:
--api Enable the API extension.
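
As a usage sketch for the API flags: the API extension exposes an OpenAI-compatible endpoint. The port (5000), path, and header below reflect the project's documented defaults, but they are assumptions as far as this diff is concerned:

import requests

# Assumes the server was launched with --api; 5000 is the default --api-port.
url = "http://127.0.0.1:5000/v1/chat/completions"
headers = {"Authorization": "Bearer YOUR-API-KEY"}  # only needed if --api-key is set
payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

response = requests.post(url, json=payload, headers=headers)
print(response.json()["choices"][0]["message"]["content"])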
@@ -392,18 +399,11 @@ Run `python download-model.py --help` to see all the options.

https://colab.research.google.com/github/oobabooga/text-generation-webui/blob/main/Colab-TextGen-GPU.ipynb

- ## Acknowledgment
-
- In August 2023, [Andreessen Horowitz](https://a16z.com/) (a16z) provided a generous grant to encourage and support my independent work on this project. I am **extremely** grateful for their trust and recognition.
-
- ## Links
-
- #### Community
+ ## Community

* Subreddit: https://www.reddit.com/r/oobabooga/
* Discord: https://discord.gg/jwZCF2dPQN

- #### Support
+ ## Acknowledgment

- * ko-fi: https://ko-fi.com/oobabooga
- * GitHub Sponsors: https://github.com/sponsors/oobabooga
+ In August 2023, [Andreessen Horowitz](https://a16z.com/) (a16z) provided a generous grant to encourage and support my independent work on this project. I am **extremely** grateful for their trust and recognition.
1 change: 1 addition & 0 deletions css/chat_style-TheEncrypted777.css
@@ -90,6 +90,7 @@
line-height: 1.428571429 !important;
color: rgb(243 244 246) !important;
text-shadow: 2px 2px 2px rgb(0 0 0);
+     font-weight: 500;
}

.message-body p em {
3 changes: 2 additions & 1 deletion css/chat_style-cai-chat.css
@@ -46,6 +46,7 @@
.message-body p {
font-size: 15px !important;
line-height: 22.5px !important;
+     font-weight: 500;
}

.message-body p, .chat .message-body ul, .chat .message-body ol {
@@ -59,4 +60,4 @@
.message-body p em {
color: rgb(110 110 110) !important;
font-weight: 500;
- }
\ No newline at end of file
+ }
1 change: 1 addition & 0 deletions css/chat_style-messenger.css
@@ -88,6 +88,7 @@
margin-bottom: 0 !important;
font-size: 15px !important;
line-height: 1.428571429 !important;
+     font-weight: 500;
}

.dark .message-body p em {
3 changes: 2 additions & 1 deletion css/chat_style-wpp.css
@@ -44,6 +44,7 @@
margin-bottom: 0 !important;
font-size: 15px !important;
line-height: 1.428571429 !important;
+     font-weight: 500;
}

.dark .message-body p em {
@@ -52,4 +53,4 @@

.message-body p em {
color: rgb(110 110 110) !important;
- }
\ No newline at end of file
+ }
10 changes: 10 additions & 0 deletions css/highlightjs/github.min.css

Some generated files are not rendered by default.

21 changes: 16 additions & 5 deletions css/main.css
@@ -62,10 +62,6 @@ ol li p, ul li p {
border: 0;
}

- .gradio-container-3-18-0 .prose * h1, h2, h3, h4 {
-     color: white;
- }

.gradio-container {
max-width: 100% !important;
padding-top: 0 !important;
@@ -378,6 +374,10 @@ div.svelte-362y77>*, div.svelte-362y77>.form>* {
}
}

+ .chat-parent .prose {
+     visibility: visible;
+ }

.old-ui .chat-parent {
height: calc(100dvh - 192px - var(--header-height) - var(--input-delta));
margin-bottom: var(--input-delta) !important;
@@ -399,6 +399,13 @@
padding-bottom: 15px !important;
}

+ .message-body h1,
+ .message-body h2,
+ .message-body h3,
+ .message-body h4 {
+     color: var(--body-text-color);
+ }

.message-body li {
list-style-position: outside;
}
@@ -447,6 +454,11 @@ div.svelte-362y77>*, div.svelte-362y77>.form>* {
border-radius: 5px;
font-size: 82%;
padding: 1px 3px;
+     background: white !important;
+     color: #1f2328;
+ }
+
+ .dark .message-body code {
background: #0d1117 !important;
color: rgb(201 209 217);
}
@@ -796,4 +808,3 @@ div.svelte-362y77>*, div.svelte-362y77>.form>* {
max-height: 300px;
}
}

9 changes: 9 additions & 0 deletions js/dark_theme.js
@@ -0,0 +1,9 @@
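// Toggles the 'dark' class on <body> and swaps the highlight.js stylesheet
// between the GitHub light and dark themes so code blocks match the UI.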
function toggleDarkMode() {
document.body.classList.toggle("dark");
var currentCSS = document.getElementById("highlight-css");
if (currentCSS.getAttribute("href") === "file/css/highlightjs/github-dark.min.css") {
currentCSS.setAttribute("href", "file/css/highlightjs/github.min.css");
} else {
currentCSS.setAttribute("href", "file/css/highlightjs/github-dark.min.css");
}
}
9 changes: 3 additions & 6 deletions js/main.js
@@ -445,14 +445,12 @@ function updateCssProperties() {

// Check if the chat container is visible
if (chatContainer.clientHeight > 0) {

// Calculate new chat height and adjust CSS properties
var numericHeight = chatContainer.parentNode.clientHeight - chatInputHeight + 40 - 100;
if (document.getElementById("chat-tab").style.paddingBottom != "") {
numericHeight += 20;
}
-     const newChatHeight = `${numericHeight}px`;

+     const newChatHeight = `${numericHeight}px`;
document.documentElement.style.setProperty("--chat-height", newChatHeight);
document.documentElement.style.setProperty("--input-delta", `${chatInputHeight - 40}px`);

@@ -463,15 +461,14 @@

// Adjust scrollTop based on input height change
if (chatInputHeight !== currentChatInputHeight) {
-     chatContainer.scrollTop += chatInputHeight > currentChatInputHeight ? chatInputHeight : -chatInputHeight + 40;
+     chatContainer.scrollTop += chatInputHeight - currentChatInputHeight;
currentChatInputHeight = chatInputHeight;
}
}
}

// Observe textarea size changes and call update function
- new ResizeObserver(updateCssProperties)
-     .observe(document.querySelector("#chat-input textarea"));
+ new ResizeObserver(updateCssProperties).observe(document.querySelector("#chat-input textarea"));

// Handle changes in window size
window.addEventListener("resize", updateCssProperties);
2 changes: 2 additions & 0 deletions modules/block_requests.py
@@ -3,6 +3,7 @@

import requests

+ from modules import shared
from modules.logging_colors import logger

original_open = open
@@ -54,6 +55,7 @@ def my_open(*args, **kwargs):
'\n <script src="file/js/katex/auto-render.min.js"></script>'
'\n <script src="file/js/highlightjs/highlight.min.js"></script>'
'\n <script src="file/js/highlightjs/highlightjs-copy.min.js"></script>'
+ f'\n    <link id="highlight-css" rel="stylesheet" href="file/css/highlightjs/{"github-dark" if shared.settings["dark_theme"] else "github"}.min.css">'
'\n <script>hljs.addPlugin(new CopyButtonPlugin());</script>'
'\n </head>'
)
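
The added <link> line above chooses the highlight.js theme from the dark_theme setting. A standalone Python sketch of just that selection logic (the hard-coded dark_theme value is illustrative; in the app it comes from shared.settings["dark_theme"]):

dark_theme = True  # stands in for shared.settings["dark_theme"]
theme = "github-dark" if dark_theme else "github"
link = f'<link id="highlight-css" rel="stylesheet" href="file/css/highlightjs/{theme}.min.css">'
print(link)  # ends in github-dark.min.css when dark_theme is True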
