From 761e821f3affb54a48b6d91ba836ba3fec30c6b0 Mon Sep 17 00:00:00 2001
From: Charles Packer
Date: Thu, 30 Nov 2023 17:52:32 -0800
Subject: [PATCH] Update local_llm.md (#542)

---
 docs/local_llm.md | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/docs/local_llm.md b/docs/local_llm.md
index 19b6e26e39..4f2fc61e4a 100644
--- a/docs/local_llm.md
+++ b/docs/local_llm.md
@@ -1,5 +1,3 @@
-## Using MemGPT with local LLMs
-
 !!! warning "Need help?"
 
     If you need help visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.
@@ -12,16 +10,22 @@
 
 Make sure to check the [local LLM troubleshooting page](../local_llm_faq) to see common issues before raising a new issue or posting on Discord.
 
-!!! warning "Recommended LLMs / models"
-
-    To see a list of recommended LLMs to use with MemGPT, visit our [Discord server](https://discord.gg/9GEQrxmVyE) and check the #model-chat channel.
-
 ### Installing dependencies
 
 To install dependencies required for running local models, run:
-```
+```sh
 pip install 'pymemgpt[local]'
 ```
+If you installed from source (`git clone` then `pip install -e .`), do:
+```sh
+pip install -e '.[local]'
+```
+
+If you installed from source using Poetry, do:
+```sh
+poetry install -E local
+```
+
 ### Quick overview
 
 1. Put your own LLM behind a web server API (e.g. [oobabooga web UI](https://github.com/oobabooga/text-generation-webui#starting-the-web-ui))
@@ -100,8 +104,10 @@ If you would like us to support a new backend, feel free to open an issue or pul
 
 ### Which model should I use?
 
+!!! warning "Recommended LLMs / models"
+
+    To see a list of recommended LLMs to use with MemGPT, visit our [Discord server](https://discord.gg/9GEQrxmVyE) and check the #model-chat channel.
+
 If you are experimenting with MemGPT and local LLMs for the first time, we recommend you try the Dolphin Mistral finetune (e.g. [ehartford/dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b) or a quantized variant such as [dolphin-2.2.1-mistral-7b.Q6_K.gguf](https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-GGUF)), and use the default `airoboros` wrapper.
 
 Generating MemGPT-compatible outputs is a harder task for an LLM than regular text output. For this reason **we strongly advise users to NOT use models below Q5 quantization** - as the model gets worse, the number of errors you will encounter while using MemGPT will dramatically increase (MemGPT will not send messages properly, edit memory properly, etc.).
-
-Check out [our local LLM GitHub discussion](https://github.com/cpacker/MemGPT/discussions/67) and [the MemGPT Discord server](https://discord.gg/9GEQrxmVyE) for more advice on model selection and help with local LLM troubleshooting.