Documentation update #541

Merged 2 commits on Dec 1, 2023
17 changes: 7 additions & 10 deletions docs/autogen.md
@@ -1,24 +1,18 @@
## MemGPT + Autogen

!!! warning "Need help?"

    If you need help, visit our [Discord server](https://discord.gg/9GEQrxmVyE) and post in the #support channel.

    You can also check the [GitHub discussion page](https://github.com/cpacker/MemGPT/discussions/65), but the Discord server is the official support channel and is monitored more actively.

[examples/agent_groupchat.py](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/agent_groupchat.py) contains an example of a groupchat where one of the agents is powered by MemGPT.

If you are using OpenAI, you can also run it using the [example notebook](https://github.com/cpacker/MemGPT/blob/main/memgpt/autogen/examples/memgpt_coder_autogen.ipynb).
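To give a rough sense of what the groupchat example wires together, the sketch below shows the kind of OpenAI-style `config_list` that AutoGen agents consume. This is a hypothetical illustration, not code from `agent_groupchat.py`: the model name, endpoint URL, and API key are placeholders you would replace with your own values.

```python
# Hypothetical sketch of an AutoGen-style config_list pointing at a local
# OpenAI-compatible endpoint. All values below are placeholders, not the
# exact settings used in agent_groupchat.py.
config_list = [
    {
        "model": "local-model",                  # placeholder; local servers often ignore this
        "api_base": "http://localhost:5000/v1",  # your local server's OpenAI-compatible route
        "api_key": "NULL",                       # local servers typically don't check the key
    }
]

# The llm_config dict is what gets passed to an AutoGen agent constructor.
llm_config = {"config_list": config_list, "seed": 42}
```

If you are running against OpenAI instead of a local server, you would swap in `https://api.openai.com/v1` and a real API key.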

In the next section, we detail how to set up MemGPT and Autogen to run with local LLMs.

## Example: connecting Autogen + MemGPT to non-OpenAI LLMs (using oobabooga web UI)

!!! warning "Enable the OpenAI extension"

    In web UI, make sure to enable the [openai extension](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai)!

    This is enabled by default in newer versions of web UI, but must be enabled manually in older versions.

To get MemGPT to work with a local LLM, you need to have an LLM running on a server that takes API requests.
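Concretely, "a server that takes API requests" here means an OpenAI-compatible HTTP endpoint. The snippet below is a minimal stdlib-only sketch of what such a request looks like; it builds (but does not send) a completion request. The `/v1/completions` route is an assumption about the server's OpenAI-compatible API, and the endpoint URL is a placeholder.

```python
import json
import urllib.request

def build_completion_request(endpoint: str, prompt: str, max_tokens: int = 256):
    """Build (but do not send) an OpenAI-style completion request.

    `endpoint` is assumed to expose an OpenAI-compatible API, e.g. web UI
    running locally with the openai extension enabled; this helper is an
    illustration, not part of MemGPT.
    """
    url = endpoint.rstrip("/") + "/v1/completions"
    payload = {"prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires a running server):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```

MemGPT issues requests of this general shape on your behalf once you point it at the endpoint during `memgpt configure`.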

@@ -48,7 +42,10 @@ Once you've confirmed that you're able to chat with a MemGPT agent using `memgpt

If you're using RunPod to run web UI, make sure that you set your endpoint to the RunPod IP address, **not the default localhost address**.

For example, during `memgpt configure`:
```text
? Enter default endpoint: https://yourpodaddresshere-5000.proxy.runpod.net
```
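A quick way to catch the localhost-vs-RunPod mistake is to check the hostname of the endpoint you are about to enter. The helper below is a hypothetical illustration (not part of MemGPT or web UI) of that sanity check.

```python
def looks_like_local(endpoint: str) -> bool:
    """Heuristic check (hypothetical helper, not part of MemGPT) that flags
    localhost-style endpoints, which will not work when web UI runs on RunPod."""
    # Strip the scheme, any path, and any port, leaving just the hostname.
    host = endpoint.split("//")[-1].split("/")[0].split(":")[0]
    return host in {"localhost", "127.0.0.1", "0.0.0.0"}
```

For example, `looks_like_local("http://localhost:5000")` is true, while a `*.proxy.runpod.net` endpoint is not flagged.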

### Part 3: Creating a MemGPT AutoGen agent (groupchat example)
