Releases: letta-ai/letta
0.3.2
🐞 Bugfix release
What's Changed
- fix: Automatically create `User` from `MemGPTConfig.anon_clientid` from client if does not exist by @sarahwooders in #981
- fix: various patches for Azure support + strip `Box` by @cpacker in #982
- fix: patch bug in verify first message + add `ChatCompletionRequest` models to the models dir by @cpacker in #985
- docs: Fix client README example by @sarahwooders in #984
- fix: only attempt to mount static files dir (for chatUI) if already generated by @cpacker in #991
- fix: allow multiple tools to be called by LLM and rewrite request by @cpacker in #992
- fix: patch bug in airo wrapper by @cpacker in #993
- fix: Chunk inserts into DB on CLI load by @sarahwooders in #994
- chore: bump to version 0.3.2 by @sarahwooders in #995
- fix: Modify chroma to use `collection.upsert` instead of `collection.add` for inserts by @sarahwooders in #996
Full Changelog: 0.3.1...0.3.2
0.3.1
🐛 Bugfix release
🚧 What's Changed
- docs: small fix to docs by @haikyuu in #942
- fix: use `utf-8` encodings for all text files by @cpacker in #918
- fix: more instructive error for function loading by @tombedor in #945
- fix: set json loads strict to false by @tombedor in #946
- docs: Add roadmap link to readme by @cpacker in #962
- chore: Update bug_report.md to request `~/.memgpt/config` by @sarahwooders in #970
- fix: Fix Misplaced Else Statement and Correct If Condition Handling in `set_config_with_dict()` by @Luther-Sparks in #965
- fix: Allow `content` to be `None` for `role==tool` by @sarahwooders in #971
- fix: Remove document truncation for `memgpt load` by @sarahwooders in #978
- fix: Require `tool_calls` or `content` to be set for assistant role by @sarahwooders in #976
- fix: patch mem lim exceeded by @cpacker in #977
- fix: Enhance CreateAgentDialog with Improved Error Handling and User Feedback by @arduenify in #932
- fix: Remove document truncation and replace DB inserts with upserts by @sarahwooders in #973
- chore: bump version to 0.3.1 by @sarahwooders in #979
👋 New Contributors
- @haikyuu made their first contribution in #942
- @Luther-Sparks made their first contribution in #965
✍️ Full Changelog: 0.3...0.3.1
0.3
This release is a major refactor of MemGPT which moves all agent, user, and system information into database storage. We implemented this refactor to enable people to run MemGPT as a hosted service that can support multiple users. You can still keep using MemGPT's CLI, but your data will be stored in local SQLite and Chroma files (unless configured otherwise).
🚌 Migrating to 0.3
MemGPT will no longer be able to access existing agents and data sources unless they are migrated. You can migrate old agent state and data sources contained in the `~/.memgpt/config` folder using the `memgpt migrate` command.
> memgpt migrate
🌐 MemGPT Server
You can now run MemGPT as a service that can support multiple users. User authentication is coming soon which will make the server usable for production applications.
You can run the server with:
> memgpt server
INFO: Started server process [53568]
INFO: Waiting for application startup.
Writing out openapi.json file
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8283 (Press CTRL+C to quit)
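The server exposes a REST API and writes out an openapi.json spec on startup (see the log above). As a quick check that the server is reachable, you can fetch that spec over HTTP. The snippet below is a minimal sketch only; it assumes the standard FastAPI /openapi.json route is enabled at the default address http://localhost:8283, which the release notes do not state explicitly.
# Sketch: fetch the generated OpenAPI spec and list the available routes.
# Assumes the default FastAPI schema route is enabled (an assumption, not
# something confirmed by these release notes).
import requests

spec = requests.get("http://localhost:8283/openapi.json").json()
for path, methods in spec["paths"].items():
    print(path, sorted(methods.keys()))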
👋 New Contributors
- @ifsheldon made their first contribution in #780
- @tezer made their first contribution in #789
- @k0hacuu made their first contribution in #808
- @Maximilian-Winter made their first contribution in #796
- @robbyt made their first contribution in #843
Full Changelog: 0.2.11...0.3
0.2.11
MemGPT Python Client
MemGPT version `0.2.11` includes a new Python client for developers to easily build on MemGPT (special thanks to @BabellDev!)
To use the MemGPT Python client, simply do:
from memgpt import MemGPT
# creates a client object, which you can then use to create new MemGPT agents, message agents, etc
client = MemGPT()
For more information, check our documentation page.
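From there, the usual flow is to create an agent and exchange messages with it. The sketch below is illustrative only: the method names (create_agent, user_message) and their signatures are assumptions, so check the documentation page for the exact 0.2.11 client API.
# Illustrative sketch — the method names and signatures below are assumptions,
# not a verbatim copy of the 0.2.11 client API.
from memgpt import MemGPT

client = MemGPT()

# Hypothetical call: create a new agent (the real method may take a config
# with persona/human/model settings).
agent = client.create_agent()

# Hypothetical call: send a user message to the agent and print its replies.
responses = client.user_message(agent_id=agent, message="Hello, MemGPT!")
for response in responses:
    print(response)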
✍️ What's Changed
- ci: Run tests using postgres docker container by @sarahwooders in #715
- fix: increase the func return char limit by @cpacker in #714
- fix: patch TEI error in load by @cpacker in #725
- fix: patch bug on TEI embedding lookup by @cpacker in #724
- fix: updated CLI interface to properly print searches on archival memory by @cpacker in #731
- fix: Typo in info log message and docs by @VladCuciureanu in #730
- fix: don't insert request heartbeat into pause heartbeat by @cpacker in #727
- docs: synced api reference by @cpacker in #737
- fix: cleanup failed agent creation by @cpacker in #726
- feat: chatml-noforce-roles wrapper + cli fix by @cpacker in #738
- fix: refactor + improve json parser by @cpacker in #739
- feat: Add MemGPT "Python Client" by @BabellDev in #713
- feat: enum choices for list command argument (issue #732) by @jimlloyd in #746
- docs: Include steps for Local LLMs by @sanegaming in #749
- docs: word choice in documentation by @oceaster in #760
- docs: Improve Local LLM information and add WSL Troubleshooting by @sanegaming in #752
- docs: linting, syntax, formatting & spelling fixes for all files by @oceaster in #761
- fix: fix string & ws rules in json_func_calls...gbnf by @jimlloyd in #754
- docs: update local_llm_settings.md by @cpacker in #765
- docs: Update python_client.md by @vinayak-revelation in #772
- fix: Update memgpt_coder_autogen.ipynb by @cpacker in #775
Full Changelog: 0.2.10...0.2.11
👋 New Contributors
- @VladCuciureanu made their first contribution in #730
- @BabellDev made their first contribution in #713
- @jimlloyd made their first contribution in #746
- @sanegaming made their first contribution in #749
- @oceaster made their first contribution in #760
- @vinayak-revelation made their first contribution in #772
0.2.10
Merry Christmas! 🎄🎁🎅
MemGPT version `0.2.10` includes:
- Improvements to local/open LLM performance
  - Includes two new model wrappers that increase MemGPT "proactiveness" (when using local/open LLMs): `chatml-hints` and `chatml-noforce-hints`
  - Use them by specifying them in `memgpt configure` or adding them to `memgpt run`, e.g. `memgpt run --model-wrapper chatml-noforce-hints`
- Better visuals in the MemGPT CLI UI
- Various patches (`quickstart` command, AutoGen, ...)
✍️ What's Changed
- docs: Added a new docs page describing how to run custom LLM parameters by @cpacker in #688
- feat: improve CLI appearance by @cpacker in #687
- fix: moved configs for hosted to https, patched bug in embedding creation by @cpacker in #685
- fix: allow edge case of quickstart before run on first install by @cpacker in #684
- fix: Remove match/case to support python <3.10 by @cpacker in #691
- fix: typo in Dockerfile comment by @tombedor in #690
- fix: memgpt agent ignores user messages by @javiersastre in #679
- fix: Better errors on over length persona/human files by @cpacker in #695
- feat: set a default temperature in the common local llm settings by @cpacker in #696
- feat: added basic heartbeat override heuristics by @cpacker in #621
- docs: updated readme for quickstart by @cpacker in #698
- fix: misc fixes by @cpacker in #700
- feat: added new 'hint' wrappers that inject hints into the pre-prefix by @cpacker in #707
Full Changelog: 0.2.9...0.2.10
👋 New Contributors
- @tombedor made their first contribution in #690
- @javiersastre made their first contribution in #679
0.2.9
🐛 Bugfix release to patch issues with the `memgpt quickstart` command
See https://github.com/cpacker/MemGPT/releases/tag/0.2.8 for release details.
Full Changelog: 0.2.8...0.2.9
0.2.8
This release includes major updates that make it easier to get started with MemGPT!
Note: release 0.2.8 superseded by bugfix release 0.2.9
🎄 Free MemGPT Hosted Endpoints
MemGPT now can be used with hosted LLM and embedding endpoints, which are free and do not require an access key! The LLM endpoint is running a variant of the newly released Mixtral model - specifically Dolphin 2.5 Mixtral 8x7b 🐬!
Since the endpoint is still in beta, please expect occasional downtime. You can check for uptime at https://status.memgpt.ai.
⚡ Quickstart Configuration
You can automatically configure MemGPT (for the MemGPT endpoints and OpenAI) with quickstart commands:
# using MemGPT free endpoint
> memgpt quickstart --latest
# using OpenAI endpoint
> memgpt quickstart --latest --backend openai
This will set default options in the file `~/.memgpt/config`, which you can also modify with advanced options in `memgpt configure`.
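If you want to see exactly what quickstart wrote, you can inspect the config directly. The snippet below is a small sketch that assumes `~/.memgpt/config` is a standard INI file readable with Python's configparser; the section and key names are whatever your quickstart run produced.
# Sketch: print the options that `memgpt quickstart` wrote to ~/.memgpt/config.
# Assumes the file is INI-formatted (an assumption; adjust if your version differs).
import configparser
import os

config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.memgpt/config"))
for section in config.sections():
    print(f"[{section}]")
    for key, value in config[section].items():
        print(f"  {key} = {value}")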
📖 Documentation Updates
MemGPT's documentation has migrated to https://memgpt.readme.io.
✍️ Full Change Log
- API server refactor + REST API by @cpacker in #593
- added `memgpt server` command by @cpacker in #611
- updated local APIs to return usage info by @cpacker in #585
- added autogen as an extra by @cpacker in #616
- Add safeguard on tokens returned by functions by @cpacker in #576
- patch bug where `function_args.copy()` throws runtime error by @cpacker in #617
- allow passing custom host to rest server by @cpacker in #618
- migrate to using completions endpoint by default by @cpacker in #628
- Patch bug with loading of old agents by @cpacker in #629
- fix: poetry add [html2text/docx2txt] by @cpacker in #633
- feat: Add semantic PR checking to enforce prefixes on PRs by @cpacker in #634
- feat: added memgpt folder command by @cpacker in #632
- feat: Add common + custom settings files for completion endpoints by @cpacker in #631
- feat: Migrate docs by @cpacker in #646
- feat: Updated contributing docs by @cpacker in #653
- fix: [446] better gitignore for IDEs and OS. by @agiletechnologist in #651
- feat: updated/added docs assets by @cpacker in #654
- feat: Add `memgpt quickstart` command by @cpacker in #641
- fix: patch ollama bug w/ raw mode by @cpacker in #663
- fix: Patch openai error message + openai quickstart by @cpacker in #665
- fix: added logging of raw response on debug by @cpacker in #666
- feat: added /summarize command by @cpacker in #667
- feat: Add new wrapper defaults by @cpacker in #656
- fix: Throw "env vars not set" early and enhance /attach for KeyboardInterrupt (#669) by @dejardim in #674
- fix: CLI conveniences (add-on to #674) by @cpacker in #675
- feat: pull model list for openai-compatible endpoints by @cpacker in #630
- fix: Update README.md by @cpacker in #676
- docs: patched asset links by @cpacker in #677
- feat: further simplify setup flow by @cpacker in #673
👋 New Contributors
Full Changelog: 0.2.7...0.2.8
0.2.7
Minor bugfix release
What's Changed
- allow passing `skip_verify` to autogen constructors by @cpacker in #581
- Chroma storage integration by @sarahwooders in #285
- Fix `pyproject.toml` chroma version by @sarahwooders in #582
- Remove broken tests from chroma merge by @sarahwooders in #584
- patch load_save test by @cpacker in #586
- Patch azure embeddings + handle azure deployments properly by @cpacker in #594
- AutoGen misc fixes by @cpacker in #603
- Add `lancedb` and `chroma` into default package dependencies by @sarahwooders in #605
- Bump version 0.2.7 by @sarahwooders in #607
Full Changelog: 0.2.6...0.2.7
0.2.6
Bugfix release
What's Changed
- Add docs file for customizing embedding mode by @sarahwooders in #554
- Upgrade to `llama_index=0.9.10` by @sarahwooders in #556
- fix cannot import name 'EmptyIndex' from 'llama_index' by @cpacker in #558
- Fix typo in storage.md by @alxpez in #564
- use a consistent warning prefix across codebase by @cpacker in #569
- Update autogen.md to include Azure config example + patch for `pyautogen>=0.2.0` by @cpacker in #555
- Update autogen.md by @cpacker in #571
- Fix crash from bad key access into response_message by @claucambra in #437
- sort agents by directory-last-modified time by @cpacker in #574
- Add safety check to pop by @cpacker in #575
- Add `pyyaml` package to `pyproject.toml` by @cpacker in #557
- add back dotdict for backcompat by @cpacker in #572
- Bump version to 0.2.6 by @sarahwooders in #573
New Contributors
Full Changelog: 0.2.5...0.2.6
0.2.5
This release includes a number of bugfixes and new integrations:
- Bugfixes for AutoGen integration (including a common OpenAI dependency conflict issue)
- Documentation for how to use MemGPT with vLLM OpenAI-compatible endpoints
- Integration with HuggingFace TEI for custom embedding models
This release also fully deprecates and removes legacy commands and configuration options which were no longer being maintained:
- `python main.py` command (replaced by `memgpt run`)
- Usage of `BACKEND_TYPE` and `OPENAI_BASE_URL` to configure local/custom LLMs (replaced by `memgpt configure` and `memgpt run` flags)
What's Changed
- add new manual json parser meant to catch send_message calls with trailing bad extra chars by @cpacker in #509
- add a longer prefix to the default wrapper by @cpacker in #510
- add core memory char limits to text shown in core memory by @cpacker in #508
- [hotfix] extra arg being passed causing a runtime error by @cpacker in #517
- Add warning if no data sources loaded on `/attach` command by @sarahwooders in #513
- fix doc typo autogem to autogen by @paulasquin in #512
- Update contributing guidelines by @sarahwooders in #516
- Update contributing.md by @cpacker in #518
- Update contributing.md by @cpacker in #520
- Add support for HuggingFace Text Embedding Inference endpoint for embeddings by @sarahwooders in #524
- Update mkdocs theme, small fixes for `mkdocs.yml` by @cpacker in #522
- Update mkdocs.yml by @cpacker in #525
- Clean memory error messages by @cpacker in #523
- Fix class names used in persistence manager logging by @claucambra in #503
- Specify pyautogen dependency by adding install extra for autogen by @sarahwooders in #530
- Add `user` field for vLLM endpoint by @sarahwooders in #531
- Patch JSON parsing code (regex fallback) by @cpacker in #533
- Update bug_report.md by @cpacker in #532
- LanceDB integration bug fixes and improvements by @AyushExel in #528
- Remove `openai` package by @cpacker in #534
- Update contributing.md (typo) by @cpacker in #538
- Run formatting checks with poetry by @sarahwooders in #537
- Removing dead code + legacy commands by @sarahwooders in #536
- Remove usage of `BACKEND_TYPE` by @sarahwooders in #539
- Update AutoGen documentation and notebook example by @cpacker in #540
- Update local_llm.md by @cpacker in #542
- Documentation update by @cpacker in #541
- clean docs by @cpacker in #543
- Update autogen.md by @cpacker in #544
- update docs by @cpacker in #547
- added vLLM doc page since we support it by @cpacker in #545
New Contributors
- @paulasquin made their first contribution in #512
- @claucambra made their first contribution in #503
- @AyushExel made their first contribution in #528
Full Changelog: 0.2.4...0.2.5