Releases: letta-ai/letta

v0.5.5

22 Nov 17:08

🐛 Bugfix release to fix issues with the pip install 'letta[external-tools]' option, which was failing due to conflicting dependencies.
⚠️ Note: This release deprecates crewAI tools. We recommend using LangChain and Composio tools instead.

What's Changed

Full Changelog: 0.5.4...0.5.5

v0.5.4

21 Nov 02:21
742cdaa

🦟 Bugfix release

What's Changed

Full Changelog: 0.5.3...0.5.4

v0.5.3

20 Nov 01:08
746efc4

🐛 This release includes many bugfixes and also migrates the Letta Docker image to the letta/letta Docker Hub repository.

🔥 New features 🔥

📊 Add token counter to CLI /tokens #2047
🤲 Support for Together AI endpoints #2045 (documentation)
🔐 Password-protect letta server endpoints with the letta server --secure flag #2030

What's Changed

New Contributors

Full Changelog: 0.5.2...0.5.3

v0.5.2

07 Nov 07:00
d599d1f

🤖 Tags for agents (for associating agents with end users) #1984

You can now specify and query agents via the AgentState.tags field. If you want to associate agents with end users in your application, we recommend using tags to link each agent to a specific end user ID:

# create agent for a specific user 
client.create_agent(tags=["my_user_id"])

# get agents for a user 
agents = client.get_agents(tags=["my_user_id"])

🛠️ Constrain agent behavior with tool rules #1954

We are introducing initial support for "tool rules", which let developers define constraints on their tools, such as requiring that a tool terminate agent execution. We added the following tool rules:

  • TerminalToolRule(tool_name=...) - If the tool is called, the agent ends execution
  • InitToolRule(tool_name=...) - The tool must be called first when an agent is run
  • ToolRule(tool_name=..., children=[...]) - If the tool is called, it must be followed by one of the tools specified in children

Tool rules are defined per-agent, and passed when creating agents:

# agent which must always call `first_tool_to_call`, `second_tool_to_call`, then `final_tool` when invoked
agent_state = client.create_agent(
  tool_rules = [
      InitToolRule(tool_name="first_tool_to_call"),
      ToolRule(tool_name="first_tool_to_call", children=["second_tool_to_call"]),
      ToolRule(tool_name="second_tool_to_call", children=["final_tool"]),
      TerminalToolRule(tool_name="send_message"),
  ]
)

By default, the send_message tool is marked with TerminalToolRule.

NOTE: All ToolRule types except TerminalToolRule are only supported by models and providers with structured outputs, which is currently limited to OpenAI's gpt-4o and gpt-4o-mini.

🐛 Bugfixes + Misc

  • Fix error in tool creation on ADE
  • Fixes to tool updating
  • Deprecation of Block.name in favor of Block.template_name (only required for templated blocks) #1937
  • The docker run letta/letta command now uses port 8283 (previously 8083)
  • Properly return LettaUsageStatistics #1955
  • Added example notebooks on multi-agent, RAG, custom memory, and tools in https://github.com/letta-ai/letta/tree/main/examples/notebooks

What's Changed

  • chore: Consolidate CI style checks by @mattzh72 in #1936
  • test: Add archival insert test to GPT-4 and make tests failure sensitive by @mattzh72 in #1930
  • fix: Fix letta delete-agent by @mattzh72 in #1940
  • feat: Add orm for Tools and clean up Tool logic by @mattzh72 in #1935
  • fix: update ollama model for testing by @sarahwooders in #1941
  • chore: Remove legacy code and instances of anon_clientid by @mattzh72 in #1942
  • fix: fix inconsistent name and label usage for blocks to resolve recursive validation issue by @sarahwooders in #1937
  • feat: Enable base constructs to automatically populate "created_by" and "last_updated_by" fields for relevant objects by @mattzh72 in #1944
  • feat: move docker run command to use port 8283 by @sarahwooders in #1949
  • fix: Clean up some legacy code and fix Groq provider by @mattzh72 in #1950
  • feat: add workflow to also publish to memgpt repository by @sarahwooders in #1953
  • feat: added returning usage data by @cpacker in #1955
  • fix: Fix create organization bug by @mattzh72 in #1956
  • chore: fix markdown error by @4shub in #1957
  • feat: Auto-refresh json_schema after tool update by @mattzh72 in #1958
  • feat: Implement tool calling rules for agents by @mattzh72 in #1954
  • fix: Make imports more explicit for BaseModel v1 or v2 by @mattzh72 in #1959
  • fix: math renderer error by @4shub in #1965
  • chore: add migration script by @4shub in #1960
  • fix: fix bug with POST /v1/agents/messages route returning empty LettaMessage base objects by @cpacker in #1966
  • fix: stop running the PR title validation on main, only on PRs by @cpacker in #1969
  • chore: fix lettaresponse by @4shub in #1968
  • fix: removed dead workflow file by @cpacker in #1970
  • feat: Add endpoint to add base tools to an org by @mattzh72 in #1971
  • chore: Migrate database by @4shub in #1974
  • chore: Tweak composio log levels by @mattzh72 in #1976
  • chore: Remove extra print statements by @mattzh72 in #1975
  • feat: rename block.name to block.template_name for clarity and add shared block tests by @sarahwooders in #1951
  • feat: added ability to disable the initial message sequence during agent creation by @cpacker in #1978
  • chore: Move ID generation logic out of the ORM layer and into the Pydantic model layer by @mattzh72 in #1981
  • feat: add ability to list agents by name for REST API and python SDK by @sarahwooders in #1982
  • feat: add convenience link to open ADE from server launch by @cpacker in #1986
  • docs: update badges in readme by @cpacker in #1985
  • fix: add name alias to block.template_name to fix ADE by @sarahwooders in #1987
  • chore: install all extras for prod by @4shub in #1989
  • fix: fix issue with linking tools and adding new tools by @sarahwooders in #1988
  • chore: add letta web safety test by @4shub in #1991
  • feat: Add ability to add tags to agents by @mattzh72 in #1984
  • fix: Resync agents when tools are missing by @mattzh72 in #1994
  • Revert "fix: Resync agents when tools are missing" by @sarahwooders in #1996
  • fix: misc fixes (bad link to old docs, composio print statement, context window selection) by @cpacker in #1992
  • fix: no error when the tool name is invalid in agent state by @mattzh72 in #1997
  • feat: add e2e example scripts for documentation by @sarahwooders in #1995
  • chore: Continue relaxing tool constraints by @mattzh72 in #1999
  • chore: add endpoint to update users by @4shub in #1993
  • chore: Add tool rules example by @mattzh72 in #1998
  • feat: move HTML rendering of messages into LettaResponse and update notebook by @sarahwooders in #1983
  • feat: add example notebooks by @sarahwooders in #2001
  • chore: bump to version 0.5.2 by @sarahwooders in #2002

Full Changelog: 0.5.1...0.5.2

v0.5.1

23 Oct 19:30
d8f0c58

🛠️ Option to pre-load Composio, CrewAI, & LangChain tools

You can now auto-load tools from external libraries by setting the environment variable LETTA_LOAD_DEFAULT_EXTERNAL_TOOLS=true. Then, when you run letta server, tools which do not require authorization will be loaded automatically.

export LETTA_LOAD_DEFAULT_EXTERNAL_TOOLS=true

# recommended: use with composio 
export COMPOSIO_API_KEY=...

pip install 'letta[external-tools,server]'
letta server 

💭 Addition of put_inner_thoughts_in_kwargs field in LLMConfig

Some models (e.g. gpt-4o-mini) need inner thoughts to be passed as keyword arguments in the tool call (as opposed to having inner thoughts in the content field). If your model is not generating inner thoughts, you should set put_inner_thoughts_in_kwargs=True in the LLMConfig.
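
For reference, here is a minimal sketch of setting this flag when creating an agent. It assumes LLMConfig is importable from letta.schemas.llm_config and that create_agent accepts an llm_config argument; the model, endpoint, and context window values are illustrative, so adjust them for your provider.

from letta import create_client
from letta.schemas.llm_config import LLMConfig

client = create_client()

# Illustrative OpenAI-style config; swap in your own model/endpoint/context window.
agent_state = client.create_agent(
    llm_config=LLMConfig(
        model="gpt-4o-mini",
        model_endpoint_type="openai",
        model_endpoint="https://api.openai.com/v1",
        context_window=128000,
        put_inner_thoughts_in_kwargs=True,  # emit inner thoughts as a tool-call kwarg
    ),
)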

🔐 Deprecation of Admin

Letta no longer requires authentication to use the Letta server and ADE. Although we still have a notion of user_id, which can be passed as the bearer token, we expect a separate service to manage users and authentication. You will no longer need to create a user before creating an agent (agents will be assigned a default user_id).
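
As a quick sketch of the new default (assuming the Python client in this release; the agent name is arbitrary), you can create an agent directly without creating a user first:

from letta import create_client

client = create_client()

# No user setup required: the agent is associated with the default user_id.
agent_state = client.create_agent(name="my_agent")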

🐞 Various Bugfixes

  • Fixes to Azure embeddings endpoint

What's Changed

Full Changelog: 0.5.0...0.5.1

v0.5.0

15 Oct 01:47
94d2a18

This release introduces major changes to how model providers are configured with Letta, as well as many bugfixes.

🧰 Dynamic model listing and multiple providers (#1814)

Model providers (e.g. OpenAI, Ollama, vLLM, etc.) are now enabled using environment variables, where multiple providers can be enabled at a time. When a provider is enabled, all supported LLM and embedding models will be listed as options to be selected in the CLI and ADE in a dropdown.

For example, for OpenAI you can get started with:

> export OPENAI_API_KEY=...
> letta run 
   ? Select LLM model: (Use arrow keys)
   » letta-free [type=openai] [ip=https://inference.memgpt.ai]
      gpt-4o-mini-2024-07-18 [type=openai] [ip=https://api.openai.com/v1]
      gpt-4o-mini [type=openai] [ip=https://api.openai.com/v1]
      gpt-4o-2024-08-06 [type=openai] [ip=https://api.openai.com/v1]
      gpt-4o-2024-05-13 [type=openai] [ip=https://api.openai.com/v1]
      gpt-4o [type=openai] [ip=https://api.openai.com/v1]
      gpt-4-turbo-preview [type=openai] [ip=https://api.openai.com/v1]
      gpt-4-turbo-2024-04-09 [type=openai] [ip=https://api.openai.com/v1]
      gpt-4-turbo [type=openai] [ip=https://api.openai.com/v1]
      gpt-4-1106-preview [type=openai] [ip=https://api.openai.com/v1]
      gpt-4-0613 [type=openai] [ip=https://api.openai.com/v1]
     ... 

Similarly, if you are using the ADE with letta server, you can select the model to use from the model dropdown.

# include models from OpenAI 
> export OPENAI_API_KEY=...

# include models from Anthropic 
> export ANTHROPIC_API_KEY=... 

# include models served by Ollama 
> export OLLAMA_BASE_URL=...

> letta server

We are deprecating the letta configure and letta quickstart commands, as well as the use of ~/.letta/config for specifying the default LLMConfig and EmbeddingConfig, since that approach prevents a single letta server from running agents with different model configurations concurrently, or from changing an agent's model configuration without restarting the server. This workflow also required users to specify the model name, provider, and context window size manually via letta configure.

🧠 Integration testing for model providers

We added integration tests (including tests of MemGPT memory-management tool use) for the supported model providers, and fixed many bugs in the process.

📊 Database migrations

We now support automated database migrations via alembic, implemented in #1867. You can expect future releases to support automated migrations even when there are schema changes.

What's Changed

New Contributors

Full Changelog: 0.4.1...0.5.0

v0.4.1

04 Oct 01:23
9acb4f8

This release includes many bugfixes, as well as support for detaching data sources from agents and for additional tool providers.

⚒️ Support for Composio, LangChain, and CrewAI tools

We've improved support for external tool providers. You can use external tools (Composio, LangChain, and CrewAI) with:

pip install 'letta[external-tools]'

What's Changed

New Contributors

Full Changelog: 0.4.0...0.4.1

0.3.25

25 Aug 18:54

🐜 Bugfix release

  • fix: exit CLI on empty read for human or persona
  • fix: fix overflow error for existing memory fields with clipping

What's Changed

v0.3.24

17 Aug 01:27
d6b124b

Add new alpha revision of dev portal

What's Changed

Full Changelog: 0.3.23...0.3.24

0.3.23

17 Aug 01:08
88eb58a

🦗 Bugfix release

What's Changed

New Contributors

Full Changelog: 0.3.22...0.3.23