This folder contains the various implementations of `Agent` used by the framework, for example `agenthub/codeact_agent`.
Contributors with different backgrounds and interests can choose to contribute to any (or all!) of these directions.
The abstraction for an agent can be found here.
Agents are run inside a loop. At each iteration, `agent.step()` is called with a `State` input, and the agent must output an `Action`.

Every agent also has a `self.llm` which it can use to interact with the LLM configured by the user. See the LiteLLM docs for `self.llm.completion`.
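For example, a step implementation might query the configured LLM like this. This is a minimal sketch meant to sit inside an agent method; the prompt content is illustrative, and `self.llm.completion` is assumed to mirror the LiteLLM `completion` interface:

```python
# Minimal sketch: asking the configured LLM what to do next.
# The message content here is illustrative, not part of the framework.
response = self.llm.completion(
    messages=[{"role": "user", "content": "What should I do next?"}],
)
# LiteLLM-style responses expose the generated text here:
raw_text = response.choices[0].message.content
```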
The `state` contains:

- A history of actions taken by the agent, as well as any observations (e.g. file content, command output) from those actions
- A list of actions/observations that have happened since the most recent step
- A `root_task`, which contains a plan of action
  - The agent can add and modify subtasks through the `AddTaskAction` and `ModifyTaskAction`
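As a hedged illustration of how an agent might consume this state, the helper below flattens the history into a prompt. It assumes `state.history` can be iterated as (action, observation) pairs; the exact attribute layout is an assumption, not a confirmed API:

```python
# Sketch: turning the agent's history into a text prompt.
# Assumes state.history yields (action, observation) pairs.
def history_to_prompt(state: "State") -> str:
    lines = []
    for action, observation in state.history:
        lines.append(f"ACTION: {action}")
        lines.append(f"OBSERVATION: {observation}")
    return "\n".join(lines)
```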
Here is a list of available Actions, which can be returned by `agent.step()`:

- `CmdRunAction` - Runs a command inside a sandboxed terminal
- `IPythonRunCellAction` - Executes a block of Python code interactively (in a Jupyter notebook) and receives a `CmdOutputObservation`. Requires the `jupyter` plugin to be set up as a requirement.
- `FileReadAction` - Reads the content of a file
- `FileWriteAction` - Writes new content to a file
- `BrowseURLAction` - Gets the content of a URL
- `AddTaskAction` - Adds a subtask to the plan
- `ModifyTaskAction` - Changes the state of a subtask
- `AgentFinishAction` - Stops the control loop, allowing the user/delegator agent to enter a new task
- `AgentRejectAction` - Stops the control loop, allowing the user/delegator agent to enter a new task
- `MessageAction` - Represents a message from an agent or the user
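For example, a `step()` that wants to run a shell command could return a `CmdRunAction`. This is a sketch; the import path and the `command=` keyword argument are assumptions about the current API:

```python
# Sketch: returning an action from step(). Import path and constructor
# keyword are assumptions, not a confirmed API.
from opendevin.action import CmdRunAction


def step(self, state: "State") -> "Action":
    # Ask the sandboxed terminal to list the workspace contents.
    return CmdRunAction(command="ls -la")
```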
You can use `action.to_dict()` and `action_from_dict` to serialize and deserialize actions.
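For instance, an action can be round-tripped through its dict form as below; the module that exposes `action_from_dict` is not shown here and is an assumption:

```python
# Sketch: serializing an action and rebuilding it from the dict.
action = CmdRunAction(command="echo hello")
data = action.to_dict()            # plain dict, e.g. for JSON or logging
restored = action_from_dict(data)  # rebuilds an equivalent CmdRunAction
```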
There are also several types of Observations. These are typically available in the step following the corresponding Action, but they may also appear as a result of asynchronous events (e.g. a message from the user).

Here is a list of available Observations:

- `CmdOutputObservation`
- `BrowserOutputObservation`
- `FileReadObservation`
- `FileWriteObservation`
- `ErrorObservation`
- `SuccessObservation`
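A `step()` implementation will typically branch on the type of the most recent observation. The sketch below assumes `state.history` yields (action, observation) pairs and that the observation classes are importable from a single module; both are assumptions:

```python
# Sketch: reacting to the most recent observation before choosing the
# next action. Import path is an assumption.
from opendevin.observation import CmdOutputObservation, ErrorObservation


def react_to_last_observation(state: "State") -> None:
    if not state.history:
        return
    _, latest = state.history[-1]
    if isinstance(latest, CmdOutputObservation):
        ...  # e.g. feed the command output into the next prompt
    elif isinstance(latest, ErrorObservation):
        ...  # e.g. decide whether to retry or report failure
```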
You can use `observation.to_dict()` and `observation_from_dict` to serialize and deserialize observations.
Every agent must implement the following method:

```python
def step(self, state: "State") -> "Action"
```

`step` moves the agent forward one step towards its goal. This probably means sending a prompt to the LLM, then parsing the response into an `Action`.
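Putting this together, a minimal (and intentionally naive) agent might look like the sketch below. The base-class and action import paths, the `MessageAction(content=...)` constructor, and the history handling are all assumptions; only `step()` and `self.llm` come from the contract above:

```python
# Hedged sketch of a complete agent. Import paths and constructor keywords
# are assumptions about the codebase, not a confirmed API.
from opendevin.agent import Agent
from opendevin.action import Action, MessageAction
from opendevin.state import State


class EchoAgent(Agent):
    """Toy agent: forwards the serialized history to the LLM and returns
    the model's reply as a MessageAction."""

    def step(self, state: State) -> Action:
        prompt = "\n".join(str(event) for event in state.history)
        response = self.llm.completion(
            messages=[{"role": "user", "content": prompt}],
        )
        content = response.choices[0].message.content
        return MessageAction(content=content)
```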