
Delegator agent #45

Merged
merged 3 commits into from
Sep 6, 2024
Delegator agent v1.0
cliffordattractor committed Aug 3, 2024
commit 6252c339e24c3f6581a9603fc438fee8974821d2
169 changes: 166 additions & 3 deletions submodules/moragents_dockers/README.md
@@ -1,9 +1,22 @@
# Moragents

This repo contains multiple agents and a dapp that enables you to interact with the agents, all running locally and containerized with Docker.
## Overview
This project is a Flask-based AI chat application featuring intelligent responses from various language models and embeddings. It includes file uploading, cryptocurrency swapping, and a delegator system to manage multiple agents. The application, along with a dApp for agent interaction, runs locally and is containerized with Docker.


## Usage
## Prerequisites


* [Download Ollama](https://ollama.com/) for your operating system
* After installation finishes, pull these two models:

```
ollama pull llama3
ollama pull nomic-embed-text
```


## Run with Docker Compose

Docker Compose will build and run two containers: one for the agents, the other for the UI.

```docker-compose up```
@@ -17,6 +30,7 @@ Open in the browser: ```http://localhost:3333/```
The Docker build will download the model. The first time one of the agents is called, the model will be loaded into memory, and this instance will be shared between all agents.

## Agents
Three agents are included:

### Data Agent

@@ -41,9 +55,158 @@ A typical flow looks like this:
- The agent requests any missing information, e.g. in this case the amount is missing
- Once all the information has been collected, the agent looks up the assets on the current chain, retrieves contract addresses, and generates a quote if available.
- The quote is shown to the user, who may either proceed or cancel
- If the user accepts the quote, the swap may proceed. The back-end will generate transactions which will be sent to the front-end to be signed by the user's wallet.
- If the allowance for the token being sold is too low, an approval transaction will be generated first
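
The last two steps can be sketched as follows. This is a minimal illustration of the ordering logic only; the function name and the transaction dicts are hypothetical, not helpers from this repo:

```python
def build_swap_transactions(allowance, sell_amount, approve_tx, swap_tx):
    """Return the transactions the user's wallet must sign, in order.

    If the current allowance does not cover the amount being sold,
    an approval transaction is prepended to the swap transaction.
    """
    txs = []
    if allowance < sell_amount:
        txs.append(approve_tx)
    txs.append(swap_tx)
    return txs

# Allowance too low: the wallet is asked to sign an approval first
print(build_swap_transactions(0, 100, {"kind": "approve"}, {"kind": "swap"}))
```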

### RAG Agent
This agent will answer questions about an uploaded PDF file.


# Delegator
The Delegator handles user queries by analyzing the prompt and delegating it to the appropriate agent.

## API Endpoints

1. **Chat Functionality**
- Endpoint: `POST /`
- Handles chat interactions, delegating to appropriate agents when necessary.

2. **Message History**
- Endpoint: `GET /messages`
- Retrieves chat message history.

3. **Clear Messages**
- Endpoint: `GET /clear_messages`
- Clears the chat message history.

4. **Swap Operations**
- Endpoints:
- `POST /tx_status`: Check transaction status
- `POST /allowance`: Get allowance
- `POST /approve`: Approve transaction
- `POST /swap`: Perform swap

5. **File Upload**
- Endpoint: `POST /upload`
- Handles file uploads for RAG (Retrieval-Augmented Generation) purposes.
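
As an example, the chat endpoint expects a JSON body whose `prompt` field is a single chat message. The sketch below follows the payload shape handled in `app.py`; the base URL is an assumption based on the Flask port 5000 exposed by the agents container:

```python
import json

# Assumption: the agents container exposes Flask's default port 5000 locally.
BASE_URL = "http://localhost:5000"

def chat_payload(text):
    """Build the JSON body for POST /: one user message under 'prompt'."""
    return json.dumps({"prompt": {"role": "user", "content": text}})

body = chat_payload("Swap 1 ETH for USDC")
print(body)
```

The resulting body could then be posted with any HTTP client, e.g. `curl -X POST "$BASE_URL/" -H "Content-Type: application/json" -d "$BODY"`.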



# Adding a New Agent

## Overview

Each agent is configured in the `config.py` file, which specifies the agent's path, class, and other details.

## Steps to Add a New Agent

### 1. Create a New Agent Folder

1. **Create a new folder** in the `agents/src` directory for your new agent.
2. **Implement the agent logic** within this folder. Ensure that the agent class is defined and ready to handle the specific type of queries it is designed for.

### 2. Update `config.py`

1. **Open the `config.py` file** located in the `agents/src` directory.
2. **Add a new entry** in the `DELEGATOR_CONFIG` dictionary with the following details:
- `path`: The path to the agent's module.
- `class`: The class name of the agent.
- `description`: A description of when to use this agent.
- `name`: A unique name for the agent.
- `upload`: A boolean indicating if the agent requires a file to be uploaded from the front-end before it should be called.

#### Example:
```python:agents/src/config.py
DELEGATOR_CONFIG = {
    "agents": [
        # ... existing agents ...
        {
            "path": "new_agent.src.agent",
            "class": "NewAgent",
            "description": "if the prompt is related to new functionality, choose new agent",
            "name": "new agent",
            "upload": False
        }
    ]
}
```


### 3. Implement Agent Logic

1. **Define the agent class** in the specified path.
2. **Ensure the agent can handle the queries** it is designed for.

#### Example:
```python:agents/src/new_agent/src/agent.py
class NewAgent:
    def __init__(self, agent_info, llm, llm_ollama, embeddings, flask_app):
        """
        Initialize the NewAgent.

        Parameters:
        - agent_info (dict): Configuration details for the agent.
        - llm (object): The main language model instance.
        - llm_ollama (object): An additional language model instance from Ollama.
        - embeddings (object): Embedding model for handling vector representations.
        - flask_app (Flask): The Flask application instance.
        """
        self.agent_info = agent_info
        self.llm = llm
        self.llm_ollama = llm_ollama
        self.embeddings = embeddings
        self.flask_app = flask_app

    def chat(self, request):
        # Implement chat logic
        pass

    # Add other methods as needed
```


### 4. Handle Multi-Turn Conversations

Agents can handle multi-turn conversations by returning a `next_turn_agent` value, which names the agent that should handle the next turn.

#### Example:
```python
class NewAgent:
    def __init__(self, agent_info, llm, llm_ollama, embeddings, flask_app):
        """
        Initialize the NewAgent.

        Parameters:
        - agent_info (dict): Configuration details for the agent.
        - llm (object): The main language model instance.
        - llm_ollama (object): An additional language model instance.
        - embeddings (object): Embedding model for handling vector representations.
        - flask_app (Flask): The Flask application instance.
        """
        self.agent_info = agent_info
        self.llm = llm
        self.llm_ollama = llm_ollama
        self.embeddings = embeddings
        self.flask_app = flask_app

    def chat(self, request, user_id):
        # Process the query and generate a response here
        response = ...
        # Returning this agent's own name tells the Delegator to route
        # the next turn back to the same agent (multi-turn conversation).
        next_turn_agent = self.agent_info["name"]

        return response, next_turn_agent
```

### 5. Integration

The `Delegator` will automatically:
- Import the agent module.
- Instantiate the agent class.
- Add the agent to its internal dictionary.
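
The import-and-instantiate step can be sketched with the standard `importlib` pattern. This is an illustration only, not the repo's actual `Delegator` implementation; the config keys `path` and `class` are the ones described above:

```python
import importlib

def resolve_agent_class(entry):
    """Import the module at entry['path'] and return the class entry['class']."""
    module = importlib.import_module(entry["path"])
    return getattr(module, entry["class"])

def load_agent(entry, llm, llm_ollama, embeddings, flask_app):
    """Instantiate the agent with the shared models and Flask app."""
    agent_cls = resolve_agent_class(entry)
    return agent_cls(entry, llm, llm_ollama, embeddings, flask_app)

# Demonstrate resolution against a stdlib module instead of a real agent
cls = resolve_agent_class({"path": "collections", "class": "OrderedDict"})
print(cls.__name__)  # → OrderedDict
```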

### 6. Test the New Agent

1. **Ensure the `Delegator` can properly route requests** to the new agent.
2. **Test the agent's functionality** through the chat interface.
1 change: 1 addition & 0 deletions submodules/moragents_dockers/agents/Dockerfile
@@ -28,5 +28,6 @@ EXPOSE 5000
# Set the environment variable for Flask
ENV FLASK_APP=src/app.py


# Run the application
CMD ["flask", "run", "--host", "0.0.0.0"]
22 changes: 13 additions & 9 deletions submodules/moragents_dockers/agents/requirements.txt
@@ -1,11 +1,15 @@
llama-cpp-python
transformers
sentencepiece
protobuf
scikit-learn
huggingface-hub
llama-cpp-python==0.2.65
transformers==4.43.3
sentencepiece==0.2.0
protobuf==5.27.2
scikit-learn==1.5.1
huggingface-hub==0.24.3
flask==2.2.2
Werkzeug==2.2.2
gradio > /dev/null
flask-cors
web3
flask-cors==4.0.1
web3==6.20.1
pymupdf==1.22.5
faiss-cpu==1.8.0.post1
langchain-text-splitters==0.2.2
langchain-core==0.2.24
langchain-community==0.2.10
136 changes: 95 additions & 41 deletions submodules/moragents_dockers/agents/src/app.py
@@ -1,10 +1,14 @@
from flask_cors import CORS
from flask import Flask, request, jsonify
import json
import os
import logging
from config import Config
from swap_agent.src import agent as swap_agent
from data_agent.src import agent as data_agent
from llama_cpp import Llama
from flask_cors import CORS
from flask import Flask, request, jsonify
from langchain_community.llms import Ollama
from delegator import Delegator
from llama_cpp.llama_tokenizer import LlamaHFTokenizer
from langchain_community.embeddings import OllamaEmbeddings


def load_llm():
@@ -19,54 +23,104 @@ def load_llm():
return llm


llm=load_llm()
llm = load_llm()

app = Flask(__name__)
CORS(app)

@app.route('/swap_agent/', methods=['POST'])
def swap_agent_chat():
global llm
return swap_agent.chat(request, llm)
upload_state = False
UPLOAD_FOLDER = os.path.join(os.getcwd(), 'uploads')
os.makedirs(UPLOAD_FOLDER, exist_ok=True)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['MAX_CONTENT_LENGTH'] = Config.MAX_UPLOAD_LENGTH

llm_ollama = Ollama(model="llama3", base_url=Config.OLLAMA_URL)
embeddings = OllamaEmbeddings(model="nomic-embed-text", base_url=Config.OLLAMA_URL)

logging.basicConfig(level=logging.DEBUG)

delegator = Delegator(Config.DELEGATOR_CONFIG, llm, llm_ollama, embeddings, app)
messages = [{'role': "assistant", "content": "This highly experimental chatbot is not intended for making important decisions, and its responses are generated based on incomplete data and algorithms that may evolve rapidly. By using this chatbot, you acknowledge that you use it at your own discretion and assume all risks associated with its limitations and potential errors."}]

next_turn_agent = None


@app.route('/', methods=['POST'])
def chat():
global next_turn_agent, messages
data = request.get_json()
try:
if 'prompt' in data:
prompt = data['prompt']
messages.append(prompt)
if not next_turn_agent:
result = delegator.get_delegator_response(prompt, upload_state)
if "tool_calls" not in result:
messages.append({"role": "assistant", "content": result["content"]})
return jsonify({"role": "assistant", "content": result["content"]})
else:
if not result["tool_calls"]:
messages.append({"role": "assistant", "content": result["content"]})
return jsonify({"role": "assistant", "content": result["content"]})
res = json.loads(result['tool_calls'][0]['function']['arguments'])
response_swap = delegator.delegate_chat(res["next"], request)
if "next_turn_agent" in response_swap.keys():
next_turn_agent = response_swap["next_turn_agent"]
response = {"role": response_swap["role"], "content": response_swap["content"]}
else:
response_swap = delegator.delegate_chat(next_turn_agent, request)
next_turn_agent = response_swap["next_turn_agent"]
response = {"role": response_swap["role"], "content": response_swap["content"]}
messages.append(response)
return jsonify(response)
except Exception as e:
return jsonify({"Error": str(e)}), 500

@app.route('/swap_agent/tx_status', methods=['POST'])

@app.route('/tx_status', methods=['POST'])
def swap_agent_tx_status():
return swap_agent.tx_status(request)

@app.route('/swap_agent/messages', methods=['GET'])
def swap_agent_messages():
return swap_agent.get_messages()

@app.route('/swap_agent/clear_messages', methods=['GET'])
def swap_agent_clear_messages():
return swap_agent.clear_messages()

@app.route('/swap_agent/allowance', methods=['POST'])
global messages
response = delegator.delegate_route("crypto swap agent", request, "tx_status")
messages.append(response)
return jsonify(response)


@app.route('/messages', methods=['GET'])
def get_messages():
global messages
return jsonify({"messages": messages})


@app.route('/clear_messages', methods=['GET'])
def clear_messages():
global messages
messages = [{'role': "assistant", "content": "This highly experimental chatbot is not intended for making important decisions, and its responses are generated based on incomplete data and algorithms that may evolve rapidly. By using this chatbot, you acknowledge that you use it at your own discretion and assume all risks associated with its limitations and potential errors."}]
return jsonify({"response": "successfully cleared message history"})


@app.route('/allowance', methods=['POST'])
def swap_agent_allowance():
return swap_agent.get_allowance(request)

@app.route('/swap_agent/approve', methods=['POST'])
return delegator.delegate_route("crypto swap agent", request, "get_allowance")


@app.route('/approve', methods=['POST'])
def swap_agent_approve():
return swap_agent.approve(request)

@app.route('/swap_agent/swap', methods=['POST'])
def swap_agent_swap():
return swap_agent.swap(request)
return delegator.delegate_route("crypto swap agent", request, "approve")


@app.route('/data_agent/', methods=['POST'])
def data_agent_chat():
global llm
return data_agent.chat(request, llm)
@app.route('/swap', methods=['POST'])
def swap_agent_swap():
return delegator.delegate_route("crypto swap agent", request, "swap")


@app.route('/data_agent/messages', methods=['GET'])
def data_agent_messages():
return data_agent.get_messages()
@app.route('/upload', methods=['POST'])
def rag_agent_upload():
global messages, upload_state
response = delegator.delegate_route("rag agent", request, "upload_file")
messages.append(response)
upload_state = True
return jsonify(response)

@app.route('/data_agent/clear_messages', methods=['GET'])
def data_agent_clear_messages():
return data_agent.clear_messages()


if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000, debug=True)