add implementation for paper "A Multi-LLM Agent" (#291)
* update

* add style_repaint

* restore

* restore

* add style_repaint tool

* modify style_repaint

* fix error

* fix rate limit

* improve stability

* use dashscope to convert url

* modify and improve

* fix

* refactor framework: add base class for llm, tool, and agent

* refactor llm

* refactor llm for customization

* add role_play agent

* add unittest for role-play agent

* refactor/tool/image_gen

* fix pre-commit

* add agent_builder agent

* revert modelscope_agent/agent.py

* update agent and add react prompt

* adapt agentbuilder in apps

* update style_repaint

* update style_repaint

* change pipeline plugin tool (#248)

* change pipeline plugin tool

* Standardized input and output

* Standardized input and output

* Standardized input and output

* refactor ms agent

* save system to history

* avoid duplicated system in roleplay

* builder with history running

* test case

* update tool

* update

* modify import method

* code interpreter and knowledge

* running code interpreter with file

* Refactor/citest (#257)

* add citest

* cache dependency

* cache dependency update 1

* cache dependency update 2

* cache dependency update 3

---------

Co-authored-by: Zhicheng Zhang <[email protected]>

* add modelscope tool (#258)

* add modelscope tool

* add modelscope tool

* add modelscope tool

---------

Co-authored-by: Zhicheng Zhang <[email protected]>

* add unittest for web search and web browser

* update tool and unittest

* Refactor/gradio4 (#260)

* Feat/gradio4 (#229)

* feat: update gradio 4

* feat: chatbot requirement

* feat: update app.py to gradio 4

* fix: gradio 4 of process_configuration

* fix: typo

* fix: mschatbot

* feat: chatbot support copy

---------

Co-authored-by: nil.wyw <[email protected]>

* feat: update gradio 4

* feat: fileupload

* feat: MultimodalInput for appBot

* fix: ui

* feat: MultimodalInput for app.py

* feat: Textbox => MultimodalInput

* bump version to 0.2.4rc0

* fix: lint

* fix: lint

* fix: lint

* fix: lint

* fix: add quotes for src prop (#249)

* feat: update gradio 4

* feat: chatbot requirement

* feat: update app.py to gradio 4

* fix: gradio 4 of process_configuration

* fix: typo

* fix: mschatbot

* feat: chatbot support copy

* feat: fileupload

* fix: gradio 4 dataframe update

* fix: appBot

* fix: dataframe display when config update

* fix: src format

* fix: files path

---------

Co-authored-by: nil.wyw <[email protected]>

* fix pre-commit

* fix submit input.

* fix multimodal

* fix bugs & support appBot.py

* fix pre-commit

* fix auto-merge

---------

Co-authored-by: wyw <[email protected]>
Co-authored-by: nil.wyw <[email protected]>
Co-authored-by: Col0ring <[email protected]>
Co-authored-by: wenmeng.zwm <[email protected]>
Co-authored-by: Col0ring <[email protected]>
Co-authored-by: suluyan.sly <[email protected]>
Co-authored-by: skyline2006 <[email protected]>

* update readme and tool cfg

* Refactor/unittest (#264)

* update unittest

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update requirements

* update ci

* update ci

* update ci

* update ci

* update ci

* update ci

* update ci

* update ci

* update ci

* update ci

* merge gradio4

* update ci

* update ci

* update ci

* update ci

* pass unit test

* merge and pass unit test

---------

Co-authored-by: Zhicheng Zhang <[email protected]>

* merge origin master

* fix bugs

* fix video-to-image

* add qwen-max

* fix apps/agentfabric & update release version

* apps/agentfabric/user_core.py

* fix pre-commit

* fix bugs & add video play for code interpreter

* comment out translate

* Refactor log (#268)

* update logger

* add dashscope log

* update user log

* add comment

---------

Co-authored-by: ly119399 <[email protected]>

* Refactor/knowledgeretrieval (#271)

* pass retrieval knowledge

* add unit test

* update requirements

* pass lint

* refactor name

* refactor name

---------

Co-authored-by: Zhicheng Zhang <[email protected]>

* add logger (#272)

* comment out translate

* Refactor log (#268)

* update logger

* add dashscope log

* update user log

* add comment

---------

Co-authored-by: ly119399 <[email protected]>

---------

Co-authored-by: skyline2006 <[email protected]>
Co-authored-by: lylalala <[email protected]>
Co-authored-by: ly119399 <[email protected]>
Co-authored-by: Zhicheng Zhang <[email protected]>

* pass code unittest

* fix publish for missing MODELSCOPE_API_TOKEN alert & rm translation tool

* fix bugs

* update log

* fix bugs

* fix instruction

* fix tool log

* fix bugs

* fix pre-commit

* Zhipu-glm4 (#276)

support zhipu glm-4 model

---------

Co-authored-by: skyline2006 <[email protected]>
Co-authored-by: Zhicheng Zhang <[email protected]>

* add zhipuai to requirements.txt

* make sure .md file is parsed by nltk

* Update demo 20240118 (#275)

Co-authored-by: Jintao Huang <[email protected]>

* add zhipuai

* update retrieval knowledge logic to not load in agent fabric

* update name

* fix bugs

* fix pre-commit

* fix ci

* update max_token setting

* fix doc list extend method

* fix ci

* fix ci

* add docstr

* rm useless agent: function call, react, react_chat

* bugfix

* bugfix

* bugfix: change _detect_tool for role_play

* remove uuid related log in ms

* fix publish avatar

* version 0.3.0rc0

* fix bugs

* fix bug

* fix audio->video

* feat: openapi

* fix pre-commit

* update demo

* bug fixed by comment

* lint pass

* bug fixed

* bug fixed

* update

* update openai llm

* debug & fix pre-commit

* bugs fix & add demo

* fix lint

* fix ci

* fix bug in alpha_umi agent, change test case

* fix bug of alphaumi

* fix bug

* fix bug

* fix bug

* fix bug

* fix bug

* add tools and fix bug

* fix bug

* add alpha_umi demo

* fix bug & add rapidapi tool test

* add rapidapi tool test

* rename class name

* update sh

---------

Co-authored-by: wangyijunlyy <[email protected]>
Co-authored-by: tujianhong.tjh <[email protected]>
Co-authored-by: suluyan.sly <[email protected]>
Co-authored-by: Jianhong Tu <[email protected]>
Co-authored-by: mushenL <[email protected]>
Co-authored-by: Zhicheng Zhang <[email protected]>
Co-authored-by: Zhicheng Zhang <[email protected]>
Co-authored-by: wyw <[email protected]>
Co-authored-by: nil.wyw <[email protected]>
Co-authored-by: Col0ring <[email protected]>
Co-authored-by: wenmeng.zwm <[email protected]>
Co-authored-by: Col0ring <[email protected]>
Co-authored-by: skyline2006 <[email protected]>
Co-authored-by: lylalala <[email protected]>
Co-authored-by: ly119399 <[email protected]>
Co-authored-by: Jintao Huang <[email protected]>
Co-authored-by: shenweizhou.swz <[email protected]>
Co-authored-by: shenwzh3 <[email protected]>
19 people authored Feb 29, 2024
1 parent 251b56f commit c9bb297
Showing 16 changed files with 1,639 additions and 1 deletion.
2 changes: 1 addition & 1 deletion .dev_scripts/dockerci.sh
@@ -13,4 +13,4 @@ cp tests/samples/* "${CODE_INTERPRETER_WORK_DIR}/"
ls "${CODE_INTERPRETER_WORK_DIR}"

# run ci
pytest
pytest tests
24 changes: 24 additions & 0 deletions demo/alpha_umi/run_deploy.sh
@@ -0,0 +1,24 @@
export PYTHONPATH=./

export VLLM_USE_MODELSCOPE=True
python -m vllm.entrypoints.openai.api_server \
    --model=iic/alpha-umi-planner-7b \
    --revision=v1.0.0 --trust-remote-code \
    --port 8090 \
    --dtype float16 \
    --gpu-memory-utilization 0.3 > planner.log &

python -m vllm.entrypoints.openai.api_server \
    --model=iic/alpha-umi-caller-7b \
    --revision=v1.0.0 --trust-remote-code \
    --port 8091 \
    --dtype float16 \
    --gpu-memory-utilization 0.3 > caller.log &

python -m vllm.entrypoints.openai.api_server \
    --model=iic/alpha-umi-summarizer-7b \
    --revision=v1.0.0 --trust-remote-code \
    --port 8092 \
    --dtype float16 \
    --gpu-memory-utilization 0.3 > summarizer.log &
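
The three vLLM servers above start in the background and can take a while to load their checkpoints, so running the test script immediately may fail with connection errors. Below is a minimal readiness check, a sketch rather than part of this commit: it assumes the default ports 8090-8092 from run_deploy.sh and uses only the Python standard library to poll each server's OpenAI-compatible /v1/models endpoint until it answers.

# wait_for_servers.py -- hypothetical helper, not included in this commit
import json
import time
import urllib.error
import urllib.request

PORTS = (8090, 8091, 8092)  # planner, caller, summarizer from run_deploy.sh


def wait_for_servers(timeout: float = 600.0) -> None:
    """Poll each vLLM server's /v1/models endpoint until it responds."""
    deadline = time.time() + timeout
    pending = set(PORTS)
    while pending and time.time() < deadline:
        for port in sorted(pending):
            url = f'http://localhost:{port}/v1/models'
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    model_ids = [m['id'] for m in json.load(resp).get('data', [])]
                    print(f'port {port} ready: {model_ids}')
                    pending.discard(port)
            except (urllib.error.URLError, OSError):
                pass  # server still loading; retry on the next pass
        if pending:
            time.sleep(5)
    if pending:
        raise TimeoutError(f'servers on ports {sorted(pending)} did not come up')


if __name__ == '__main__':
    wait_for_servers()

Running something like this between run_deploy.sh and run_test.sh avoids failures caused purely by model startup time.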

5 changes: 5 additions & 0 deletions demo/alpha_umi/run_test.sh
@@ -0,0 +1,5 @@
export PYTHONPATH=./
export RAPID_API_TOKEN="your rapid api token here"
export MODELSCOPE_API_TOKEN="your modelscope api token here"

python demo/alpha_umi/test_alpha_umi.py
53 changes: 53 additions & 0 deletions demo/alpha_umi/test_alpha_umi.py
@@ -0,0 +1,53 @@
import os
import time

from modelscope_agent.agents.alpha_umi import AlphaUmi
from openai import OpenAI

llm_configs = {
    'planner_llm_config': {
        'model': 'iic/alpha-umi-planner-7b',
        'model_server': 'openai',
        'api_base': 'http://localhost:8090/v1',
        'is_chat': False
    },
    'caller_llm_config': {
        'model': 'iic/alpha-umi-caller-7b',
        'model_server': 'openai',
        'api_base': 'http://localhost:8091/v1',
        'is_chat': False
    },
    'summarizer_llm_config': {
        'model': 'iic/alpha-umi-summarizer-7b',
        'model_server': 'openai',
        'api_base': 'http://localhost:8092/v1',
        'is_chat': False
    },
}


def test_alpha_umi():
    function_list = [
        "get_data_fact_for_numbers", "get_math_fact_for_numbers",
        "get_year_fact_for_numbers", "listquotes_for_current_exchange",
        "exchange_for_current_exchange"
    ]

    bot = AlphaUmi(
        function_list=function_list,
        llm_planner=llm_configs['planner_llm_config'],
        llm_caller=llm_configs['caller_llm_config'],
        llm_summarizer=llm_configs['summarizer_llm_config'],
    )

    response = bot.run('how many CNY can I exchange for 1 US dollar? \
        also, give me a special property about the number of CNY after exchange')

    for chunk in response:
        print(chunk)


if __name__ == '__main__':
    test_alpha_umi()
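
If the test produces unexpected output, one way to isolate problems is to query a single backend directly with the openai client that the test file already imports. The sketch below is illustrative only: it assumes the planner server from run_deploy.sh is listening on port 8090, and the prompt is a placeholder rather than the template the agent actually sends.

# direct_probe.py -- hypothetical debugging snippet, not part of this commit
from openai import OpenAI

# vLLM's OpenAI-compatible server does not validate the key, but the client requires one.
client = OpenAI(base_url='http://localhost:8090/v1', api_key='EMPTY')

completion = client.completions.create(
    model='iic/alpha-umi-planner-7b',
    prompt='placeholder prompt for a quick smoke test',
    max_tokens=64,
    temperature=0,
)
print(completion.choices[0].text)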
254 changes: 254 additions & 0 deletions demo/demo_alpha_umi.ipynb
@@ -0,0 +1,254 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "45d56c67-7439-4264-912a-c0b4895cac63",
"metadata": {
"execution": {
"iopub.execute_input": "2023-09-04T14:17:41.716630Z",
"iopub.status.busy": "2023-09-04T14:17:41.716258Z",
"iopub.status.idle": "2023-09-04T14:17:42.097933Z",
"shell.execute_reply": "2023-09-04T14:17:42.097255Z",
"shell.execute_reply.started": "2023-09-04T14:17:41.716610Z"
}
},
"source": [
"### clone代码"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3851d799-7162-4e73-acab-3c13cb1e43bd",
"metadata": {
"ExecutionIndicator": {
"show": true
},
"tags": []
},
"outputs": [],
"source": [
"!git clone https://github.com/modelscope/modelscope-agent.git"
]
},
{
"cell_type": "markdown",
"id": "f71e64d0-f967-4244-98ba-4e5bc4530883",
"metadata": {
"execution": {
"iopub.execute_input": "2023-09-04T14:17:41.716630Z",
"iopub.status.busy": "2023-09-04T14:17:41.716258Z",
"iopub.status.idle": "2023-09-04T14:17:42.097933Z",
"shell.execute_reply": "2023-09-04T14:17:42.097255Z",
"shell.execute_reply.started": "2023-09-04T14:17:41.716610Z"
}
},
"source": [
"### 安装特定依赖"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "489900d6-cc33-4ada-b2be-7e3a139cf6ed",
"metadata": {},
"outputs": [],
"source": [
"! cd modelscope-agent && pip install -r requirements.txt"
]
},
{
"cell_type": "markdown",
"id": "9e9f3150",
"metadata": {},
"source": [
"### 本地配置"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a027a6e8",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.chdir('modelscope-agent/demo')\n",
"\n",
"import sys\n",
"sys.path.append('../')"
]
},
{
"cell_type": "markdown",
"id": "8f35e3f7",
"metadata": {},
"source": [
"### 部署模型"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad038344",
"metadata": {},
"outputs": [],
"source": [
"os.system(\"export VLLM_USE_MODELSCOPE=True\")\n",
"os.system(\"python -m vllm.entrypoints.openai.api_server \\\n",
" --model=iic/alpha-umi-planner-7b \\\n",
" --revision=v1.0.0 --trust-remote-code \\\n",
" --port 8090 \\\n",
" --dtype float16 \\\n",
" --gpu-memory-utilization 0.3 > planner.log &\")\n",
"\n",
"os.system(\"python -m vllm.entrypoints.openai.api_server \\\n",
" --model=iic/alpha-umi-caller-7b \\\n",
" --revision=v1.0.0 --trust-remote-code \\\n",
" --port 8091 \\\n",
" --dtype float16 \\\n",
" --gpu-memory-utilization 0.3 > caller.log &\")\n",
"\n",
"os.system(\"python -m vllm.entrypoints.openai.api_server \\\n",
" --model=iic/alpha-umi-summarizer-7b \\\n",
" --revision=v1.0.0 --trust-remote-code \\\n",
" --port 8092 \\\n",
" --dtype float16 \\\n",
" --gpu-memory-utilization 0.3 > summarizer.log &\")\n"
]
},
{
"cell_type": "markdown",
"id": "3de23896",
"metadata": {},
"source": [
"### API_KEY管理"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65e5dcc8",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"print('请输入DASHSCOPE_API_KEY')\n",
"os.environ['DASHSCOPE_API_KEY'] = input()\n",
"print('请输入ModelScope Token')\n",
"os.environ['MODELSCOPE_API_TOKEN'] = input()\n",
"print('请输入RapidAPI Token')\n",
"os.environ['RAPID_API_TOKEN'] = input()"
]
},
{
"cell_type": "markdown",
"id": "8c8defa3",
"metadata": {},
"source": [
"### 构建agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01e90564",
"metadata": {},
"outputs": [],
"source": [
"from modelscope_agent.agents.alpha_umi import AlphaUmi\n",
"\n",
"\n",
"llm_configs = {\n",
" 'planner_llm_config': {\n",
" 'model': 'iic/alpha-umi-planner-7b',\n",
" 'model_server': 'openai',\n",
" 'api_base': 'http://localhost:8090/v1',\n",
" 'is_chat': False\n",
" },\n",
" 'caller_llm_config': {\n",
" 'model': 'iic/alpha-umi-caller-7b',\n",
" 'model_server': 'openai',\n",
" 'api_base': 'http://localhost:8091/v1',\n",
" 'is_chat': False\n",
" },\n",
" 'summarizer_llm_config': {\n",
" 'model': 'iic/alpha-umi-summarizer-7b',\n",
" 'model_server': 'openai',\n",
" 'api_base': 'http://localhost:8092/v1',\n",
" 'is_chat': False\n",
" },\n",
"}\n",
"\n",
"\n",
"\n",
"function_list = [\"get_data_fact_for_numbers\", \"get_math_fact_for_numbers\", \"get_year_fact_for_numbers\",\n",
" \"listquotes_for_current_exchange\",\n",
" \"exchange_for_current_exchange\"]\n",
"\n",
"\n",
"bot = AlphaUmi(\n",
" function_list=function_list,\n",
" llm_planner=llm_configs['planner_llm_config'],\n",
" llm_caller=llm_configs['caller_llm_config'],\n",
" llm_summarizer=llm_configs['summarizer_llm_config'],\n",
" )\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "064ad74e",
"metadata": {},
"source": [
"### 执行agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fdd379ab",
"metadata": {},
"outputs": [],
"source": [
"response = bot.run('how many CNY can I exchange for 3.5 US dollar? \\\n",
" also, give me a special property about the number of CNY after exchange')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85434381",
"metadata": {},
"outputs": [],
"source": [
"text = ''\n",
"for chunk in response:\n",
" text += chunk\n",
"print(text)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
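
One caveat about the deployment cell in the notebook above: os.system("export VLLM_USE_MODELSCOPE=True") runs in its own shell, so the variable is not visible to the later os.system calls that launch vLLM. A possible alternative, not part of this commit, is to set the variable on the notebook process itself so the child processes inherit it; only the first server launch is shown.

# hypothetical replacement for the notebook's deployment cell, not part of this commit
import os
import subprocess

# Environment variables set on this process are inherited by its children.
os.environ['VLLM_USE_MODELSCOPE'] = 'True'

subprocess.Popen(
    'python -m vllm.entrypoints.openai.api_server '
    '--model=iic/alpha-umi-planner-7b --revision=v1.0.0 --trust-remote-code '
    '--port 8090 --dtype float16 --gpu-memory-utilization 0.3 > planner.log',
    shell=True)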