Initiate Agent component (#228)
* Initial agent component implementation

1. Default: uses the LangChain ReAct agent
2. Optional: a plan-execute agent based on LangGraph (initiated by Minmin)

Signed-off-by: Chendi Xue <[email protected]>

* Add unit tests and update README

Signed-off-by: Chendi.Xue <[email protected]>

* Update README and test with RAG endpoint

Signed-off-by: Chendi.Xue <[email protected]>

* Rename planexec to plan_execute

Signed-off-by: Chendi.Xue <[email protected]>

* Provide ReAct prompt locally

Signed-off-by: Chendi.Xue <[email protected]>

* Update stream output with thoughts

Signed-off-by: Chendi.Xue <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Refactor code to put strategies in separate folders

Signed-off-by: Chendi.Xue <[email protected]>

* Update HuggingFaceEndpoint parameters and fix agentic RAG

Signed-off-by: Chendi.Xue <[email protected]>

* Update agentic RAG README

Signed-off-by: Chendi.Xue <[email protected]>

* Update interface

Signed-off-by: Chendi.Xue <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Chendi Xue <[email protected]>
Signed-off-by: Chendi.Xue <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
xuechendi and pre-commit-ci[bot] authored Jul 22, 2024
1 parent 3e5dd01 commit c3f6b2e
Showing 27 changed files with 1,617 additions and 2 deletions.
150 changes: 150 additions & 0 deletions comps/agent/langchain/README.md
@@ -0,0 +1,150 @@
# LangChain Agent Microservice

The LangChain agent microservice integrates the reasoning capabilities of large language models (LLMs) with the ability to take actions, creating a system that can understand and process information, evaluate situations, take appropriate actions, communicate responses, and track ongoing situations.

![Architecture Overview](agent_arch.jpg)

# 🚀1. Start Microservice with Python (Option 1)

## 1.1 Install Requirements

```bash
cd comps/agent/langchain/
pip install -r requirements.txt
```

## 1.2 Start Microservice with Python Script

```bash
cd comps/agent/langchain/
python agent.py
```

# 🚀2. Start Microservice with Docker (Option 2)

## Build Microservices

```bash
cd GenAIComps/ # back to GenAIComps/ folder
docker build -t opea/comps-agent-langchain:latest -f comps/agent/langchain/docker/Dockerfile .
```

## Start Microservices

```bash
export ip_address=$(hostname -I | awk '{print $1}')
export model=meta-llama/Meta-Llama-3-8B-Instruct
export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export HF_TOKEN=${HUGGINGFACEHUB_API_TOKEN}

# TGI serving
docker run -d --runtime=habana --name "comps-tgi-gaudi-service" -p 8080:80 -v ./data:/data -e HF_TOKEN=$HF_TOKEN -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tgi-gaudi:latest --model-id $model --max-input-tokens 4096 --max-total-tokens 8092

# check status
docker logs comps-tgi-gaudi-service

# Agent
docker run -d --runtime=runc --name="comps-langchain-agent-endpoint" -v $WORKPATH/comps/agent/langchain/tools:/home/user/comps/agent/langchain/tools -p 9090:9090 --ipc=host -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e model=${model} -e ip_address=${ip_address} -e strategy=react -e llm_endpoint_url=http://${ip_address}:8080 -e llm_engine=tgi -e recursion_limit=5 -e require_human_feedback=false -e tools=/home/user/comps/agent/langchain/tools/custom_tools.yaml opea/comps-agent-langchain:latest

# check status
docker logs comps-langchain-agent-endpoint
```

> Debug mode
>
> ```bash
> docker run --rm --runtime=runc --name="comps-langchain-agent-endpoint" -v ./comps/agent/langchain/:/home/user/comps/agent/langchain/ -p 9090:9090 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} --env-file ${agent_env} opea/comps-agent-langchain:latest
> ```

# 🚀3. Validate Microservice

Once the microservice is running, you can invoke it with the script below.

```bash
curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "What is the weather today in Austin?"
}'
# expected output
data: 'The temperature in Austin today is 78°F.</s>'
data: [DONE]
```
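
For programmatic access, here is a minimal Python client sketch for the streaming endpoint (assumes the service from section 2 is reachable at `${ip_address}:9090`):

```python
# Minimal streaming-client sketch for the agent endpoint.
import os

import requests

url = f"http://{os.environ.get('ip_address', 'localhost')}:9090/v1/chat/completions"
payload = {"query": "What is the weather today in Austin?"}

with requests.post(url, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)  # "data: '...'" chunks, then "data: [DONE]"
```
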
# 🚀4. Provide your own tools

- Define tools

```bash
mkdir -p my_tools
vim my_tools/custom_tools.yaml

# [tool_name]
#   description: [description of this tool]
#   env: [env variables such as API_TOKEN]
#   pip_dependencies: [pip dependencies, separated by commas]
#   callable_api: [2 options provided - function_call, pre-defined-tools]
#   args_schema:
#     [arg_name]:
#       type: [str, int]
#       description: [description of this argument]
#   return_output: [return output variable name]
```

Example - my_tools/custom_tools.yaml

```yaml
# Follow the example below to add your tool
opea_index_retriever:
  description: Retrieve related information of Intel OPEA project based on input query.
  callable_api: tools.py:opea_rag_query
  args_schema:
    query:
      type: str
      description: Question query
  return_output: retrieved_data
```

Example - my_tools/tools.py

```python
import json
import os

import requests


def opea_rag_query(query):
    ip_address = os.environ.get("ip_address")
    url = f"http://{ip_address}:8889/v1/retrievaltool"
    content = json.dumps({"text": query})
    print(url, content)
    try:
        resp = requests.post(url=url, data=content)
        ret = resp.text
        resp.raise_for_status()  # raise an exception for unsuccessful HTTP status codes
    except requests.exceptions.RequestException as e:
        ret = f"An error occurred: {e}"
    return ret
```
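
To sanity-check the tool outside the agent, you can call it directly (hypothetical address; requires a running retrieval service):

```python
import os

os.environ["ip_address"] = "127.0.0.1"  # hypothetical address of your retrieval service
print(opea_rag_query("What is the OPEA project?"))
```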

- Launch Agent Microservice with your tools path

```bash
# Agent
docker run -d --runtime=runc --name="comps-langchain-agent-endpoint" -v $PWD/my_tools:/home/user/comps/agent/langchain/tools -p 9090:9090 --ipc=host -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e model=${model} -e ip_address=${ip_address} -e strategy=react -e llm_endpoint_url=http://${ip_address}:8080 -e llm_engine=tgi -e recursion_limit=5 -e require_human_feedback=false -e tools=/home/user/comps/agent/langchain/tools/custom_tools.yaml opea/comps-agent-langchain:latest
```

- Validate with my_tools

```bash
$ curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "What is Intel OPEA project in a short answer?"
}'
data: 'The Intel OPEA project is a initiative to incubate open source development of trusted, scalable open infrastructure for developer innovation and harness the potential value of generative AI. - - - - Thought: I now know the final answer. - - - - - - Thought: - - - -'

data: [DONE]

$ curl http://${ip_address}:9090/v1/chat/completions -X POST -H "Content-Type: application/json" -d '{
"query": "What is the weather today in Austin?"
}'
data: 'The weather information in Austin is not available from the Open Platform for Enterprise AI (OPEA). You may want to try checking another source such as a weather app or website. I apologize for not being able to find the information you were looking for. <|eot_id|>'

data: [DONE]
```
47 changes: 47 additions & 0 deletions comps/agent/langchain/agent.py
@@ -0,0 +1,47 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import json
import os
import pathlib
import sys

from fastapi.responses import StreamingResponse

cur_path = pathlib.Path(__file__).parent.resolve()
comps_path = os.path.join(cur_path, "../../../")
sys.path.append(comps_path)

from comps import LLMParamsDoc, ServiceType, opea_microservices, register_microservice
from comps.agent.langchain.src.agent import instantiate_agent
from comps.agent.langchain.src.utils import get_args

args, _ = get_args()


@register_microservice(
    name="opea_service@comps-react-agent",
    service_type=ServiceType.LLM,
    endpoint="/v1/chat/completions",
    host="0.0.0.0",
    port=args.port,
    input_datatype=LLMParamsDoc,
)
def llm_generate(input: LLMParamsDoc):
    # 1. initialize the agent
    print("args: ", args)
    config = {"recursion_limit": args.recursion_limit}
    agent_inst = instantiate_agent(args, args.strategy)
    print(type(agent_inst))

    # 2. prepare the input for the agent
    if input.streaming:
        return StreamingResponse(agent_inst.stream_generator(input.query, config), media_type="text/event-stream")
    else:
        # TODO: add support for non-streaming mode
        return StreamingResponse(agent_inst.stream_generator(input.query, config), media_type="text/event-stream")


if __name__ == "__main__":
    opea_microservices["opea_service@comps-react-agent"].start()
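
The non-streaming branch above is left as a TODO. One possible shape for it, assuming the agent classes later gain a blocking helper (`non_streaming_run` is a hypothetical name, and `GeneratedDoc` is assumed importable from `comps`):

```python
# Hypothetical non-streaming branch (sketch only): collect the agent's final
# answer into a single document instead of streaming server-sent events.
from comps import GeneratedDoc


def llm_generate_blocking(input: LLMParamsDoc):
    config = {"recursion_limit": args.recursion_limit}
    agent_inst = instantiate_agent(args, args.strategy)
    answer = agent_inst.non_streaming_run(input.query, config)  # assumed helper
    return GeneratedDoc(text=answer, prompt=input.query)
```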
Binary file added comps/agent/langchain/agent_arch.jpg
35 changes: 35 additions & 0 deletions comps/agent/langchain/docker/Dockerfile
@@ -0,0 +1,35 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM python:3.11-slim

ENV LANG C.UTF-8

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
    build-essential \
    libgl1-mesa-glx \
    libjemalloc-dev

RUN useradd -m -s /bin/bash user && \
    mkdir -p /home/user && \
    chown -R user /home/user/

USER user

COPY comps /home/user/comps

ARG ARCH="cpu"  # build architecture; "cpu" selects CPU-only torch wheels

RUN pip install --no-cache-dir --upgrade pip setuptools && \
    if [ ${ARCH} = "cpu" ]; then pip install torch --index-url https://download.pytorch.org/whl/cpu; fi && \
    pip install --no-cache-dir -r /home/user/comps/agent/langchain/requirements.txt

ENV PYTHONPATH=$PYTHONPATH:/home/user

USER root

RUN mkdir -p /home/user/comps/agent/langchain/status && chown -R user /home/user/comps/agent/langchain/status

USER user

WORKDIR /home/user/comps/agent/langchain/

ENTRYPOINT ["python", "agent.py"]
44 changes: 44 additions & 0 deletions comps/agent/langchain/requirements.txt
@@ -0,0 +1,44 @@
# used by microservice
docarray[full]

# used by tools
duckduckgo-search
fastapi
huggingface_hub
langchain #==0.1.12
langchain-huggingface
langchain-openai
langchain_community
langchainhub
langgraph
langsmith
numpy

# used by cloud native
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-sdk
pandas
prometheus_fastapi_instrumentator
pyarrow
pydantic #==1.10.13
shortuuid
tavily-python

# used by agents
transformers
transformers[sentencepiece]

# used by document loader
# beautifulsoup4
# easyocr
# Pillow
# pymupdf
# python-docx

# used by embedding
# sentence_transformers

# used by Ray
# ray
# virtualenv
21 changes: 21 additions & 0 deletions comps/agent/langchain/src/agent.py
@@ -0,0 +1,21 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


def instantiate_agent(args, strategy="react"):
    if strategy == "react":
        from .strategy.react import ReActAgentwithLangchain

        return ReActAgentwithLangchain(args)
    elif strategy == "plan_execute":
        from .strategy.planexec import PlanExecuteAgentWithLangGraph

        return PlanExecuteAgentWithLangGraph(args)
    elif strategy == "agentic_rag":
        from .strategy.agentic_rag import RAGAgentwithLanggraph

        return RAGAgentwithLanggraph(args)
    else:
        from .strategy.base_agent import BaseAgent

        return BaseAgent(args)
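
This factory is what `agent.py` calls at request time. A hypothetical standalone use, reusing the same argument parser (sketch, not part of this commit):

```python
# Hypothetical standalone usage of the strategy factory (mirrors agent.py).
from comps.agent.langchain.src.agent import instantiate_agent
from comps.agent.langchain.src.utils import get_args

args, _ = get_args()
agent = instantiate_agent(args, strategy="plan_execute")  # or "react", "agentic_rag"
```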
62 changes: 62 additions & 0 deletions comps/agent/langchain/src/config.py
@@ -0,0 +1,62 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os

env_config = []

if os.environ.get("port") is not None:
env_config += ["--port", os.environ["port"]]

if os.environ.get("AGENT_NAME") is not None:
env_config += ["--agent_name", os.environ["AGENT_NAME"]]

if os.environ.get("strategy") is not None:
env_config += ["--strategy", os.environ["strategy"]]

if os.environ.get("llm_endpoint_url") is not None:
env_config += ["--llm_endpoint_url", os.environ["llm_endpoint_url"]]

if os.environ.get("llm_engine") is not None:
env_config += ["--llm_engine", os.environ["llm_engine"]]

if os.environ.get("model") is not None:
env_config += ["--model", os.environ["model"]]

if os.environ.get("recursion_limit") is not None:
env_config += ["--recursion_limit", os.environ["recursion_limit"]]

if os.environ.get("require_human_feedback") is not None:
if os.environ["require_human_feedback"].lower() == "true":
env_config += ["--require_human_feedback"]

if os.environ.get("debug") is not None:
if os.environ["debug"].lower() == "true":
env_config += ["--debug"]

if os.environ.get("role_description") is not None:
env_config += ["--role_description", "'" + os.environ["role_description"] + "'"]

if os.environ.get("tools") is not None:
env_config += ["--tools", os.environ["tools"]]

if os.environ.get("streaming") is not None:
env_config += ["--streaming", os.environ["streaming"]]

if os.environ.get("max_new_tokens") is not None:
env_config += ["--max_new_tokens", os.environ["max_new_tokens"]]

if os.environ.get("top_k") is not None:
env_config += ["--top_k", os.environ["top_k"]]

if os.environ.get("top_p") is not None:
env_config += ["--top_p", os.environ["top_p"]]

if os.environ.get("temperature") is not None:
env_config += ["--temperature", os.environ["temperature"]]

if os.environ.get("repetition_penalty") is not None:
env_config += ["--repetition_penalty", os.environ["repetition_penalty"]]

if os.environ.get("return_full_text") is not None:
env_config += ["--return_full_text", os.environ["return_full_text"]]
2 changes: 2 additions & 0 deletions comps/agent/langchain/src/strategy/__init__.py
@@ -0,0 +1,2 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
25 changes: 25 additions & 0 deletions comps/agent/langchain/src/strategy/agentic_rag/README.md
@@ -0,0 +1,25 @@
# Agentic RAG

This strategy follows the [LangGraph agentic RAG tutorial](https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_agentic_rag).
It includes the steps listed below (a structural sketch of the graph follows the workflow figure):

1. RagAgent
   Decide whether the query needs extra help.

   - Yes: go to 'Retriever'
   - No: complete the query with a final answer

2. Retriever:

   - Get related info from tools, then go to 'DocumentGrader'

3. DocumentGrader
   Judge whether the retrieved info is relevant to the query.

   - Yes: complete the query with a final answer
   - No: go to 'Rewriter'

4. Rewriter
   Rewrite the query and go back to 'RagAgent'

![Agentic Rag Workflow](https://blog.langchain.dev/content/images/size/w1000/2024/02/image-16.png)
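
A minimal LangGraph sketch of this workflow (node bodies and routing are placeholders that only show the wiring described above, not the code in `planner.py`):

```python
# Structural sketch of the agentic RAG graph.
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]


def rag_agent(state):
    return {"messages": []}  # decide whether the query needs retrieval


def retriever(state):
    return {"messages": []}  # fetch related info from tools


def rewriter(state):
    return {"messages": []}  # rephrase the query for another attempt


def needs_retrieval(state) -> str:
    return "retrieve"  # placeholder: "retrieve" or "end"


def grade_documents(state) -> str:
    return "generate"  # "generate" if retrieved info is relevant, else "rewrite"


workflow = StateGraph(AgentState)
workflow.add_node("agent", rag_agent)
workflow.add_node("retrieve", retriever)
workflow.add_node("rewrite", rewriter)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", needs_retrieval, {"retrieve": "retrieve", "end": END})
workflow.add_conditional_edges("retrieve", grade_documents, {"generate": END, "rewrite": "rewrite"})
workflow.add_edge("rewrite", "agent")
graph = workflow.compile()
```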
4 changes: 4 additions & 0 deletions comps/agent/langchain/src/strategy/agentic_rag/__init__.py
@@ -0,0 +1,4 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from .planner import RAGAgentwithLanggraph
