Commit: 0.1.2 (#4)
* Separate endpoints for vector and graph-only options

* Vector chain updated to create a vector index if none is already present in the database

* Dependencies updated

* LLM var moved from chains to a config file

* Changelog added

* README updated with notes on data requirements for vector support
jalakoo authored Jul 28, 2024
1 parent 31f11db commit ee451f3
Showing 9 changed files with 666 additions and 463 deletions.
42 changes: 42 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,42 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.2] - 2024-07-27

### Added

- Separate endpoints for vector and graph-only options

### Changed

- Vector chain updated to create a vector index if none is already present in the database
- Mode option removed from the POST payload; requests now require only the 'message' key-value pair
- Dependencies updated

## [0.1.1] - 2024-06-05

### Added

- CORS middleware
- Neo4j exception middleware

### Changed

- Replaced deprecated LLMChain implementation
- Vector chain simplified to use RetrievalQA chain
- Dependencies updated

## [0.1.0] - 2024-04-05

### Added

- Initial release.
- Core functionality implemented, including:
- FastAPI wrapper
- Vector chain example
- Graph chain example
- Simple Agent example that aggregates results of the Vector and Graph retrievers
10 changes: 9 additions & 1 deletion README.md
@@ -11,6 +11,14 @@ This kit provides a simple [FastAPI](https://fastapi.tiangolo.com/) backend serv
- An [OpenAI API Key](https://openai.com/blog/openai-api)
- A running [local](https://neo4j.com/download/) or [cloud](https://neo4j.com/cloud/platform/aura-graph-database/) Neo4j database

## Presumptions

The vector portion of this kit presumes the following about the source data:

- Nodes labeled 'Chunk' already exist in the database. The target label can be changed in app/vector_chain.py, line 49
- Node records contain a 'text' property holding the unstructured data of interest. This can be changed in app/vector_chain.py, line 52
- Node records contain a 'sources' property, which is used by LangChain's [RetrievalQAWithSourcesChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain.html)
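The expected node shape can be illustrated with a small sketch. The label ('Chunk') and property names ('text', 'sources') are the kit's defaults noted above; every value below is a made-up example:

```python
# Hypothetical illustration of the data shape the vector chain expects.
# SEED_CYPHER could be run against a Neo4j instance (e.g. via the neo4j
# Python driver) with example_chunk as its parameters.
SEED_CYPHER = """
CREATE (c:Chunk {text: $text, sources: $sources})
"""

example_chunk = {
    "text": "Example unstructured text that the vector index will embed.",
    "sources": "https://example.com/original-document",
}
```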

## Usage

Add a .env file to the root folder with the following keys and your own credentials (or use the included public, read-only credentials):
@@ -23,7 +31,7 @@ NEO4J_PASSWORD=read_only
OPENAI_API_KEY=<your_openai_key_here>
```

Then run: `poetry run uvicorn app.server:app --reload --port=8000`

Or add env variables at runtime:

17 changes: 17 additions & 0 deletions app/config.py
@@ -0,0 +1,17 @@
import os

# Neo4j Credentials
NEO4J_URI = os.getenv("NEO4J_URI")
NEO4J_DATABASE = os.getenv("NEO4J_DATABASE")
NEO4J_USERNAME = os.getenv("NEO4J_USERNAME")
NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD")

# ==================
# Change models here
# ==================
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
LLM = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
EMBEDDINGS = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
# ==================
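The new config module reads each setting once at import time. The same pattern can be extended with a guard for mandatory variables; a minimal sketch (the `require` helper below is not part of the kit):

```python
import os

# Sketch of app/config.py's read-once pattern, extended with a hypothetical
# helper that fails fast when a mandatory variable is unset, instead of
# passing None to the driver later.
def require(name: str) -> str:
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Optional settings can keep soft defaults; credentials should be required.
NEO4J_DATABASE = os.getenv("NEO4J_DATABASE", "neo4j")
```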
14 changes: 2 additions & 12 deletions app/graph_chain.py
@@ -2,7 +2,7 @@
from langchain_community.graphs import Neo4jGraph
from langchain.prompts.prompt import PromptTemplate
from langchain.schema.runnable import Runnable
from langchain_openai import ChatOpenAI
from app.config import LLM, NEO4J_DATABASE, NEO4J_PASSWORD, NEO4J_URI, NEO4J_USERNAME
import os

CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.
@@ -44,14 +44,6 @@

def graph_chain() -> Runnable:

NEO4J_URI = os.getenv("NEO4J_URI")
NEO4J_DATABASE = os.getenv("NEO4J_DATABASE")
NEO4J_USERNAME = os.getenv("NEO4J_USERNAME")
NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

LLM = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)

graph = Neo4jGraph(
url=NEO4J_URI,
username=NEO4J_USERNAME,
@@ -60,8 +52,6 @@ def graph_chain() -> Runnable:
sanitize=True,
)

graph.refresh_schema()

# Official API doc for GraphCypherQAChain at: https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.base.GraphQAChain.html#
graph_chain = GraphCypherQAChain.from_llm(
cypher_llm=LLM,
@@ -70,7 +60,7 @@
graph=graph,
verbose=True,
return_intermediate_steps=True,
# return_direct = True,
return_direct=True,
)

return graph_chain
153 changes: 69 additions & 84 deletions app/server.py
@@ -1,113 +1,98 @@
from __future__ import annotations
from typing import Union
from app.graph_chain import graph_chain, CYPHER_GENERATION_PROMPT
from app.vector_chain import vector_chain, VECTOR_PROMPT
from app.simple_agent import simple_agent_chain
from fastapi import FastAPI, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from starlette.middleware.base import BaseHTTPMiddleware
from fastapi import FastAPI
from typing import Union, Optional
from pydantic import BaseModel, Field
from neo4j import exceptions
import logging


class ApiChatPostRequest(BaseModel):
message: str = Field(..., description="The chat message to send")
mode: str = Field(
"agent",
description='The mode of the chat message. Current options are: "vector", "graph", "agent". Default is "agent"',
)


class ApiChatPostResponse(BaseModel):
response: str


class Neo4jExceptionMiddleware(BaseHTTPMiddleware):
async def dispatch(self, request: Request, call_next):
try:
response = await call_next(request)
return response
except exceptions.AuthError as e:
msg = f"Neo4j Authentication Error: {e}"
logging.warning(msg)
return Response(content=msg, status_code=400, media_type="text/plain")
except exceptions.ServiceUnavailable as e:
msg = f"Neo4j Database Unavailable Error: {e}"
logging.warning(msg)
return Response(content=msg, status_code=400, media_type="text/plain")
except Exception as e:
msg = f"Neo4j Uncaught Exception: {e}"
logging.error(msg)
return Response(content=msg, status_code=400, media_type="text/plain")


# Allowed CORS origins
origins = [
"http://127.0.0.1:8000", # Alternative localhost address
"http://localhost:8000",
]
message: Optional[str] = Field(None, description="The chat message response")


app = FastAPI()

# Add CORS middleware to allow cross-origin requests
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],

@app.post(
"/api/chat",
response_model=None,
responses={"201": {"model": ApiChatPostResponse}},
tags=["chat"],
description="Endpoint utilizing a simple agent to composite responses from the Vector and Graph chains interfacing with a Neo4j instance.",
)
# Add Neo4j exception handling middleware
app.add_middleware(Neo4jExceptionMiddleware)
def send_chat_message(body: ApiChatPostRequest) -> Union[None, ApiChatPostResponse]:
"""
Send a chat message
"""

question = body.message

v_response = vector_chain().invoke(
{"question": question}, prompt=VECTOR_PROMPT, return_only_outputs=True
)
g_response = graph_chain().invoke(
{"query": question}, prompt=CYPHER_GENERATION_PROMPT, return_only_outputs=True
)

# Return an answer from a chain that composites both the Vector and Graph responses
response = simple_agent_chain().invoke(
{
"question": question,
"vector_result": v_response,
"graph_result": g_response,
}
)

return f"{response}", 200


@app.post(
"/api/chat",
"/api/chat/vector",
response_model=None,
responses={"201": {"model": ApiChatPostResponse}},
tags=["chat"],
description="Endpoint for utilizing only vector index for querying Neo4j instance.",
)
def send_chat_vector_message(
body: ApiChatPostRequest,
) -> Union[None, ApiChatPostResponse]:
"""
Send a chat message
"""

question = body.message

response = vector_chain().invoke(
{"question": question}, prompt=VECTOR_PROMPT, return_only_outputs=True
)

return f"{response}", 200


@app.post(
"/api/chat/graph",
response_model=None,
responses={"201": {"model": ApiChatPostResponse}},
tags=["chat"],
description="Endpoint using only Text2Cypher for querying with Neo4j instance.",
)
async def send_chat_message(body: ApiChatPostRequest):
def send_chat_graph_message(
body: ApiChatPostRequest,
) -> Union[None, ApiChatPostResponse]:
"""
Send a chat message
"""

question = body.message

# Simple exception check. See https://neo4j.com/docs/api/python-driver/current/api.html#errors for full set of driver exceptions

if body.mode == "vector":
# Return only the Vector answer
v_response = vector_chain().invoke(
{"query": question}, prompt=VECTOR_PROMPT, return_only_outputs=True
)
response = v_response
elif body.mode == "graph":
# Return only the Graph (text2Cypher) answer
g_response = graph_chain().invoke(
{"query": question},
prompt=CYPHER_GENERATION_PROMPT,
return_only_outputs=True,
)
response = g_response["result"]
else:
# Return both vector + graph answers
v_response = vector_chain().invoke(
{"query": question}, prompt=VECTOR_PROMPT, return_only_outputs=True
)
g_response = graph_chain().invoke(
{"query": question},
prompt=CYPHER_GENERATION_PROMPT,
return_only_outputs=True,
)["result"]

# Synthesize a composite of both the Vector and Graph responses
response = simple_agent_chain().invoke(
{
"question": question,
"vector_result": v_response,
"graph_result": g_response,
}
)

return response, 200
response = graph_chain().invoke(
{"query": question}, prompt=CYPHER_GENERATION_PROMPT, return_only_outputs=True
)

return f"{response}", 200
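Client-side, the three endpoints added in this commit differ only in path; each accepts a JSON body whose only required key is 'message'. A hedged sketch of payload construction (paths taken from the diff above; actually sending the request assumes the server is running, e.g. via the uvicorn command in the README):

```python
import json

# The three chat endpoints from app/server.py and a hypothetical helper
# that pairs an endpoint path with a serialized request body.
ENDPOINTS = {
    "agent": "/api/chat",
    "vector": "/api/chat/vector",
    "graph": "/api/chat/graph",
}

def build_request(mode: str, message: str) -> tuple[str, str]:
    # Returns the endpoint path and the JSON body; transport (urllib,
    # curl, etc.) is left to the caller.
    return ENDPOINTS[mode], json.dumps({"message": message})
```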
1 change: 0 additions & 1 deletion app/simple_agent.py
@@ -2,7 +2,6 @@
from langchain.prompts import PromptTemplate
from langchain.schema.runnable import Runnable
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain_core.prompts import PromptTemplate
import os
