
Commit

…js-template into brace/gen-ui-abstraction-updates
bracesproul committed Jul 16, 2024
2 parents ab05ffa + 214efbf commit a7ac38d
Showing 16 changed files with 1,202 additions and 255 deletions.
7 changes: 6 additions & 1 deletion .gitignore
@@ -34,5 +34,10 @@ yarn-error.log*
*.tsbuildinfo
next-env.d.ts

-.yarn
+.yarn/*
+!.yarn/patches
+!.yarn/plugins
+!.yarn/releases
+!.yarn/sdks
+!.yarn/versions
.env
873 changes: 873 additions & 0 deletions .yarn/releases/yarn-3.5.1.cjs

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions .yarnrc.yml
@@ -0,0 +1,3 @@
+nodeLinker: node-modules
+
+yarnPath: .yarn/releases/yarn-3.5.1.cjs
19 changes: 10 additions & 9 deletions README.md
@@ -14,6 +14,8 @@ use cases. Specifically:

Most of them use Vercel's [AI SDK](https://github.com/vercel-labs/ai) to stream tokens to the client and display the incoming messages.

+The agents use [LangGraph.js](https://langchain-ai.github.io/langgraphjs/), LangChain's framework for building agentic workflows. They use preconfigured helper functions to minimize boilerplate, but you can replace them with custom graphs as desired.

![Demo GIF](/public/images/agent-convo.gif)

It's free-tier friendly too! Check out the [bundle size stats below](#-bundle-size).
@@ -53,7 +55,7 @@ Click the `Structured Output` link in the navbar to try it out:
The chain in this example uses a [popular library called Zod](https://zod.dev) to construct a schema, then formats it in the way OpenAI expects.
It then passes that schema as a function into OpenAI and passes a `function_call` parameter to force OpenAI to return arguments in the specified format.

-For more details, [check out this documentation page](https://js.langchain.com/docs/modules/chains/popular/structured_output).
+For more details, [check out this documentation page](https://js.langchain.com/v0.2/docs/how_to/structured_output).
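
To see the core idea in isolation, here is a minimal sketch using the newer `withStructuredOutput` convenience method, which wraps the same schema-as-function mechanism (the schema and its fields are illustrative, not the template's exact code):

```ts
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Illustrative schema; the template's route defines its own fields.
const joke = z.object({
  setup: z.string().describe("The setup line of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 });

// Binds the schema as a function definition and forces the model
// to return arguments matching it.
const structuredModel = model.withStructuredOutput(joke, { name: "joke" });

const result = await structuredModel.invoke("Tell me a joke about parrots.");
// result is a typed object: { setup: string; punchline: string }
```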

## 🦜 Agents

@@ -64,16 +66,15 @@ You can then click the `Agent` example and try asking it more complex questions:

![A streaming conversation between the user and an AI agent](/public/images/agent-conversation.png)

-This example uses the OpenAI Functions agent, but there are a few other options you can try as well.
-See [this documentation page for more details](https://js.langchain.com/docs/modules/agents/agent_types/).
+This example uses a [prebuilt LangGraph agent](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/), but you can customize your own as well.
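
In outline, the prebuilt agent in this commit's `app/api/chat/agents/route.ts` boils down to a few lines. A minimal sketch with the streaming logic omitted (the tool list and question are placeholders):

```ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0 }),
  tools: [new Calculator()],
  // Prepends a system message to steer the agent's final responses.
  messageModifier: new SystemMessage("You are a talking parrot named Polly."),
});

// The prebuilt graph accepts and returns LangChain messages.
const result = await agent.invoke({
  messages: [new HumanMessage("What is 13 * 17?")],
});
console.log(result.messages[result.messages.length - 1].content);
```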

## 🐶 Retrieval

The retrieval examples both use Supabase as a vector store. However, you can swap in
-[another supported vector store](https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/) if preferred by changing
+[another supported vector store](https://js.langchain.com/v0.2/docs/integrations/vectorstores) if preferred by changing
the code under `app/api/retrieval/ingest/route.ts`, `app/api/chat/retrieval/route.ts`, and `app/api/chat/retrieval_agents/route.ts`.

-For Supabase, follow [these instructions](https://js.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase) to set up your
+For Supabase, follow [these instructions](https://js.langchain.com/v0.2/docs/integrations/vectorstores/supabase) to set up your
database, then get your database URL and private key and paste them into `.env.local`.
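
Once the database is set up, connecting to it from the routes looks roughly like this (a sketch; the `documents` table and `match_documents` function names come from the linked setup instructions, and the environment variable names are assumptions):

```ts
import { createClient } from "@supabase/supabase-js";
import { OpenAIEmbeddings } from "@langchain/openai";
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";

const client = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_PRIVATE_KEY!,
);

// Table and query names follow the standard Supabase vector store setup.
const vectorstore = new SupabaseVectorStore(new OpenAIEmbeddings(), {
  client,
  tableName: "documents",
  queryName: "match_documents",
});
```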

You can then switch to the `Retrieval` and `Retrieval Agent` examples. The default document text is pulled from the LangChain.js retrieval
@@ -88,12 +89,12 @@ After splitting, embedding, and uploading some text, you're ready to ask questions!

![A streaming conversation between the user and an AI retrieval agent](/public/images/retrieval-agent-conversation.png)

-For more info on retrieval chains, [see this page](https://js.langchain.com/docs/use_cases/question_answering/).
+For more info on retrieval chains, [see this page](https://js.langchain.com/v0.2/docs/tutorials/rag).
The specific variant of the conversational retrieval chain used here is composed using LangChain Expression Language, which you can
-[read more about here](https://js.langchain.com/docs/guides/expression_language/cookbook). This chain example will also return cited sources
+[read more about here](https://js.langchain.com/v0.2/docs/how_to/qa_sources/). This chain example will also return cited sources
via header in addition to the streaming response.
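
As a simplified illustration of that composition style (not the template's full chain, which also rephrases follow-up questions and attaches sources):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const prompt = PromptTemplate.fromTemplate(
  `Answer the question using only this context:\n{context}\n\nQuestion: {question}`,
);

// `.pipe()` composes runnables: prompt -> model -> output parser.
const chain = prompt
  .pipe(new ChatOpenAI({ model: "gpt-3.5-turbo-0125", temperature: 0.2 }))
  .pipe(new StringOutputParser());

const answer = await chain.invoke({
  context: "Parrots can mimic human speech.",
  question: "What can parrots do?",
});
```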

-For more info on retrieval agents, [see this page](https://js.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents).
+For more info on retrieval agents, [see this page](https://langchain-ai.github.io/langgraphjs/tutorials/rag/langgraph_agentic_rag/).

## 📦 Bundle size

@@ -110,7 +111,7 @@ $ ANALYZE=true yarn build
## 📚 Learn More

The example chains in the `app/api/chat/route.ts` and `app/api/chat/retrieval/route.ts` files use
-[LangChain Expression Language](https://js.langchain.com/docs/guides/expression_language/interface) to
+[LangChain Expression Language](https://js.langchain.com/v0.2/docs/concepts#langchain-expression-language) to
compose different LangChain.js modules together. You can integrate other retrievers, agents, preconfigured chains, and more too, though keep in mind
`HttpResponseOutputParser` is meant to be used directly with model output.
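
For instance, a minimal streaming chain ending in `HttpResponseOutputParser` might look like the following sketch (the actual routes also format prior chat history into the prompt):

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

const prompt = PromptTemplate.fromTemplate(`You are a pirate. {input}`);

// HttpResponseOutputParser encodes model tokens as bytes suitable
// for returning in a streaming HTTP response.
const chain = prompt
  .pipe(new ChatOpenAI({ model: "gpt-3.5-turbo-0125" }))
  .pipe(new HttpResponseOutputParser());

const stream = await chain.stream({ input: "Where is the treasure?" });
// `stream` can be passed directly to `new StreamingTextResponse(stream)`.
```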

131 changes: 62 additions & 69 deletions app/api/chat/agents/route.ts
@@ -1,16 +1,17 @@
import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";

-import { AgentExecutor, createToolCallingAgent } from "langchain/agents";
+import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { SerpAPI } from "@langchain/community/tools/serpapi";
import { Calculator } from "@langchain/community/tools/calculator";
-import { AIMessage, ChatMessage, HumanMessage } from "@langchain/core/messages";

import {
-  ChatPromptTemplate,
-  MessagesPlaceholder,
-} from "@langchain/core/prompts";
+  AIMessage,
+  BaseMessage,
+  ChatMessage,
+  HumanMessage,
+  SystemMessage,
+} from "@langchain/core/messages";

export const runtime = "edge";

@@ -24,99 +25,92 @@ const convertVercelMessageToLangChainMessage = (message: VercelChatMessage) => {
  }
};

+const convertLangChainMessageToVercelMessage = (message: BaseMessage) => {
+  if (message._getType() === "human") {
+    return { content: message.content, role: "user" };
+  } else if (message._getType() === "ai") {
+    return {
+      content: message.content,
+      role: "assistant",
+      tool_calls: (message as AIMessage).tool_calls,
+    };
+  } else {
+    return { content: message.content, role: message._getType() };
+  }
+};

const AGENT_SYSTEM_TEMPLATE = `You are a talking parrot named Polly. All final responses must be how a talking parrot would respond. Squawk often!`;

/**
- * This handler initializes and calls an OpenAI Functions agent.
+ * This handler initializes and calls a tool calling ReAct agent.
 * See the docs for more information:
 *
- * https://js.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
+ * https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/
 */
export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
+    const returnIntermediateSteps = body.show_intermediate_steps;
    /**
     * We represent intermediate steps as system messages for display purposes,
     * but don't want them in the chat history.
     */
-    const messages = (body.messages ?? []).filter(
-      (message: VercelChatMessage) =>
-        message.role === "user" || message.role === "assistant",
-    );
-    const returnIntermediateSteps = body.show_intermediate_steps;
-    const previousMessages = messages
-      .slice(0, -1)
+    const messages = (body.messages ?? [])
+      .filter(
+        (message: VercelChatMessage) =>
+          message.role === "user" || message.role === "assistant",
+      )
      .map(convertVercelMessageToLangChainMessage);
-    const currentMessageContent = messages[messages.length - 1].content;

    // Requires process.env.SERPAPI_API_KEY to be set: https://serpapi.com/
    // You can remove this or use a different tool instead.
    const tools = [new Calculator(), new SerpAPI()];
    const chat = new ChatOpenAI({
-      modelName: "gpt-3.5-turbo-1106",
+      model: "gpt-3.5-turbo-0125",
      temperature: 0,
-      // IMPORTANT: Must "streaming: true" on OpenAI to enable final output streaming below.
-      streaming: true,
    });

    /**
-     * Based on https://smith.langchain.com/hub/hwchase17/openai-functions-agent
-     *
-     * This default prompt for the OpenAI functions agent has a placeholder
-     * where chat messages get inserted as "chat_history".
-     *
-     * You can customize this prompt yourself!
+     * Use a prebuilt LangGraph agent.
     */
-    const prompt = ChatPromptTemplate.fromMessages([
-      ["system", AGENT_SYSTEM_TEMPLATE],
-      new MessagesPlaceholder("chat_history"),
-      ["human", "{input}"],
-      new MessagesPlaceholder("agent_scratchpad"),
-    ]);

-    const agent = await createToolCallingAgent({
+    const agent = createReactAgent({
      llm: chat,
      tools,
-      prompt,
-    });
-
-    const agentExecutor = new AgentExecutor({
-      agent,
-      tools,
-      // Set this if you want to receive all intermediate steps in the output of .invoke().
-      returnIntermediateSteps,
+      /**
+       * Modify the stock prompt in the prebuilt agent. See docs
+       * for how to customize your agent:
+       *
+       * https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/
+       */
+      messageModifier: new SystemMessage(AGENT_SYSTEM_TEMPLATE),
    });

    if (!returnIntermediateSteps) {
      /**
-       * Agent executors also allow you to stream back all generated tokens and steps
-       * from their runs.
+       * Stream back all generated tokens and steps from their runs.
       *
-       * This contains a lot of data, so we do some filtering of the generated log chunks
-       * and only stream back the final response.
+       * We do some filtering of the generated events and only stream back
+       * the final response as a string.
       *
-       * This filtering is easiest with the OpenAI functions or tools agents, since final outputs
-       * are log chunk values from the model that contain a string instead of a function call object.
+       * For this specific type of tool calling ReAct agent with OpenAI, we can tell when
+       * the agent is ready to stream back final output when it no longer calls
+       * a tool and instead streams back content.
       *
-       * See: https://js.langchain.com/docs/modules/agents/how_to/streaming#streaming-tokens
+       * See: https://langchain-ai.github.io/langgraphjs/how-tos/stream-tokens/
       */
-      const logStream = await agentExecutor.streamLog({
-        input: currentMessageContent,
-        chat_history: previousMessages,
-      });
+      const eventStream = await agent.streamEvents(
+        { messages },
+        { version: "v2" },
+      );

      const textEncoder = new TextEncoder();
      const transformStream = new ReadableStream({
        async start(controller) {
-          for await (const chunk of logStream) {
-            if (chunk.ops?.length > 0 && chunk.ops[0].op === "add") {
-              const addOp = chunk.ops[0];
-              if (
-                addOp.path.startsWith("/logs/ChatOpenAI") &&
-                typeof addOp.value === "string" &&
-                addOp.value.length
-              ) {
-                controller.enqueue(textEncoder.encode(addOp.value));
+          for await (const { event, data } of eventStream) {
+            if (event === "on_chat_model_stream") {
+              // Intermediate chat model generations will contain tool calls and no content
+              if (!!data.chunk.content) {
+                controller.enqueue(textEncoder.encode(data.chunk.content));
              }
            }
          }
@@ -127,16 +121,15 @@ export async function POST(req: NextRequest) {
      return new StreamingTextResponse(transformStream);
    } else {
      /**
-       * Intermediate steps are the default outputs with the executor's `.stream()` method.
-       * We could also pick them out from `streamLog` chunks.
-       * They are generated as JSON objects, so streaming them is a bit more complicated.
+       * We could also pick intermediate steps out from `streamEvents` chunks, but
+       * they are generated as JSON objects, so streaming and displaying them with
+       * the AI SDK is more complicated.
       */
-      const result = await agentExecutor.invoke({
-        input: currentMessageContent,
-        chat_history: previousMessages,
-      });
+      const result = await agent.invoke({ messages });
      return NextResponse.json(
-        { output: result.output, intermediate_steps: result.intermediateSteps },
+        {
+          messages: result.messages.map(convertLangChainMessageToVercelMessage),
+        },
        { status: 200 },
      );
    }
4 changes: 2 additions & 2 deletions app/api/chat/retrieval/route.ts
@@ -65,7 +65,7 @@ const answerPrompt = PromptTemplate.fromTemplate(ANSWER_TEMPLATE);
* This handler initializes and calls a retrieval chain. It composes the chain using
* LangChain Expression Language. See the docs for more information:
*
- * https://js.langchain.com/docs/guides/expression_language/cookbook#conversational-retrieval-chain
+ * https://js.langchain.com/v0.2/docs/how_to/qa_chat_history_how_to/
*/
export async function POST(req: NextRequest) {
try {
@@ -75,7 +75,7 @@ export async function POST(req: NextRequest) {
const currentMessageContent = messages[messages.length - 1].content;

const model = new ChatOpenAI({
-      modelName: "gpt-3.5-turbo-1106",
+      model: "gpt-3.5-turbo-0125",
      temperature: 0.2,
});

