diff --git a/README.md b/README.md
index a1f8bf6..98aa909 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ Tracing allows for seamless debugging and improvement of your LLM applications.
 - [Tracing without LangChain](./tracing-examples/traceable/tracing_without_langchain.ipynb): learn to trace applications independent of LangChain using the Python SDK's @traceable decorator.
 - [REST API](./tracing-examples/rest/rest.ipynb): get acquainted with the REST API's features for logging LLM and chat model runs, and understand nested runs. The run logging spec can be found in the [LangSmith SDK repository](https://github.com/langchain-ai/langsmith-sdk/blob/main/openapi/openapi.yaml).
 - [Customing Run Names](./tracing-examples/runnable-naming/run-naming.ipynb): improve UI clarity by assigning bespoke names to LangSmith chain runs—includes examples for chains, lambda functions, and agents.
+- [Tracing Nested Calls within Tools](./tracing-examples/nesting-tools/nest_runs_within_tools.ipynb): include all nested tool subcalls in a single trace by calling `run_manager.get_child()` and passing the child callback manager to the nested run's `callbacks`.
 
 ## LangChain Hub
 
diff --git a/tracing-examples/README.md b/tracing-examples/README.md
index 1fccd55..764a927 100644
--- a/tracing-examples/README.md
+++ b/tracing-examples/README.md
@@ -9,4 +9,5 @@ Tracing allows for seamless debugging and improvement of your LLM applications.
 - [Tracing without LangChain](./traceable/tracing_without_langchain.ipynb): learn to trace applications independent of LangChain using the Python SDK's @traceable decorator.
 - [REST API](./rest/rest.ipynb): get acquainted with the REST API's features for logging LLM and chat model runs, and understand nested runs. The run logging spec can be found in the [LangSmith SDK repository](https://github.com/langchain-ai/langsmith-sdk/blob/main/openapi/openapi.yaml).
-- [Customing Run Names](./runnable-naming/run-naming.ipynb): improve UI clarity by assigning bespoke names to LangSmith chain runs—includes examples for chains, lambda functions, and agents.
\ No newline at end of file
+- [Customizing Run Names](./runnable-naming/run-naming.ipynb): improve UI clarity by assigning bespoke names to LangSmith chain runs—includes examples for chains, lambda functions, and agents.
+- [Tracing Nested Calls within Tools](./nesting-tools/nest_runs_within_tools.ipynb): include all nested tool subcalls in a single trace by calling `run_manager.get_child()` and passing the child callback manager to the nested run's `callbacks`.
\ No newline at end of file
diff --git a/tracing-examples/nesting-tools/img/full_run.png b/tracing-examples/nesting-tools/img/full_run.png
new file mode 100644
index 0000000..8323d9f
Binary files /dev/null and b/tracing-examples/nesting-tools/img/full_run.png differ
diff --git a/tracing-examples/nesting-tools/img/trace.png b/tracing-examples/nesting-tools/img/trace.png
new file mode 100644
index 0000000..f86359a
Binary files /dev/null and b/tracing-examples/nesting-tools/img/trace.png differ
diff --git a/tracing-examples/nesting-tools/nest_runs_within_tools.ipynb b/tracing-examples/nesting-tools/nest_runs_within_tools.ipynb
new file mode 100644
index 0000000..b9c6991
--- /dev/null
+++ b/tracing-examples/nesting-tools/nest_runs_within_tools.ipynb
@@ -0,0 +1,289 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "id": "ea796b03-1259-4742-b9db-25530cc93c2e",
+   "metadata": {},
+   "source": [
+    "# Tracing Nested Calls within Tools\n",
+    "\n",
+    "Tools are interfaces an agent can use to interact with any function. Often those functions make additional calls to a retriever, LLM, or other resource you'd like to trace. To ensure the nested calls are all grouped within the same trace, just use the `run_manager`!\n",
+    "\n",
+    "The following example shows how to pass callbacks through a custom tool to a nested chain. This ensures that the chain's trace is properly included within the agent trace rather than appearing separately.\n",
+    "\n",
+    "The resulting trace looks like the following:\n",
+    "\n",
+    "![Trace](./img/trace.png)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "57c8bbee-34c7-48ff-8338-203182759415",
+   "metadata": {},
+   "source": [
+    "## Prerequisites\n",
+    "\n",
+    "This example uses OpenAI and LangSmith. Please make sure the dependencies and API keys are configured appropriately before continuing."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "fca82ce5-4892-46f1-bda3-21fd14a0d7ae",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# %pip install -U langchain openai"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "6f747c6d-cbbf-449f-9470-c40ea825b6e6",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# import os\n",
+    "# os.environ[\"LANGCHAIN_API_KEY\"] = \"\"\n",
+    "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+    "# os.environ[\"OPENAI_API_KEY\"] = \"\""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "3157e9f9-0aa2-4746-b5bd-e313eaa78e6e",
+   "metadata": {},
+   "source": [
+    "## 1. Define nested tool\n",
+    "\n",
+    "Define your tool, taking care to call `run_manager.get_child()` to pass the callback manager through to any functionality you wish to invoke within the tool."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "f872f468-89fa-4d68-a1fc-11fe74263189",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from typing import Optional\n",
+    "\n",
+    "from langchain.callbacks.manager import (\n",
+    "    AsyncCallbackManagerForToolRun,\n",
+    "    CallbackManagerForToolRun,\n",
+    ")\n",
+    "from langchain.chat_models import ChatOpenAI\n",
+    "from langchain.prompts import ChatPromptTemplate\n",
+    "from langchain.tools import BaseTool\n",
+    "from langchain.schema.runnable import Runnable\n",
+    "from langchain.schema.output_parser import StrOutputParser\n",
+    "import requests\n",
+    "import aiohttp\n",
+    "\n",
+    "from pydantic import Field\n",
+    "\n",
+    "def _default_summary_chain():\n",
+    "    \"\"\"An LLM chain that summarizes the input text.\"\"\"\n",
+    "    return (\n",
+    "        ChatPromptTemplate.from_template(\n",
+    "            \"Summarize the following text:\\n\\n\"\n",
+    "            \"{text}\"\n",
+    "            \"\\n\"\n",
+    "        )\n",
+    "        | ChatOpenAI(model=\"gpt-3.5-turbo-16k\")\n",
+    "        | StrOutputParser()\n",
+    "    ).with_config(run_name=\"Summarize Text\")\n",
+    "\n",
+    "class CustomSummarizer(BaseTool):\n",
+    "\n",
+    "    name = \"summary_tool\"\n",
+    "    description = \"summarize a website\"\n",
+    "    summary_chain: Runnable = Field(default_factory=_default_summary_chain)\n",
+    "\n",
+    "    def _run(\n",
+    "        self,\n",
+    "        url: str,\n",
+    "        run_manager: Optional[CallbackManagerForToolRun] = None,\n",
+    "    ) -> str:\n",
+    "        \"\"\"Use the tool.\"\"\"\n",
+    "        text = requests.get(url).text\n",
+    "        callbacks = run_manager.get_child() if run_manager else None\n",
+    "        return self.summary_chain.invoke(\n",
+    "            {\"text\": text},\n",
+    "            {\"callbacks\": callbacks},\n",
+    "        )\n",
+    "\n",
+    "    async def _arun(\n",
+    "        self,\n",
+    "        url: str,\n",
+    "        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n",
+    "    ) -> str:\n",
+    "        \"\"\"Use the tool asynchronously.\"\"\"\n",
+    "        async with aiohttp.ClientSession() as session:\n",
+    "            async with session.get(url) as response:\n",
+    "                text = await response.text()\n",
+    "        callbacks = run_manager.get_child() if run_manager else None\n",
+    "        return await self.summary_chain.ainvoke(\n",
+    "            {\"text\": text},\n",
+    "            {\"callbacks\": callbacks},\n",
+    "        )"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d0d9e507-14c6-4bce-b8e1-2fdc7553d412",
+   "metadata": {},
+   "source": [
+    "## 2. Define agent\n",
+    "\n",
+    "We will construct a simple agent using runnables and OpenAI functions, following the [Agents overview](https://python.langchain.com/docs/modules/agents/) in the LangChain documentation. The specifics aren't important to this example."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "id": "997babf2-41d4-4047-baa3-cac4f13fec84",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
+    "\n",
+    "prompt = ChatPromptTemplate.from_messages([\n",
+    "    (\"system\", \"You are a very powerful assistant, but bad at summarizing things.\"),\n",
+    "    (\"user\", \"{input}\"),\n",
+    "    MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
+    "])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "id": "7e9398d7-d641-46e0-a81a-fa7fecd76a4b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.tools.render import format_tool_to_openai_function\n",
+    "from langchain.chat_models import ChatOpenAI\n",
+    "\n",
+    "# Configure the LLM with access to the appropriate function definitions\n",
+    "llm = ChatOpenAI(temperature=0)\n",
+    "tools = [CustomSummarizer()]\n",
+    "llm_with_tools = llm.bind(\n",
+    "    functions=[format_tool_to_openai_function(t) for t in tools]\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "id": "05b6b794-e359-486e-ae79-f26d771095dc",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain.agents import AgentExecutor\n",
+    "from langchain.agents.format_scratchpad import format_to_openai_functions\n",
+    "from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
+    "\n",
+    "agent = (\n",
+    "    {\n",
+    "        \"input\": lambda x: x[\"input\"],\n",
+    "        \"agent_scratchpad\": lambda x: format_to_openai_functions(x['intermediate_steps']),\n",
+    "        \"chat_history\": lambda x: x.get(\"chat_history\") or []\n",
+    "    }\n",
+    "    | prompt\n",
+    "    | llm_with_tools\n",
+    "    | OpenAIFunctionsAgentOutputParser()\n",
+    ").with_config(run_name=\"Agent\")\n",
+    "\n",
+    "agent_executor = AgentExecutor(agent=agent, tools=tools)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4eec4034-f866-4900-a008-7ca9a77584bc",
+   "metadata": {},
+   "source": [
+    "## 3. Invoke\n",
+    "\n",
+    "Now it's time to call the agent. All callbacks, including the LangSmith tracer, will be passed to the child runnables."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "id": "1f7fec7e-1773-4af4-aab4-ca83ff8fbf60",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "{'input': \"What's this about? https://blog.langchain.dev/langchain-expression-language/\",\n",
+       " 'output': 'The website you provided is the LangChain Blog, specifically a blog post about the LangChain Expression Language (LCEL). LCEL is a syntax for creating chains with composition, supporting batch, async, and streaming. The blog post provides information about LCEL, links to resources for learning and using it, and related articles. There is also a newsletter subscription form available on the website.'}"
+      ]
+     },
+     "execution_count": 7,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "agent_executor.invoke(\n",
+    "    {\"input\": \"What's this about? https://blog.langchain.dev/langchain-expression-language/\"\n",
+    "    }\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8a5ade81-7320-4934-80b4-a171097741ae",
+   "metadata": {},
+   "source": [
+    "[![Full run trace](./img/full_run.png)](https://smith.langchain.com/public/d0212988-a6d5-4d21-9751-89e77bfa58fa/r)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d23004db-c044-4b37-b9be-0ca126767ecc",
+   "metadata": {},
+   "source": [
+    "## Conclusion\n",
+    "\n",
+    "In this example, you used `run_manager.get_child()` to pass a callback manager to a chain nested within a tool. This made sure the \"Summarize Text\" chain was traced correctly.\n",
+    "\n",
+    "LangSmith uses LangChain's callbacks system to trace the execution of your application. To trace nested components, the callbacks have to be passed to that component. Any time you see a trace show up on the top level when it ought to be nested, it's likely that somewhere the callbacks weren't correctly passed between components.\n",
+    "\n",
+    "This is all made easy when composing functions and other calls as runnables (i.e., [LangChain Expression Language](https://python.langchain.com/docs/expression_language/))."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5fc2d6c9-5d12-4c90-82b3-384e29938677",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
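The core idea of the notebook added above — a tool run hands a *child* callback manager to any nested call so the nested run records the tool run as its parent — can be sketched without LangChain at all. The following is a hypothetical, framework-free illustration; `ToyRunManager`, `summary_tool`, and `summarize_chain` are invented stand-ins, not LangChain APIs:

```python
import uuid
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ToyRunManager:
    """Illustrative stand-in for LangChain's CallbackManagerForToolRun."""

    run_id: str
    parent_run_id: Optional[str] = None
    trace: List[dict] = field(default_factory=list)

    def get_child(self) -> "ToyRunManager":
        # The child shares the same trace list but records this run as its
        # parent -- this linkage is what keeps nested runs in one trace.
        return ToyRunManager(str(uuid.uuid4()), parent_run_id=self.run_id, trace=self.trace)

    def log(self, name: str) -> None:
        self.trace.append({"name": name, "run_id": self.run_id, "parent": self.parent_run_id})


def summarize_chain(text: str, rm: ToyRunManager) -> str:
    rm.log("Summarize Text")
    return text[:40]


def summary_tool(url: str, rm: ToyRunManager) -> str:
    rm.log("summary_tool")
    # Pass the *child* manager down; passing `rm` itself (or nothing) is the
    # mistake that makes the nested run appear as a separate top-level trace.
    return summarize_chain(f"fetched contents of {url}", rm.get_child())


root = ToyRunManager(run_id="tool-run")
summary_tool("https://example.com", root)
# The chain run points back at the tool run:
assert root.trace[1]["parent"] == root.trace[0]["run_id"]
```

Under this toy model, forgetting `get_child()` would leave `parent` as `None` on the nested run, which mirrors the "trace shows up on the top level" symptom described in the notebook's conclusion.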