Split sql use case docs (langchain-ai#10257)
Split sql use case into a directory so we can add other structured data pages
Showing 8 changed files with 183 additions and 16 deletions.
docs/docs_skeleton/docs/use_cases/question_answering/_category_.yml (1 change: 1 addition & 0 deletions)
@@ -1 +1,2 @@
position: 0
collapsed: false
@@ -0,0 +1,3 @@
label: 'QA over structured data'
collapsed: false
position: 0.5
docs/extras/use_cases/qa_structured/integrations/_category_.yml (1 change: 1 addition & 0 deletions)
@@ -0,0 +1 @@
label: 'Integration-specific'
docs/extras/use_cases/qa_structured/integrations/elasticsearch.ipynb (158 changes: 158 additions & 0 deletions)
@@ -0,0 +1,158 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Elasticsearch\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/extras/use_cases/qa_structured/integrations/elasticsearch.ipynb)\n",
"\n",
"We can use LLMs to interact with Elasticsearch analytics databases in natural language.\n",
"\n",
"This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).\n",
"\n",
"The Elasticsearch client must have permissions for index listing, mapping description and search queries.\n",
"\n",
"See [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) for instructions on how to run Elasticsearch locally."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain langchain-experimental openai elasticsearch\n",
"\n",
"# Set env var OPENAI_API_KEY or load from a .env file\n",
"# import dotenv\n",
"\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"from elasticsearch import Elasticsearch\n",
"\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialize Elasticsearch python client.\n",
"# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.Elasticsearch\n",
"ELASTIC_SEARCH_SERVER = \"https://elastic:pass@localhost:9200\"\n",
"db = Elasticsearch(ELASTIC_SEARCH_SERVER)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Uncomment the next cell to initially populate your db."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# customers = [\n",
"# {\"firstname\": \"Jennifer\", \"lastname\": \"Walters\"},\n",
"# {\"firstname\": \"Monica\",\"lastname\":\"Rambeau\"},\n",
"# {\"firstname\": \"Carol\",\"lastname\":\"Danvers\"},\n",
"# {\"firstname\": \"Wanda\",\"lastname\":\"Maximoff\"},\n",
"# {\"firstname\": \"Jennifer\",\"lastname\":\"Takeda\"},\n",
"# ]\n",
"# for i, customer in enumerate(customers):\n",
"# db.create(index=\"customers\", document=customer, id=i)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"question = \"What are the first names of all the customers?\"\n",
"chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can customize the prompt."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATE\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to, do not query for all the columns from a specific index; only ask for the few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",
"Use the following format:\n",
"\n",
"Question: Question here\n",
"ESQuery: Elasticsearch Query formatted as json\n",
"\"\"\"\n",
"\n",
"PROMPT = PromptTemplate.from_template(\n",
" PROMPT_TEMPLATE,\n",
")\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
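
For readers following along outside the notebook: the final code cell above builds the customized chain but stops before invoking it. Below is a minimal sketch of running it, assuming the notebook's db, llm, and PROMPT objects are already in scope and the customers index has been populated; the question string is only an illustrative example.

from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain

# Rebuild the chain with the custom DSL prompt; verbose=True prints the
# intermediate steps as the chain runs.
chain = ElasticsearchDatabaseChain.from_llm(
    llm=llm, database=db, query_prompt=PROMPT, verbose=True
)

# The chain asks the LLM for an Elasticsearch DSL query, runs it against db,
# and has the LLM phrase the hits as a natural-language answer.
print(chain.run("What are the last names of the customers named Jennifer?"))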