docs: make docs mdxv2 compatible (langchain-ai#26798)
prep for docusaurus migration
efriis authored and Sheepsta300 committed Oct 1, 2024
1 parent f84c975 commit 0337ec1
Showing 34 changed files with 107 additions and 94 deletions.
2 changes: 1 addition & 1 deletion docs/Makefile
@@ -46,7 +46,7 @@ generate-files:

$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)

-wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
+curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/

copy-infra:
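For reference, the new `curl | sed` pipeline escapes bare `<=` so MDX v2 does not try to parse it as the start of a JSX tag. A minimal Python sketch of the same transform (the function name is illustrative):

```python
def escape_for_mdx(text: str) -> str:
    # MDX v2 treats a bare "<" as the start of a JSX tag, so "<=" in
    # fetched markdown must become "&lt;=" before Docusaurus renders it.
    return text.replace("<=", "&lt;=")

print(escape_for_mdx("python<=3.10"))  # -> python&lt;=3.10
```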
3 changes: 1 addition & 2 deletions docs/docs/additional_resources/arxiv_references.mdx
@@ -451,8 +451,7 @@
steps of the thought-process and explore other directions from there. To verify
the effectiveness of the proposed technique, we implemented a ToT-based solver
for the Sudoku Puzzle. Experimental results show that the ToT framework can
significantly increase the success rate of Sudoku puzzle solving. Our
-implementation of the ToT-based Sudoku solver is available on GitHub:
-\url{https://github.com/jieyilong/tree-of-thought-puzzle-solver}.
+implementation of the ToT-based Sudoku solver is available on [GitHub](https://github.com/jieyilong/tree-of-thought-puzzle-solver).

## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

4 changes: 2 additions & 2 deletions docs/docs/concepts.mdx
@@ -732,10 +732,10 @@
of the object.
If you're creating a custom chain or runnable, you need to remember to propagate request time
callbacks to any child objects.

-:::important Async in Python<=3.10
+:::important Async in Python&lt;=3.10

Any `RunnableLambda`, a `RunnableGenerator`, or `Tool` that invokes other runnables
-and is running `async` in python<=3.10, will have to propagate callbacks to child
+and is running `async` in python&lt;=3.10, will have to propagate callbacks to child
objects manually. This is because LangChain cannot automatically propagate
callbacks to child objects in this case.

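A minimal sketch of the manual propagation this admonition describes, assuming `langchain_core` is installed (the `reverse`/`bar` names are illustrative, echoing the docs' own example):

```python
import asyncio

from langchain_core.runnables import RunnableConfig, RunnableLambda

reverse = RunnableLambda(lambda s: s[::-1])

async def bar(text: str, config: RunnableConfig) -> str:
    # On python<=3.10, pass the config through explicitly so callbacks
    # (and therefore astream_events) reach the child runnable.
    return await reverse.ainvoke(text, config=config)

chain = RunnableLambda(bar)
print(asyncio.run(chain.ainvoke("hello")))  # -> "olleh"
```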
6 changes: 3 additions & 3 deletions docs/docs/how_to/callbacks_custom_events.ipynb
@@ -38,9 +38,9 @@
"\n",
"\n",
":::caution COMPATIBILITY\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python<=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for astream_events(), to child runnables if you are running async code in python&lt;=3.10. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in other Python versions.\n",
":::"
@@ -115,7 +115,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In python <= 3.10, you must propagate the config manually!"
"In python &lt;= 3.10, you must propagate the config manually!"
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/how_to/custom_chat_model.ipynb
@@ -39,7 +39,7 @@
"| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |\n",
"\n",
"\n",
"::: {.callout-note}\n",
":::{.callout-note}\n",
"`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles.\n",
"\n",
"This is a rapidly developing field and as more models add function calling capabilities. Expect that there will be additions to this schema.\n",
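The chunk variants mentioned in this notebook's message table are additive, which is what makes streaming work. A quick illustration (assuming `langchain_core` is installed):

```python
from langchain_core.messages import AIMessageChunk

# Chunk message types support "+" so partial tokens can be accumulated
# into one message as they stream in.
merged = AIMessageChunk(content="Hello") + AIMessageChunk(content=" world")
print(merged.content)  # -> "Hello world"
```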
2 changes: 1 addition & 1 deletion docs/docs/how_to/parallel.ipynb
@@ -118,7 +118,7 @@
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
"::: {.callout-tip}\n",
":::{.callout-tip}\n",
"Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n",
":::\n",
"\n",
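The equivalence that tip describes, as a runnable sketch (assuming `langchain_core`; the toy lambdas are illustrative):

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

double = RunnableLambda(lambda x: x * 2)
inc = RunnableLambda(lambda x: x + 1)

# In a chain, a plain dict is coerced to a RunnableParallel automatically:
implicit = double | {"same": RunnableLambda(lambda x: x), "plus_one": inc}
explicit = double | RunnableParallel(same=RunnableLambda(lambda x: x), plus_one=inc)

print(implicit.invoke(3))  # -> {'same': 6, 'plus_one': 7}
print(explicit.invoke(3))  # -> same result
```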
22 changes: 11 additions & 11 deletions docs/docs/how_to/streaming.ipynb
@@ -517,7 +517,7 @@
"id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n",
"\n",
"We're focusing on streaming concepts, not necessarily the results of the chains.\n",
@@ -689,27 +689,27 @@
"Below is a reference table that shows some events that might be emitted by the various Runnable objects.\n",
"\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"When streaming is implemented properly, the inputs to a runnable will not be known until after the input stream has been entirely consumed. This means that `inputs` will often be included only for `end` events and rather than for `start` events.\n",
":::\n",
"\n",
"| event | name | chunk | input | output |\n",
"|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|\n",
"| on_chat_model_start | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | |\n",
"| on_chat_model_start | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | |\n",
"| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") | | |\n",
"| on_chat_model_end | [model name] | | {\"messages\": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content=\"hello world\") |\n",
"| on_llm_start | [model name] | | {'input': 'hello'} | |\n",
"| on_chat_model_end | [model name] | | \\{\"messages\": [[SystemMessage, HumanMessage]]\\} | AIMessageChunk(content=\"hello world\") |\n",
"| on_llm_start | [model name] | | \\{'input': 'hello'\\} | |\n",
"| on_llm_stream | [model name] | 'Hello' | | |\n",
"| on_llm_end | [model name] | | 'Hello human!' | |\n",
"| on_chain_start | format_docs | | | |\n",
"| on_chain_stream | format_docs | \"hello world!, goodbye world!\" | | |\n",
"| on_chain_end | format_docs | | [Document(...)] | \"hello world!, goodbye world!\" |\n",
"| on_tool_start | some_tool | | {\"x\": 1, \"y\": \"2\"} | |\n",
"| on_tool_end | some_tool | | | {\"x\": 1, \"y\": \"2\"} |\n",
"| on_retriever_start | [retriever name] | | {\"query\": \"hello\"} | |\n",
"| on_retriever_end | [retriever name] | | {\"query\": \"hello\"} | [Document(...), ..] |\n",
"| on_prompt_start | [template_name] | | {\"question\": \"hello\"} | |\n",
"| on_prompt_end | [template_name] | | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |"
"| on_tool_start | some_tool | | \\{\"x\": 1, \"y\": \"2\"\\} | |\n",
"| on_tool_end | some_tool | | | \\{\"x\": 1, \"y\": \"2\"\\} |\n",
"| on_retriever_start | [retriever name] | | \\{\"query\": \"hello\"\\} | |\n",
"| on_retriever_end | [retriever name] | | \\{\"query\": \"hello\"\\} | [Document(...), ..] |\n",
"| on_prompt_start | [template_name] | | \\{\"question\": \"hello\"\\} | |\n",
"| on_prompt_end | [template_name] | | \\{\"question\": \"hello\"\\} | ChatPromptValue(messages: [SystemMessage, ...]) |"
]
},
{
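A small sketch of consuming the events catalogued in that reference table, assuming `langchain_core` is installed (the toy chain is illustrative):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)

async def main() -> None:
    # Each event is a dict whose "event", "name", and "data" fields
    # correspond to the columns of the reference table above.
    async for event in chain.astream_events(1, version="v2"):
        print(event["event"], event["name"], event["data"])

asyncio.run(main())
```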
4 changes: 2 additions & 2 deletions docs/docs/how_to/tool_stream_events.ipynb
@@ -20,9 +20,9 @@
"\n",
":::caution Compatibility\n",
"\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python<=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"LangChain cannot automatically propagate configuration, including callbacks necessary for `astream_events()`, to child runnables if you are running `async` code in `python&lt;=3.10`. This is a common reason why you may fail to see events being emitted from custom runnables or tools.\n",
"\n",
"If you are running python<=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"If you are running python&lt;=3.10, you will need to manually propagate the `RunnableConfig` object to the child runnable in async environments. For an example of how to manually propagate the config, see the implementation of the `bar` RunnableLambda below.\n",
"\n",
"If you are running python>=3.11, the `RunnableConfig` will automatically propagate to child runnables in async environment. However, it is still a good idea to propagate the `RunnableConfig` manually if your code may run in older Python versions.\n",
"\n",
6 changes: 3 additions & 3 deletions docs/docs/integrations/chat/anthropic_functions.ipynb
@@ -17,7 +17,7 @@
"source": [
"# [Deprecated] Experimental Anthropic Tools Wrapper\n",
"\n",
"::: {.callout-warning}\n",
":::{.callout-warning}\n",
"\n",
"The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.15`.\n",
"\n",
@@ -118,7 +118,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -132,7 +132,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.4"
}
},
"nbformat": 4,
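Since this notebook is deprecated in favor of native tool calling, here is a minimal sketch of the recommended replacement (assuming `langchain-anthropic>=0.1.15` is installed; the model name and tool are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # illustrative model name
llm_with_tools = llm.bind_tools([get_weather])
```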
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/blockchain.ipynb
@@ -44,7 +44,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Individual NFT\n",
-- metadata={'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'})
+- metadata=\{'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'\}
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/confluence.ipynb
@@ -19,7 +19,7 @@
"\n",
"You can also specify a boolean `include_attachments` to include attachments, this is set to False by default, if set to True all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: `PDF`, `PNG`, `JPEG/JPG`, `SVG`, `Word` and `Excel`.\n",
"\n",
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>\n"
"Hint: `space_key` and `page_id` can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/&lt;space_key&gt;/pages/&lt;page_id&gt;\n"
]
},
{
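A minimal usage sketch for the loader this page documents (assuming `langchain_community` and `atlassian-python-api` are installed; credentials and keys are placeholders, and exact parameter placement varies by version):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me@example.com",
    api_key="<YOUR_API_KEY>",
    space_key="<space_key>",
    include_attachments=False,  # True also extracts text from PDF/PNG/Word/etc.
    limit=50,                   # page size per API request
)
docs = loader.load()
```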
4 changes: 2 additions & 2 deletions docs/docs/integrations/document_loaders/figma.ipynb
@@ -40,9 +40,9 @@
"source": [
"The Figma API Requires an access token, node_ids, and a file key.\n",
"\n",
"The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename\n",
"The file key can be pulled from the URL. https://www.figma.com/file/\\{filekey\\}/sampleFilename\n",
"\n",
"Node IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.\n",
"Node IDs are also available in the URL. Click on anything and look for the '?node-id=\\{node_id\\}' param.\n",
"\n",
"Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens"
]
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/mintbase.ipynb
@@ -44,7 +44,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Individual NFT\n",
-- metadata={'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'}
+- metadata=\{'source': 'nft.yearofchef.near', 'blockchain': 'mainnet', 'tokenId': '1846'\}
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/mongodb.ipynb
@@ -47,7 +47,7 @@
"The output takes the following format:\n",
"\n",
"- pageContent= Mongo Document\n",
-- metadata={'database': '[database_name]', 'collection': '[collection_name]'}
+- metadata=\{'database': '[database_name]', 'collection': '[collection_name]'\}
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/rspace.ipynb
@@ -32,7 +32,7 @@
"source": [
"It's best to store your RSpace API key as an environment variable. \n",
"\n",
-RSPACE_API_KEY=<YOUR_KEY>\n",
+RSPACE_API_KEY=&lt;YOUR_KEY&gt;\n",
"\n",
"You'll also need to set the URL of your RSpace installation e.g.\n",
"\n",
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/slack.ipynb
@@ -15,7 +15,7 @@
"\n",
"## 🧑 Instructions for ingesting your own dataset\n",
"\n",
"Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready.\n",
"Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option (\\{your_slack_domain\\}.slack.com/services/export). Then, choose the right date range and click `Start export`. Slack will send you an email and a DM when the export is ready.\n",
"\n",
"The download will produce a `.zip` file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).\n",
"\n",
2 changes: 1 addition & 1 deletion docs/docs/integrations/document_loaders/web_base.ipynb
@@ -76,7 +76,7 @@
"source": [
"To bypass SSL verification errors during fetching, you can set the \"verify\" option:\n",
"\n",
"loader.requests_kwargs = {'verify':False}\n",
"`loader.requests_kwargs = {'verify':False}`\n",
"\n",
"### Initialization with multiple pages\n",
"\n",
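In context, a short sketch of that option (assuming `langchain_community` and `beautifulsoup4` are installed; the URL is a placeholder):

```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.example.com/")
# Passed through to `requests`; disables certificate checks, so use
# it only for hosts you trust.
loader.requests_kwargs = {"verify": False}
docs = loader.load()
```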
2 changes: 1 addition & 1 deletion docs/docs/integrations/llms/runhouse.ipynb
@@ -277,7 +277,7 @@
"id": "af08575f",
"metadata": {},
"source": [
"You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:"
"You can send your pipeline directly over the wire to your model, but this will only work for small models (&lt;2 Gb), and will be pretty slow:"
]
},
{
2 changes: 1 addition & 1 deletion docs/docs/integrations/providers/cnosdb.mdx
@@ -101,7 +101,7 @@
pressure station temperature time visibility
*/
Thought:The "temperature" column in the "air" table is relevant to the question. I can query the average temperature between the specified dates.
Action: sql_db_query
-Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time <= '2022-10-20'"
+Action Input: "SELECT AVG(temperature) FROM air WHERE station = 'XiaoMaiDao' AND time >= '2022-10-19' AND time &lt;= '2022-10-20'"
Observation: [(68.0,)]
Thought:The average temperature of air at station XiaoMaiDao between October 19, 2022 and October 20, 2022 is 68.0.
Final Answer: 68.0
2 changes: 1 addition & 1 deletion docs/docs/integrations/providers/dspy.ipynb
@@ -190,7 +190,7 @@
" \n",
"Let's use LangChain's expression language (LCEL) to illustrate this. Any prompt here will do, we will optimize the final prompt with DSPy.\n",
"\n",
"Considering that, let's just keep it to the barebones: **Given {context}, answer the question {question} as a tweet.**"
"Considering that, let's just keep it to the barebones: **Given \\{context\\}, answer the question \\{question\\} as a tweet.**"
]
},
{
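That barebones prompt as an LCEL prompt template, for concreteness (a sketch assuming `langchain_core`; the filled-in values are placeholders):

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "Given {context}, answer the question {question} as a tweet."
)
print(prompt.invoke({"context": "the moon landing", "question": "when?"}))
```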
4 changes: 2 additions & 2 deletions docs/docs/integrations/providers/figma.mdx
@@ -6,9 +6,9 @@

The Figma API requires an `access token`, `node_ids`, and a `file key`.

-The `file key` can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename
+The `file key` can be pulled from the URL. https://www.figma.com/file/\{filekey\}/sampleFilename

-`Node IDs` are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.
+`Node IDs` are also available in the URL. Click on anything and look for the '?node-id=\{node_id\}' param.

`Access token` [instructions](https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens).

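For concreteness, a small sketch of pulling both values out of a Figma URL (the URL and its parts are illustrative):

```python
from urllib.parse import parse_qs, urlparse

url = "https://www.figma.com/file/ABC123xyz/sampleFilename?node-id=0-1"
parts = urlparse(url)

file_key = parts.path.split("/")[2]            # -> "ABC123xyz"
node_id = parse_qs(parts.query)["node-id"][0]  # -> "0-1"
```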
2 changes: 1 addition & 1 deletion docs/docs/integrations/providers/xinference.mdx
@@ -60,7 +60,7 @@
Xinference client.
For local deployment, the endpoint will be http://localhost:9997.


-For cluster deployment, the endpoint will be http://${supervisor_host}:9997.
+For cluster deployment, the endpoint will be http://$\{supervisor_host\}:9997.


Then, you need to launch a model. You can specify the model names and other attributes
2 changes: 1 addition & 1 deletion docs/docs/integrations/tools/github.ipynb
@@ -81,7 +81,7 @@
"\n",
"* **GITHUB_APP_ID**- A six digit number found in your app's general settings\n",
"* **GITHUB_APP_PRIVATE_KEY**- The location of your app's private key .pem file, or the full text of that file as a string.\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format {username}/{repo-name}. *Make sure the app has been added to this repository first!*\n",
"* **GITHUB_REPOSITORY**- The name of the Github repository you want your bot to act upon. Must follow the format \\{username\\}/\\{repo-name\\}. *Make sure the app has been added to this repository first!*\n",
"* Optional: **GITHUB_BRANCH**- The branch where the bot will make its commits. Defaults to `repo.default_branch`.\n",
"* Optional: **GITHUB_BASE_BRANCH**- The base branch of your repo upon which PRs will based from. Defaults to `repo.default_branch`."
]
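A sketch of setting those variables from Python before constructing the toolkit (all values are illustrative placeholders):

```python
import os

os.environ["GITHUB_APP_ID"] = "123456"
os.environ["GITHUB_APP_PRIVATE_KEY"] = "/path/to/app-private-key.pem"
os.environ["GITHUB_REPOSITORY"] = "octocat/hello-world"  # {username}/{repo-name}
os.environ["GITHUB_BRANCH"] = "bot-changes"              # optional
os.environ["GITHUB_BASE_BRANCH"] = "main"                # optional
```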