async mongo document loader #4285
Commits on May 17, 2023
-
fix homepage typo (langchain-ai#4883)
# Fix Homepage Typo
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested.
Commit: d6e0b9a
-
Tiny code review and docs fix for Docugami DataLoader (langchain-ai#4877)
# Docs and code review fixes for Docugami DataLoader
1. I noticed a couple of hyperlinks that are not loading in the langchain docs (I guess they need explicit anchor tags). Added those.
2. In code review @eyurtsev had a [suggestion](langchain-ai#4727 (comment)) to allow string paths. Turns out just updating the type works (I tested locally with string paths).
# Pre-submission checks
I ran `make lint` and `make tests` successfully.
Co-authored-by: Taqi Jaffri <[email protected]>
Commit: ef8b5f6
-
feat(Add FastAPI + Vercel deployment option): (langchain-ai#4520)
# Update deployments doc with langcorn API server
API server example:
```python
from fastapi import FastAPI
from langcorn import create_service

app: FastAPI = create_service(
    "examples.ex1:chain",
    "examples.ex2:chain",
    "examples.ex3:chain",
    "examples.ex4:sequential_chain",
    "examples.ex5:conversation",
    "examples.ex6:conversation_with_summary",
)
```
More examples: https://github.com/msoedov/langcorn/tree/main/examples
Co-authored-by: Dev 2049 <[email protected]>
Commit: 4c3ab55
-
Commit: 1ff7c95
Commits on May 18, 2023
-
ConversationalChatAgent: Allow customizing TEMPLATE_TOOL_RESPONSE (langchain-ai#2361)
It's currently not possible to change the `TEMPLATE_TOOL_RESPONSE` prompt for ConversationalChatAgent; this PR changes that.
Co-authored-by: Dev 2049 <[email protected]>
Commit: 5c9205d
-
Faiss no avx2 (langchain-ai#4895)
Co-authored-by: Ali Mirlou <[email protected]>
Commit: df0c33a
-
Add a generic document loader (langchain-ai#4875)
# Add generic document loader
* This PR adds a generic document loader which can assemble a loader from a blob loader and a parser
* Adds a registry for parsers
* Populates the registry with a default mime-type-based parser
## Expected changes
- Parsing involves loading content via IO, so it can be sped up via threading (sync) or async
- The actual parsing logic may be computationally involved: may need to figure out how to add multi-processing support
- May want to add a suffix-based parser, since suffixes are easier to specify than mime types
## Before submitting
No notebooks yet; we first need to get a few of the basic parsers up (prior to advertising the interface)
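A rough usage sketch of assembling such a loader from a blob loader plus a parser; the module paths and class names (`FileSystemBlobLoader`, `GenericLoader`, `TextParser`) are assumptions based on this description rather than a confirmed API.
```python
# Hypothetical sketch: build a document loader from a blob loader and a parser.
# Class names and import paths are assumed from the PR description above.
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.txt import TextParser

blob_loader = FileSystemBlobLoader("./my_docs", glob="**/*.txt")
loader = GenericLoader(blob_loader, TextParser())
docs = loader.load()  # each blob is parsed into one or more Documents
```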
Commit: 8e41143
-
Add html parsers (langchain-ai#4874)
# Add bs4 html parser
* Some minor refactors
* Extract the bs4 html parsing code from the bs html loader
* Move some tests from integration tests to unit tests
Commit: 0dc304c
-
Cadlabs/python tool sanitization (langchain-ai#4754)
Co-authored-by: BenSchZA <[email protected]>
Commit: e28bdf4
-
Zep memory (langchain-ai#4898)
Co-authored-by: Daniel Chalef <[email protected]> Co-authored-by: Daniel Chalef <[email protected]>
Commit: 8966f61
-
Commit: a4ac006
-
Fix AzureOpenAI embeddings documentation example: model -> deployment (langchain-ai#4389)
# Documentation for Azure OpenAI embeddings model
- The OPENAI_API_VERSION environment variable is needed for the endpoint
- The constructor does not work with `model`; it works with `deployment`. I fixed it in the notebook. (This is my first contribution)
## Who can review?
@hwchase17 @agola
Co-authored-by: Harrison Chase <[email protected]>
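A hedged sketch of the corrected usage described above; the environment variable values and deployment name are placeholders, and the exact version string is an assumption, so check the current notebook for the authoritative form.
```python
# Hypothetical Azure OpenAI embeddings setup; all values are placeholders.
import os

from langchain.embeddings import OpenAIEmbeddings

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "..."
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"  # assumed version string

# Pass the Azure *deployment* name, not a model name.
embeddings = OpenAIEmbeddings(deployment="<your-embedding-deployment>")
vector = embeddings.embed_query("hello world")
```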
Commit: 41e2394
-
Update getting_started.md (langchain-ai#4482)
# Added another helpful way for developers who want to set the OpenAI API key dynamically
Previous methods like exporting environment variables are good for project-wide settings, but many use cases need to assign the API key dynamically.
```python
from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="OPENAI_API_KEY")
```
## Before submitting
```bash
export OPENAI_API_KEY="..."
```
Or,
```python
import os

os.environ["OPENAI_API_KEY"] = "..."
```
Thank you. Cheers, Bongsang
Commit: 613bf9b
-
docs: text splitters improvements (langchain-ai#4490)
# docs: text splitters improvements
Changes are only in the Jupyter notebooks.
- added links to the source packages and a short description of these packages
- removed "Text Splitters" suffixes from the TOC elements (they made the list of the text splitters messy)
- moved text splitters based on the length function into a separate list; they can be mixed with any classes from the "Text Splitters", so it is a different classification
## Who can review?
@hwchase17 - project lead
@eyurtsev
@vowelparrot
NOTE: please check out the results of the `Python code` text splitter example (text_splitters/examples/python.ipynb). It looks suboptimal.
Commit: c998569
-
Harrison/serper api bug (langchain-ai#4902)
Co-authored-by: Jerry Luan <[email protected]>
Commit: 9e2227b
-
Harrison/faiss norm (langchain-ai#4903)
Co-authored-by: Jiaxin Shan <[email protected]>
Commit: ba023d5
-
Commit: 9165267
-
Harrison/unified objectives (langchain-ai#4905)
Co-authored-by: Matthias Samwald <[email protected]>
Commit: b8d4893
-
Commit: dfbf45f
-
Load specific file types from Google Drive (issue langchain-ai#4878) (langchain-ai#4926)
# Load specific file types from Google Drive (issue langchain-ai#4878)
Add the possibility to define what file types you want to load from Google Drive.
```
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "pdf"],
    recursive=False,
)
```
Fixes langchain-ai#4878
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
DataLoaders - @eyurtsev
Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord: RicChilligerDude#7589
Co-authored-by: UmerHA <[email protected]>
Commit: c06a47a
-
API update: Engines -> Models (langchain-ai#4915)
# API update: Engines -> Models see: https://community.openai.com/t/api-update-engines-models/18597 Co-authored-by: assert <[email protected]>
Commit: 8c28ad6
-
feat langchain-ai#4479: TextLoader auto detect encoding and improved exceptions (langchain-ai#4927)
# TextLoader auto detect encoding and enhanced exception handling
- Add an option to enable encoding detection on `TextLoader`.
- The detection is done using `chardet`.
- The loading is done by trying all detected encodings in order of confidence, or an exception is raised otherwise.
### New Dependencies
- `chardet`
Fixes langchain-ai#4479
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
- @eyurtsev
Co-authored-by: blob42 <spike@w530>
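A minimal usage sketch of the new option; the keyword name `autodetect_encoding` is an assumption drawn from this description, so verify it against the current `TextLoader` signature.
```python
# Hypothetical usage: let TextLoader detect the file encoding via chardet.
from langchain.document_loaders import TextLoader

loader = TextLoader("notes/readme.txt", autodetect_encoding=True)
docs = loader.load()  # tries detected encodings in order of confidence
```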
Commit: e462028
-
Fix bilibili (langchain-ai#4860)
# Fix bilibili api import error
The bilibili-api package is deprecated and there is no sync module.
Fixes langchain-ai#2673 langchain-ai#2724
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @vowelparrot @liaokongVFX
Commit: 1ed4228
-
Add human message as input variable to chat agent prompt creation (langchain-ai#4542)
# Add human message as input variable to chat agent prompt creation
This PR adds human message and system message input to the `CHAT_ZERO_SHOT_REACT_DESCRIPTION` agent, similar to the [conversational chat agent](https://github.com/hwchase17/langchain/blob/7bcf238a1acf40aef21a5a198cf0e62d76f93c15/langchain/agents/conversational_chat/base.py#L64-L71).
I met this issue trying to use the `create_prompt` function with the [BabyAGI agent with tools notebook](https://python.langchain.com/en/latest/use_cases/autonomous_agents/baby_agi_with_agent.html), since BabyAGI uses "task" instead of "input" as the input variable. For the normal zero-shot ReAct agent this is fine because I can manually change the suffix to "{input}\n\n{agent_scratchpad}" just like the notebook, but I cannot do this with the conversational chat agent, which blocked me from using BabyAGI with the chat zero-shot agent.
I tested this in my own project [Chrome-GPT](https://github.com/richardyc/Chrome-GPT) and this fix worked.
## Request for review
Agents / Tools / Toolkits - @vowelparrot
Commit: 7642f21
-
add alias for model (langchain-ai#4553)
Co-authored-by: Dev 2049 <[email protected]>
Commit: c9a362e
-
dont error on sql import (langchain-ai#4647)
This makes it so we don't throw errors when importing langchain with sqlalchemy==1.3.1. We don't really want to support 1.3.1 (it seems like unnecessary maintenance cost), BUT we would like it to not error terribly should someone decide to run on it.
Commit: d5a0704
-
docs: compound ecosystem and integrations (langchain-ai#4870)
# Docs: compound ecosystem and integrations
**Problem statement:** We have a big overlap between the References/Integrations and Ecosystem/LangChain Ecosystem pages. It confuses users. It creates a situation where a new integration is added on only one of these pages, which creates even more confusion.
- removed the References/Integrations page (but will move all its information into the individual integration pages in the next PR)
- renamed Ecosystem/LangChain Ecosystem into Integrations/Integrations. I like the Ecosystem term; it is more generic and semantically richer than the Integration term, but it mentally overloads users. The `integration` term is more concrete.
UPDATE: after discussion, Ecosystem is the term. Ecosystem/Integrations is the page (in place of Ecosystem/LangChain Ecosystem).
As a result, a user gets a single place to start with the individual integrations.
Commit: e2d7677
-
Update GPT4ALL integration (langchain-ai#4567)
# Update GPT4ALL integration
GPT4All has completely changed their bindings. They use a somewhat odd implementation that doesn't fit well into base.py, and it will probably be changed again, so this is a temporary solution.
Fixes langchain-ai#3839, langchain-ai#4628
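For reference, a minimal sketch of the wrapper touched by this update; the model path is a placeholder and the constructor arguments are assumptions, so check the GPT4All integration docs for the current form.
```python
# Hypothetical GPT4All usage; the model file path is a placeholder.
from langchain.llms import GPT4All

llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin", n_threads=4)
print(llm("Explain what a vector store is in one sentence."))
```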
Commit: c9e2a01
-
FIX: GPTCache cache_obj creation loop (langchain-ai#4827)
The _get_gptcache method kept creating a new gptcache instance; here's the fix.
# Fix GPTCache cache_obj creation loop
Fixes langchain-ai#4830
Co-authored-by: Dev 2049 <[email protected]>
Commit: a8ded21
-
Commit: 440b876
-
Commit: 55baa0d
-
docs supabase update (langchain-ai#4935)
# docs: updated `Supabase` notebook
- the title of the notebook was inconsistent (included a redundant "Vectorstore"). Removed this "Vectorstore"
- added `Postgres` to the title. It is important: the `Postgres` name is much more popular than `Supabase`
- added a description for `Postgres`
- added more info to the `Supabase` description
Commit: c75c077
-
Correct typo in APIChain example notebook (Farenheit -> Fahrenheit) (langchain-ai#4938)
Correct typo in APIChain example notebook (Farenheit -> Fahrenheit)
Commit: 7e8e21c
-
Commit: 3002c1d
-
Update custom_multi_action_agent.ipynb (langchain-ai#4931)
Updated the docs from "An agent consists of three parts:" to "An agent consists of two parts:" since there are only two parts in the documentation
Commit: c9f963e
-
docs: added `ecosystem/dependents` page (langchain-ai#4941)
# docs: added `ecosystem/dependents` page
Added the `ecosystem/dependents` page. Can we propose a better page name?
Commit: 8f8593a
-
docs: vectorstores, different updates and fixes (langchain-ai#4939)
# docs: vectorstores, different updates and fixes
Multiple updates:
- added/improved descriptions
- fixed header levels
- added headers
- fixed headers
Commit: a9bb314
-
Chatconv agent: output parser exception (langchain-ai#4923)
The output parser from the chat conversational agent now raises `OutputParserException` like the rest. The `raise OutputParserException(...) from e` form also carries through the original error details on what went wrong. I added `ValueError` as a base class to `OutputParserException` to avoid breaking code that was relying on `ValueError` as a way to catch exceptions from the agent, so catching `ValueError` still works. Not sure if this is a good idea though?
Commit: 5525b70
-
Zep Retriever - Vector Search Over Chat History (langchain-ai#4533)
# Zep Retriever - Vector Search Over Chat History with the Zep Long-term Memory Service
More on Zep: https://github.com/getzep/zep
Note: This PR is related to and relies on langchain-ai#4834. I did not want to modify the `pyproject.toml` file to add the `zep-python` dependency a second time.
Co-authored-by: Daniel Chalef <[email protected]>
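A hedged usage sketch of the retriever described here; the constructor arguments (`session_id`, `url`, `top_k`) are assumptions based on typical Zep usage rather than the exact API in this PR.
```python
# Hypothetical usage: vector search over a Zep-stored chat session.
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-123-session",  # chat session stored in Zep
    url="http://localhost:8000",    # Zep server endpoint
    top_k=5,
)
docs = retriever.get_relevant_documents("What did we decide about pricing?")
```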
Commit: c8c2276
-
Fix get_num_tokens for Anthropic models (langchain-ai#4911)
The Anthropic classes used `BaseLanguageModel.get_num_tokens` because of an issue with multiple inheritance. Fixed by moving the method from `_AnthropicCommon` to both its subclasses. This change will significantly speed up token counting for Anthropic users.
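As a quick illustration of the affected call path, a hedged sketch of counting tokens with the Anthropic chat model; the model name and import are assumptions for illustration only.
```python
# Hypothetical token-counting sketch; the model name is a placeholder.
from langchain.chat_models import ChatAnthropic

chat = ChatAnthropic(model="claude-v1")
# After this fix, get_num_tokens uses Anthropic's own tokenizer instead of
# the slower BaseLanguageModel fallback described above.
print(chat.get_num_tokens("How many tokens is this sentence?"))
```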
Commit: 3df2d83
Commits on May 19, 2023
-
NIT: Instead of hardcoding k in each definition, define it as a param above. (langchain-ai#2675)
Co-authored-by: Dev 2049 <[email protected]>
Co-authored-by: Davis Chase <[email protected]>
Commit: e027a38
-
[nit] Simplify Spark Creation Validation Check A Little Bit (langchain-ai#4761)
- simplify the validation check a little bit
- re-tested in jupyter notebook
Reviewer: @hwchase17
Commit: db6f7ed
-
Fix for syntax when setting search_path for Snowflake database (langchain-ai#4747)
# Fixes syntax for setting Snowflake database search_path
An error occurs when using a Snowflake database and providing a schema argument. I have updated the syntax to run a Snowflake-specific query when the database dialect is 'snowflake'.
Commit: c069732
-
Harrison/spell executor (langchain-ai#4914)
Co-authored-by: Jan Minar <[email protected]>
Commit: 5feb60f
-
Add Spark SQL support (langchain-ai#4602) (langchain-ai#4956)
# Add Spark SQL support
* Add Spark SQL support. It can connect to Spark via building a local/remote SparkSession.
* Include a notebook example
I tried some complicated queries (window functions, table joins), and the tool works well. Compared to the [Spark Dataframe agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/spark.html), this tool is able to generate queries across multiple tables.
Co-authored-by: Gengliang Wang <[email protected]>
Co-authored-by: Mike W <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
Co-authored-by: UmerHA <[email protected]>
Co-authored-by: 张城铭 <[email protected]>
Co-authored-by: assert <[email protected]>
Co-authored-by: blob42 <spike@w530>
Co-authored-by: Yuekai Zhang <[email protected]>
Co-authored-by: Richard He <[email protected]>
Co-authored-by: Dev 2049 <[email protected]>
Co-authored-by: Leonid Ganeline <[email protected]>
Co-authored-by: Alexey Nominas <[email protected]>
Co-authored-by: elBarkey <[email protected]>
Co-authored-by: Davis Chase <[email protected]>
Co-authored-by: Jeffrey D <[email protected]>
Co-authored-by: so2liu <[email protected]>
Co-authored-by: Viswanadh Rayavarapu <[email protected]>
Co-authored-by: Chakib Ben Ziane <[email protected]>
Co-authored-by: Daniel Chalef <[email protected]>
Co-authored-by: Jari Bakken <[email protected]>
Co-authored-by: escafati <[email protected]>
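A hedged sketch of using the new tool through an agent; the toolkit and agent factory names follow the pattern described above and should be treated as assumptions until checked against the notebook example.
```python
# Hypothetical Spark SQL agent setup; schema name and question are placeholders.
from langchain.agents.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.spark_sql import SparkSQL

spark_sql = SparkSQL(schema="langchain_example")  # builds a local SparkSession
llm = ChatOpenAI(temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("Which table has the most rows, and how many?")
```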
Commit: 88a3a56
-
Support Databricks in SQLDatabase (langchain-ai#4702)
This PR adds support for Databricks runtime and Databricks SQL by using the [Databricks SQL Connector for Python](https://docs.databricks.com/dev-tools/python-sql-connector.html).
As a cloud data platform, accessing Databricks requires a URL of the form `databricks://token:{api_token}@{hostname}?http_path={http_path}&catalog={catalog}&schema={schema}`. The URL is complicated and it may take users a while to figure it out. Since the `api_token`/`hostname`/`http_path` fields are known in the Databricks notebook, I am proposing a new method `from_databricks` to simplify the connection to Databricks.
## In Databricks Notebook
After these changes, Databricks users only need to specify the `catalog` and `schema` fields when using langchain.
## In Jupyter Notebook
The method can be used on a local setup as well.
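A hedged sketch of the proposed helper as it might look inside a Databricks notebook, where the host, token, and HTTP path are already known to the runtime; the keyword names are assumptions based on this description.
```python
# Hypothetical usage of the proposed SQLDatabase.from_databricks helper.
from langchain import SQLDatabase

db = SQLDatabase.from_databricks(catalog="main", schema="default")
print(db.get_usable_table_names())
```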
Commit: bf5a3c6
-
Fixed assumptions misspelling (langchain-ai#4961)
Fixed the "assumptions" misspelling in the link mentioned below:
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
Fix for Issue: langchain-ai#4959
@hwchase17
Commit: 13c3763
-
Update tutorials.md (langchain-ai#4960)
# Added a YouTube Tutorial
Added a LangChain tutorial playlist aimed at onboarding newcomers to LangChain and its use cases. I've shared the video in the #tutorials channel and it seemed to be well received. I think this could be useful to the greater community.
## Who can review?
@dev2049
Commit: e80585b
-
Update planner_prompt.py (langchain-ai#4967)
Typos in the OpenAPI agent Prompt.
Commit: e68dfa7
-
power bi api wrapper integration tests & bug fix (langchain-ai#4983)
# Power BI API wrapper bug fix + integration tests
- Bug fix by removing `TYPE_CHECKING` in utilities/powerbi.py
- Added integration test for the Power BI API in utilities/test_powerbi_api.py
- Added integration test for the Power BI agent in agent/test_powerbi_agent.py
- Edited .env.examples to help set up Power BI related environment variables
- Updated the demo notebook with working code in docs../examples/powerbi.ipynb - AzureOpenAI -> ChatOpenAI
Notes: Chat models (gpt3.5, gpt4) are much more capable than davinci at writing DAX queries, so that is important to getting the agent to work properly. Interestingly, gpt3.5-turbo needed examples=DEFAULT_FEWSHOT_EXAMPLES to write consistent DAX queries, so gpt4 seems necessary as the smart llm.
Fixes langchain-ai#4325
## Before submitting
azure-core and azure-identity are necessary dependencies. Check integration tests with the following:
`pytest tests/integration_tests/utilities/test_powerbi_api.py`
`pytest tests/integration_tests/agent/test_powerbi_agent.py`
You will need a Power BI account with a dataset id + table name in order to test. See .env.examples for details.
## Who can review?
@hwchase17 @vowelparrot
Co-authored-by: aditya-pethe <[email protected]>
Commit: 06e5244
-
Commit: 2abf6b9
-
Remove autoreload in examples (langchain-ai#4994)
# Remove autoreload in examples
Remove the `autoreload` in examples since it is not necessary for most users:
```
%load_ext autoreload
%autoreload 2
```
Commit: a87a252
-
Bug fixes and error handling in Redis - Vectorstore (langchain-ai#4932)
# Bug fixes in Redis - Vectorstore (Added the version of redis to the error message and removed the cls argument from a classmethod) Co-authored-by: Tyler Hutcherson <[email protected]>
Commit: 616e9a9
-
Add async search with relevance score (langchain-ai#4558)
Add the async version for the search with relevance score Co-authored-by: Dev 2049 <[email protected]>
Commit: 22d844d
-
Make test gha workflow manually runnable (langchain-ai#4998)
If https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_dispatch is to be believed, this should make it possible to manually kick off the test workflow, but I don't know much about these things.
Commit: 56cb77a
-
Adds 'IN' metadata filter for pgvector for checking set presence (langchain-ai#4982)
# Adds "IN" metadata filter for pgvector to allow checking for set presence
PGVector currently supports metadata filters of the form:
```
{"filter": {"key": "value"}}
```
which will return documents where the "key" metadata field is equal to "value". This PR adds support for metadata filters of the form:
```
{"filter": {"key": { "IN" : ["list", "of", "values"]}}}
```
Other vector stores support this via an "$in" syntax. I chose to use "IN" to match Postgres' syntax, though happy to switch.
Tested locally with PGVector and ChatVectorDBChain. @dev2049
Co-authored-by: [email protected] <[email protected]>
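A hedged sketch of how the new filter shape could be used in a similarity search; the connection string, collection name, and metadata keys are placeholders, and only the filter dict mirrors the PR description.
```python
# Hypothetical PGVector search using the new "IN" metadata filter.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/vectors",
    collection_name="articles",
    embedding_function=OpenAIEmbeddings(),
)
docs = store.similarity_search(
    "latest research",
    filter={"topic": {"IN": ["biology", "chemistry"]}},  # set-membership filter
)
```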
Commit: 0ff5956
-
Commit: 62d0a01
-
PGVector logger message level (langchain-ai#4920)
# Change the logger message level The library is logging at `error` level a situation that is not an error. We noticed this error in our logs, but from our point of view it's an expected behavior and the log level should be `warning`.
Commit: 729e935
-
feature/4493 Improve Evernote Document Loader (langchain-ai#4577)
# Improve Evernote Document Loader
When exporting from Evernote you may export more than one note. Currently the Evernote loader concatenates the content of all notes in the export into a single document and only attaches the name of the export file as metadata on the document.
This change ensures that each note is loaded as an independent document and all available metadata on the note (e.g. author, title, created, updated) is added as metadata on each document.
It also uses the existing optional dependency `html2text` instead of `pypandoc`, removing the need to download the pandoc application via `download_pandoc()` to be able to use the `pypandoc` Python bindings.
Fixes langchain-ai#4493
Co-authored-by: Mike McGarry <[email protected]>
Co-authored-by: Dev 2049 <[email protected]>
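A hedged usage sketch of the per-note loading behaviour described above; the `load_single_document` flag and the export filename are assumptions for illustration.
```python
# Hypothetical usage: load each note in an ENEX export as its own Document.
from langchain.document_loaders import EverNoteLoader

loader = EverNoteLoader("my_notebook_export.enex", load_single_document=False)
docs = loader.load()
for doc in docs[:3]:
    print(doc.metadata.get("title"), doc.metadata.get("created"))
```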
Commit: ddd595f
-
Fix graphql tool (langchain-ai#4984)
Fix construction and add unit test.
Commit: 080eb1b
-
changed ValueError to ImportError (langchain-ai#5006)
# changed ValueError to ImportError in except blocks
Several places had this bug: ValueError does not catch ImportError.
Commit: 2ab0e1d
-
docs: Big Mendable Improvements (langchain-ai#4964)
- Higher accuracy on the responses
- New redesigned UI
- Pretty Sources: display the sources by title / sub-section instead of long URL
- Fixed Reset Button bugs and some other UI issues
- Other tweaks
Commit: 02632d5
-
added instruction about pip install google-generativeai (langchain-ai#5004)
Commit: ddc2d4c
-
Update the GPTCache example (langchain-ai#4985)
# Update the GPTCache example Fixes langchain-ai#4757
Commit: f07b9fd
-
Revert "API update: Engines -> Models (langchain-ai#4915)" (langchain…
…-ai#5008) This reverts commit 8c28ad6. Seems to be causing langchain-ai#5001
Commit: 9928fb2
-
Add self query translator for weaviate vectorstore (langchain-ai#4804)
# Add self query translator for weaviate vectorstore Adds support for the EQ comparator and the AND/OR operators. Co-authored-by: Dominic Chan <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Commit: 6c60251
-
Check for single prompt in __call__ method of the BaseLLM class (langchain-ai#4892)
# Ensuring that users pass a single prompt when calling an LLM
- This PR adds a check to the `__call__` method of the `BaseLLM` class to ensure that it is called with a single prompt
- Raises a `ValueError` if users try to call an LLM with a list of prompts and instructs them to use the `generate` method instead
## Why this could be useful
I stumbled across this by accident. I accidentally called the OpenAI LLM with a list of prompts instead of a single string and still got a result:
```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
"\n\nQ: Why don't scientists trust atoms?\nA: Because they make up everything!"
```
It might be better to catch such a scenario, preventing unnecessary costs and irritation for the user.
## Proposed behaviour
```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/marcus/Projects/langchain/langchain/llms/base.py", line 291, in __call__
    raise ValueError(
ValueError: Argument `prompt` is expected to be a single string, not a list. If you want to run the LLM on multiple prompts, use `generate` instead.
```
Commit: 2aa3754
Commits on May 20, 2023
-
Commit: 27e63b9
-
Commit: 3bc0bf0
-
Streaming only final output of agent (langchain-ai#2483) (langchain-ai#4630)
# Streaming only final output of agent (langchain-ai#2483)
As requested in issue langchain-ai#2483, this Callback allows streaming only the final output of an agent (i.e. not the intermediate steps).
Fixes langchain-ai#2483
Co-authored-by: Dev 2049 <[email protected]>
Commit: 7388248
-
Commit: 9d1280d
Commits on May 21, 2023
-
Fix annoying typo in docs (langchain-ai#5029)
# Fixes an annoying typo in docs
Fixes an annoying typo in the docs: "Therefor" -> "Therefore". It's so annoying to read that I just had to make this PR.
Commit: a6ef20d
-
Add documentation for Databricks integration (langchain-ai#5013)
# Add documentation for Databricks integration
This is a follow-up of langchain-ai#4702. It documents the details of how to integrate Databricks using langchain. It also provides examples in a notebook.
## Who can review?
@dev2049 @hwchase17, since you are aware of the context. We will promote the integration after this doc is ready. Thanks in advance!
Commit: f9f08c4
-
DOC: Misspelling in agents.rst documentation (langchain-ai#5038)
# Corrected Misspelling in agents.rst Documentation
In the [documentation](https://python.langchain.com/en/latest/modules/agents.html) it says "in fact, it is often best to have an Action Agent be in **change** of the execution for the Plan and Execute agent."
**Suggested Change:** I propose correcting "change" to "charge".
Fix for issue: langchain-ai#5039
Commit: 424a573
-
Commit: 8c661ba
-
Harrison/psychic (langchain-ai#5063)
Co-authored-by: Ayan Bandyopadhyay <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Commit: b0431c6
-
Commit: 6c25f86
-
Commit: 224f73e
-
Commit: 0c3de0a
Commits on May 22, 2023
-
feat: batch multiple files in a single Unstructured API request (langchain-ai#4525)
### Submit Multiple Files to the Unstructured API
Enables batching multiple files into a single Unstructured API request. Support for requests with multiple files was added to both `UnstructuredAPIFileLoader` and `UnstructuredAPIFileIOLoader`. Note that if you submit multiple files in "single" mode, the result will be concatenated into a single document. We recommend using this feature in "elements" mode.
### Testing
The following should load both documents, using two of the example docs from the integration tests folder.
```python
from langchain.document_loaders import UnstructuredAPIFileLoader

file_paths = ["examples/layout-parser-paper.pdf", "examples/whatsapp_chat.txt"]

loader = UnstructuredAPIFileLoader(
    file_paths=file_paths,
    api_key="FAKE_API_KEY",
    strategy="fast",
    mode="elements",
)
docs = loader.load()
```
Commit: bf3f554
-
preserve language in conversation retrieval (langchain-ai#4969)
Without the addition of 'in its original language', the condensing response, more often than not, outputs the rephrased question in English, even when the conversation is in another language. This question in English then transfers to the question in the retrieval prompt and the chatbot is stuck in English.
I'm sometimes surprised that this does not happen more often, but apparently the GPT models are smart enough to understand that when the template contains "Question: .... Answer:" then the answer should be in the language of the question.
Commit: a395ff7
-
docs: `Deployments` page moved into `Ecosystem/` (langchain-ai#4949)
# docs: `deployments` page moved into `ecosystem/`
The `Deployments` page moved into the `Ecosystem/` group.
Small fixes:
- `index` page: fixed order of items in the `Modules` list and in the `Use Cases` list
- the item `References/Installation` was lost on the `index` page (not on the Navbar!). Restored it.
- added `|` marker in several places
NOTE: I also thought about moving the `Additional Resources/Gallery` page into the `Ecosystem` group but decided to leave it unchanged. Please advise on this.
## Who can review?
Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Commit: 443ebe2
-
Separate Runner Functions from Client (langchain-ai#5079)
Extract the methods specific to running an LLM or Chain on a dataset to separate utility functions. This simplifies the client a bit and lets us separate concerns of LCP details from running examples (e.g., for evals)
Commit: ef7d015
-
Add 'get_token_ids' method (langchain-ai#4784)
Let the user inspect the token ids in addition to getting the number of tokens.
Co-authored-by: Zach Schillaci <[email protected]>
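A small illustrative sketch of the new method next to the existing token count; it assumes an OpenAI key is configured and uses the default model as a placeholder.
```python
# Hypothetical usage: inspect token ids as well as the token count.
from langchain.llms import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
text = "Tokens are fun"
print(llm.get_num_tokens(text))  # number of tokens
print(llm.get_token_ids(text))   # the underlying token ids for the same text
```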
Commit: 785502e
-
Improved query, print & exception handling in REPL Tool (langchain-ai…
…#4997) Update to pull request langchain-ai#3215 Summary: 1) Improved the sanitization of query (using regex), by removing python command (since gpt-3.5-turbo sometimes assumes python console as a terminal, and runs python command first which causes error). Also sometimes 1 line python codes contain single backticks. 2) Added 7 new test cases. For more details, view the previous pull request. --------- Co-authored-by: Deepak S V <[email protected]>
Commit: 49ca027
-
Harrison/neo4j (langchain-ai#5078)
Co-authored-by: Tomaz Bratanic <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Commit: 10ba201
-
Commit: fcd88bc
-
fix: revert docarray explicit transitive dependencies and use extras instead (langchain-ai#5015)
tldr: The docarray [integration PR](langchain-ai#4483) introduced a pinned dependency on protobuf. This is a docarray dependency, not a langchain dependency. Since this is handled by the docarray dependencies, it is unnecessary here. Further, as a pinned dependency, this quickly leads to incompatibilities with application code that consumes the library, all the more so with a heavily used library like protobuf.
Detail: as we see in the [docarray integration](https://github.com/hwchase17/langchain/pull/4483/files#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R81-R83), the transitive dependencies of docarray were also listed as langchain dependencies. This is unnecessary as the docarray project has an appropriate [extras](https://github.com/docarray/docarray/blob/a01a05542d17264b8a164bec783633658deeedb8/pyproject.toml#L70). The docarray project also does not require this _pinned_ version of protobuf, rather [a minimum version](https://github.com/docarray/docarray/blob/a01a05542d17264b8a164bec783633658deeedb8/pyproject.toml#L41). So this pinned version was likely in error.
To fix this, this PR reverts the explicit hnswlib and protobuf dependencies and adds the hnswlib extras install for docarray (which installs hnswlib and protobuf, as originally intended). Because version `0.32.0` of the docarray hnswlib extras added protobuf, we bump the docarray dependency from `^0.31.0` to `^0.32.0`.
# revert docarray explicit transitive dependencies and use extras instead
## Who can review?
@dev2049 -- reviewed the original PR
@eyurtsev -- bumped the pinned protobuf dependency a few days ago
Co-authored-by: Dev 2049 <[email protected]>
Commit: 6eacd88
-
Improving Resilience of MRKL Agent (langchain-ai#5014)
This is a highly optimized update to pull request langchain-ai#3269
Summary:
1) Added the ability for the MRKL agent to self-solve the ValueError(f"Could not parse LLM output: `{llm_output}`") error whenever the llm (especially gpt-3.5-turbo) does not follow the format of the MRKL agent while returning "Action:" & "Action Input:".
2) The way I am solving this error is by responding back to the llm with the messages "Invalid Format: Missing 'Action:' after 'Thought:'" & "Invalid Format: Missing 'Action Input:' after 'Action:'" whenever "Action:" and "Action Input:" are not present in the llm output, respectively.
For a detailed explanation, look at the previous pull request.
New Updates:
1) Since @hwchase17 requested in the previous PR to communicate the self-correction (error) message using the OutputParserException, I have added a new ability to the OutputParserException class to store the observation & previous llm_output in order to communicate them to the next agent prompt. This is done without breaking/modifying any of the functionality OutputParserException previously performs (i.e. OutputParserException can be used in the same way as before, without passing any observation & previous llm_output too).
Co-authored-by: Deepak S V <[email protected]>
Commit: 5cd1210
-
Improve pinecone hybrid search retriever adding metadata support (langchain-ai#5098)
# Improve pinecone hybrid search retriever by adding metadata support
I simply remove the hardwiring of metadata in the existing implementation, allowing one to pass a `metadatas` attribute to the constructors and in `get_relevant_documents`. I also add one missing pip install to the accompanying notebook (I am not adding dependencies; they were pre-existing).
First contribution, just hoping to help; feel free to critique :) My twitter username is `@andreliebschner`.
While looking at hybrid search I noticed langchain-ai#3043 and langchain-ai#1743. I think the former can be closed, as following the example right now (even prior to my improvements) works just fine; the latter I think can also be closed safely, maybe pointing out the relevant classes and example. Should I reply to those issues mentioning someone? @dev2049, @hwchase17
Co-authored-by: Andreas Liebschner <[email protected]>
Commit: 44dc959
-
Add the usage of SSL certificates for Elasticsearch and user password authentication (langchain-ai#5058)
Enhance the code to support SSL authentication for Elasticsearch when using the VectorStore module, as previous versions did not provide this capability. @dev2049
Co-authored-by: caidong <[email protected]>
Co-authored-by: Dev 2049 <[email protected]>
Commit: 039f8f1
-
add get_top_k_cosine_similarity method to get max top k score and index (langchain-ai#5059)
# Row-wise cosine similarity between two equal-width matrices, returning the top_k scores and indices, where all returned scores are greater than threshold_score.
Co-authored-by: Dev 2049 <[email protected]>
Commit: e57ebf3
-
PowerBI major refinement in working of tool and tweaks in the rest (langchain-ai#5090)
# PowerBI major refinement in working of tool and tweaks in the rest
I've gained some experience with more complex datasets, and the earlier implementation had too many tries by the agent to create DAX, so I refactored the code to run the LLM to create DAX based on a question and then immediately run it against the dataset, with retries and a prompt that includes the error for the retry. This works much better!
Also did some other refactoring of the inner workings, making things clearer, more concise and faster.
Commit: 1cb04f2
-
fix: add_texts method of Weaviate vector store creates wrong embeddings (langchain-ai#4933)
# Fix a bug in the add_texts method of the Weaviate vector store that creates wrong embeddings
The following is the original code in the `add_texts` method of the Weaviate vector store, from line 131 to 153, which contains a bug. The code here includes some extra explanations in the form of comments and some omissions.
```python
for i, doc in enumerate(texts):
    # some code omitted
    if self._embedding is not None:
        # variable texts is a list of strings and doc here is just a string.
        # list(doc) actually breaks up the string into characters,
        # so embeddings[0] is just the embedding of the first character
        embeddings = self._embedding.embed_documents(list(doc))
        batch.add_data_object(
            data_object=data_properties,
            class_name=self._index_name,
            uuid=_id,
            vector=embeddings[0],
        )
```
To fix this bug, I pulled the embedding operation out of the for loop and embed all texts at once.
Co-authored-by: Shawn91 <[email protected]>
Co-authored-by: Dev 2049 <[email protected]>
Commit: 9e64946
-
update langchainplus client and docker file to reflect port changes (langchain-ai#5005)
# Currently, only the dev images are updated
Commit: 467ca6f
-
Fixed import error for AutoGPT e.g. from langchain.experimental.auton… (langchain-ai#5101)
`from langchain.experimental.autonomous_agents.autogpt.agent import AutoGPT` results in an import error as AutoGPT is not defined in the __init__.py file.
https://python.langchain.com/en/latest/use_cases/autonomous_agents/marathon_times.html
An alternate way would be to directly update the import statement to be `from langchain.experimental import AutoGPT`.
Co-authored-by: Dev 2049 <[email protected]>
Commit: 5b2b436
-
Update serpapi.py (langchain-ai#4947)
Added a link option in _process_response. In _process_response, "snippet" provided non-working links in cases where "links" had the correct answer, so an elif statement was added before snippet. The link provided correct answers while the snippet reply provided non-working links. @vowelparrot
Co-authored-by: Dev 2049 <[email protected]>
Commit: 5e47c64
-
changed ValueError to ImportError (langchain-ai#5103)
# changed ValueError to ImportError
Code cleaning. Fixed inconsistencies in ImportError handling: sometimes it raised ImportError and sometimes ValueError. I've changed all cases to `raise ImportError`.
Also:
- added installation instructions to the error messages where they were missing
- fixed several installation instructions in the error messages
- fixed several error-handling cases related to ImportError
Commit: c28cc0f
-
fix: assign current_time to datetime.now() if current_time is None (langchain-ai#5045)
# Assign `current_time` to `datetime.now()` if `current_time is None` in `time_weighted_retriever`
Fixes langchain-ai#4825
As implemented, `add_documents` in `TimeWeightedVectorStoreRetriever` assigns `doc.metadata["last_accessed_at"]` and `doc.metadata["created_at"]` to `datetime.datetime.now()` if `current_time` is not in `kwargs`.
```python
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
    """Add documents to vectorstore."""
    current_time = kwargs.get("current_time", datetime.datetime.now())
    # Avoid mutating input documents
    dup_docs = [deepcopy(d) for d in documents]
    for i, doc in enumerate(dup_docs):
        if "last_accessed_at" not in doc.metadata:
            doc.metadata["last_accessed_at"] = current_time
        if "created_at" not in doc.metadata:
            doc.metadata["created_at"] = current_time
        doc.metadata["buffer_idx"] = len(self.memory_stream) + i
    self.memory_stream.extend(dup_docs)
    return self.vectorstore.add_documents(dup_docs, **kwargs)
```
However, from the way `add_documents` is being called from `GenerativeAgentMemory`, `current_time` is set as a `kwarg`, but it is given a value of `None`:
```python
def add_memory(
    self, memory_content: str, now: Optional[datetime] = None
) -> List[str]:
    """Add an observation or memory to the agent's memory."""
    importance_score = self._score_memory_importance(memory_content)
    self.aggregate_importance += importance_score
    document = Document(
        page_content=memory_content, metadata={"importance": importance_score}
    )
    result = self.memory_retriever.add_documents([document], current_time=now)
```
The default of `now` was set in langchain-ai#4658 to be None. The proposed fix is the following:
```python
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
    """Add documents to vectorstore."""
    current_time = kwargs.get("current_time", datetime.datetime.now())
    # `current_time` may exist in kwargs, but may still have the value of None.
    if current_time is None:
        current_time = datetime.datetime.now()
```
Alternatively, we could just set the default of `now` to be `datetime.datetime.now()` everywhere instead. Thoughts @hwchase17? If we still want to keep the default as `None`, then this PR should fix the above issue. If we want to set the default to `datetime.datetime.now()` instead, I can update this PR with that alternative fix.
EDIT: from langchain-ai#5018 it looks like we would prefer to keep the default as `None`, in which case this PR should fix the error.
Configuration menu - View commit details
-
Copy full SHA for e173e03 - Browse repository at this point
Copy the full SHA e173e03View commit details -
Add Mastodon toots loader (langchain-ai#5036)
# Add Mastodon toots loader. The loader works either with public toots or with Mastodon app credentials. Toot text and user info are loaded. I've also added an integration test for this new loader, since it works with public data, and a notebook with example output from a run. --------- Co-authored-by: Dev 2049 <[email protected]>
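A hedged usage sketch; the `mastodon_accounts` parameter name and the account handle are assumptions based on the description and should be verified against the notebook:

```python
from langchain.document_loaders import MastodonTootsLoader

# Public toots need no credentials; app credentials are only required for private data.
loader = MastodonTootsLoader(mastodon_accounts=["@LangChainAI@mastodon.social"])
docs = loader.load()
print(docs[0].page_content, docs[0].metadata)
```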
Configuration menu - View commit details
-
Copy full SHA for 69de33e - Browse repository at this point
Copy the full SHA 69de33eView commit details
Commits on May 23, 2023
-
Add OpenLM LLM multi-provider (langchain-ai#4993)
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code. --------- Co-authored-by: Dev 2049 <[email protected]>
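Since OpenLM is described as a drop-in replacement built on `BaseOpenAI`, usage presumably mirrors the OpenAI wrapper; a sketch, with the model name purely illustrative:

```python
from langchain.llms import OpenLM

# OpenLM routes OpenAI-compatible completion calls to the configured endpoint over HTTP.
llm = OpenLM(model="text-davinci-003")
print(llm("What is the difference between a duck and a goose?"))
```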
Configuration menu - View commit details
-
Copy full SHA for de6a401 - Browse repository at this point
Copy the full SHA de6a401View commit details -
Pass Dataset Name by Name not Position (langchain-ai#5108)
Pass dataset name by name
Configuration menu - View commit details
-
Copy full SHA for 87bba2e - Browse repository at this point
Copy the full SHA 87bba2eView commit details -
Fixes issue langchain-ai#5072 - adds additional support to Weaviate (l…
…angchain-ai#5085) Implementation is similar to search_distance and where_filter # adds 'additional' support to Weaviate queries Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for b950022 - Browse repository at this point
Copy the full SHA b950022View commit details -
Improve efficiency of TextSplitter.split_documents, iterate once (lan…
…gchain-ai#5111) # Improve TextSplitter.split_documents, collect page_content and metadata in one iteration ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @eyurtsev In the case where documents is a generator that can only be iterated once, this change is a huge help; otherwise a silent issue occurs where metadata is empty for all documents when documents is a generator. So we expand the argument from `List[Document]` to `Union[Iterable[Document], Sequence[Document]]`. --------- Co-authored-by: Steven Tartakovsky <[email protected]>
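A minimal sketch of the single-pass collection described, written as a standalone helper rather than the actual method (everything outside `create_documents` is illustrative):

```python
from typing import Iterable, List
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter

def split_documents_once(splitter, documents: Iterable[Document]) -> List[Document]:
    """Collect page_content and metadata in one pass so a generator is consumed only once."""
    texts, metadatas = [], []
    for doc in documents:
        texts.append(doc.page_content)
        metadatas.append(doc.metadata)
    return splitter.create_documents(texts, metadatas=metadatas)

splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=0)
doc_gen = (Document(page_content="lorem ipsum " * 50, metadata={"source": f"doc-{i}"}) for i in range(3))
chunks = split_documents_once(splitter, doc_gen)
print(len(chunks), chunks[0].metadata)  # metadata survives even though doc_gen was a generator
```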
Configuration menu - View commit details
-
Copy full SHA for d56313a - Browse repository at this point
Copy the full SHA d56313aView commit details -
WhyLabs callback (langchain-ai#4906)
# Add a WhyLabs callback handler * Adds a simple WhyLabsCallbackHandler * Add required dependencies as optional * protect against missing modules with imports * Add docs/ecosystem basic example based on initial prototype from @andrewelizondo > this integration gathers privacy-preserving telemetry on text with whylogs and sends statistical profiles to the WhyLabs platform to monitor these metrics over time. For more information on what WhyLabs is see: https://whylabs.ai After you run the notebook (if you have env variables set for the API Keys, org_id and dataset_id) you get something like this in WhyLabs: ![Screenshot (443)](https://github.com/hwchase17/langchain/assets/88007022/6bdb3e1c-4243-4ae8-b974-23a8bb12edac) Co-authored-by: Andre Elizondo <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for d4fd589 - Browse repository at this point
Copy the full SHA d4fd589View commit details -
Add AzureCognitiveServicesToolkit to call Azure Cognitive Services API (
langchain-ai#5012) # Add AzureCognitiveServicesToolkit to call Azure Cognitive Services API: achieve some multimodal capabilities This PR adds a toolkit named AzureCognitiveServicesToolkit which bundles the following tools: - AzureCogsImageAnalysisTool: calls Azure Cognitive Services image analysis API to extract caption, objects, tags, and text from images. - AzureCogsFormRecognizerTool: calls Azure Cognitive Services form recognizer API to extract text, tables, and key-value pairs from documents. - AzureCogsSpeech2TextTool: calls Azure Cognitive Services speech to text API to transcribe speech to text. - AzureCogsText2SpeechTool: calls Azure Cognitive Services text to speech API to synthesize text to speech. This toolkit can be used to process image, document, and audio inputs. --------- Co-authored-by: Dev 2049 <[email protected]>
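A hedged usage sketch; the import path and the environment variable names are assumptions based on how other toolkits are documented:

```python
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit

# Assumes AZURE_COGS_KEY, AZURE_COGS_ENDPOINT and AZURE_COGS_REGION are set in the environment.
toolkit = AzureCognitiveServicesToolkit()
for tool in toolkit.get_tools():
    print(tool.name)  # image analysis, form recognizer, speech2text, text2speech
```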
Configuration menu - View commit details
-
Copy full SHA for d7f807b - Browse repository at this point
Copy the full SHA d7f807bView commit details -
Add link to Psychic from document loaders documentation page (langcha…
…in-ai#5115) # Add link to Psychic from document loaders documentation page In my previous PR I forgot to update `document_loaders.rst` to link to `psychic.ipynb` to make it discoverable from the main documentation.
Configuration menu - View commit details
-
Copy full SHA for 5c87dbf - Browse repository at this point
Copy the full SHA 5c87dbfView commit details -
Configuration menu - View commit details
-
Copy full SHA for 753f4cf - Browse repository at this point
Copy the full SHA 753f4cfView commit details -
docs: fix minor typo + add wikipedia package installation part in hum…
…an_input_llm.ipynb (langchain-ai#5118) # Fix typo + add wikipedia package installation part in human_input_llm.ipynb This PR 1. Fixes the typo ("the the human input LLM"), 2. Adds the wikipedia package installation part (in accordance with the `WikipediaQueryRun` [documentation](https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html)) in `human_input_llm.ipynb` (`docs/modules/models/llms/examples/human_input_llm.ipynb`)
Configuration menu - View commit details
-
Copy full SHA for 7a75bb2 - Browse repository at this point
Copy the full SHA 7a75bb2View commit details -
solving langchain-ai#2887 (langchain-ai#5127)
# Allowing OpenAI fine-tuned models Very simple fix that checks whether an OpenAI `model_name` is a fine-tuned model when loading `context_size` and when computing a call's cost in the `openai_callback`. Fixes langchain-ai#2887 --------- Co-authored-by: Dev 2049 <[email protected]>
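A sketch of the kind of check described; the helper name and the `"<base>:ft-..."` naming pattern are assumptions for illustration, not the PR's exact code:

```python
def base_model_name(model_name: str) -> str:
    """Map a fine-tuned model id back to its base model for context-size/cost lookups (sketch)."""
    # Fine-tuned OpenAI models at the time looked like "ada:ft-your-org-2023-05-24-12-00-00".
    if ":ft-" in model_name:
        return model_name.split(":")[0]
    return model_name

print(base_model_name("ada:ft-acme-2023-05-24-12-00-00"))  # -> "ada"
```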
Configuration menu - View commit details
-
Copy full SHA for 5002f3a - Browse repository at this point
Copy the full SHA 5002f3aView commit details -
Improve PlanningOutputParser whitespace handling (langchain-ai#5143)
Some LLM's will produce numbered lists with leading whitespace, i.e. in response to "What is the sum of 2 and 3?": ``` Plan: 1. Add 2 and 3. 2. Given the above steps taken, please respond to the users original question. ``` This commit updates the PlanningOutputParser regex to ignore leading whitespace before the step number, enabling it to correctly parse this format.
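A hedged illustration of a whitespace-tolerant step regex (not necessarily the parser's exact pattern):

```python
import re

plan = """Plan:
  1. Add 2 and 3.
  2. Given the above steps taken, please respond to the users original question.
"""

# "\s*" before the step number tolerates the leading whitespace some LLMs emit.
steps = re.findall(r"^\s*\d+\.\s*(.+)", plan, flags=re.MULTILINE)
print(steps)
# ['Add 2 and 3.', 'Given the above steps taken, please respond to the users original question.']
```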
Configuration menu - View commit details
-
Copy full SHA for 754b513 - Browse repository at this point
Copy the full SHA 754b513View commit details -
Add ElasticsearchEmbeddings class for generating embeddings using Ela…
…sticsearch models (langchain-ai#3401) This PR introduces a new module, `elasticsearch_embeddings.py`, which provides a wrapper around Elasticsearch embedding models. The new ElasticsearchEmbeddings class allows users to generate embeddings for documents and query texts using a [model deployed in an Elasticsearch cluster](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-model-ref.html#ml-nlp-model-ref-text-embedding). ### Main features: 1. The ElasticsearchEmbeddings class initializes with an Elasticsearch connection object and a model_id, providing an interface to interact with the Elasticsearch ML client through [infer_trained_model](https://elasticsearch-py.readthedocs.io/en/v8.7.0/api.html?highlight=trained%20model%20infer#elasticsearch.client.MlClient.infer_trained_model) . 2. The `embed_documents()` method generates embeddings for a list of documents, and the `embed_query()` method generates an embedding for a single query text. 3. The class supports custom input text field names in case the deployed model expects a different field name than the default `text_field`. 4. The implementation is compatible with any model deployed in Elasticsearch that generates embeddings as output. ### Benefits: 1. Simplifies the process of generating embeddings using Elasticsearch models. 2. Provides a clean and intuitive interface to interact with the Elasticsearch ML client. 3. Allows users to easily integrate Elasticsearch-generated embeddings. Related issue langchain-ai#3400 --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 0b542a9 - Browse repository at this point
Copy the full SHA 0b542a9View commit details -
Adding Weather Loader (langchain-ai#5056)
Co-authored-by: Tyler Hutcherson <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 68f0d45 - Browse repository at this point
Copy the full SHA 68f0d45View commit details -
Add MosaicML inference endpoints (langchain-ai#4607)
# Add MosaicML inference endpoints This PR adds support in langchain for MosaicML inference endpoints. We both serve a select few open source models, and allow customers to deploy their own models using our inference service. Docs are here (https://docs.mosaicml.com/en/latest/inference.html), and sign up form is here (https://forms.mosaicml.com/demo?utm_source=langchain). I'm not intimately familiar with the details of langchain, or the contribution process, so please let me know if there is anything that needs fixing or this is the wrong way to submit a new integration, thanks! I'm also not sure what the procedure is for integration tests. I have tested locally with my api key. ## Who can review? @hwchase17 --------- Co-authored-by: Harrison Chase <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for de6e6c7 - Browse repository at this point
Copy the full SHA de6e6c7View commit details -
Empty check before pop (langchain-ai#4929)
# Check whether 'other' is empty before popping This PR could fix a potential 'popping empty set' error. Co-authored-by: Junlin Zhou <[email protected]>
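The guard in question, sketched on a generic set:

```python
other = {"langchain"}   # may or may not be empty depending on earlier parsing
if other:               # check before popping: set.pop() on an empty set raises KeyError
    value = other.pop()
```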
Configuration menu - View commit details
-
Copy full SHA for 9242998 - Browse repository at this point
Copy the full SHA 9242998View commit details
Commits on May 24, 2023
-
Add async versions of predict() and predict_messages() (langchain-ai#…
…4867) # Add async versions of predict() and predict_messages() langchain-ai#4615 introduced a unifying interface for "base" and "chat" LLM models via the new `predict()` and `predict_messages()` methods that allow both types of models to operate on string and message-based inputs, respectively. This PR adds async versions of the same (`apredict()` and `apredict_messages()`) that are identical except for their use of `agenerate()` in place of `generate()`, which means they repurpose all existing work on the async backend. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @hwchase17 (follows his work on langchain-ai#4615) @agola11 (async) --------- Co-authored-by: Harrison Chase <[email protected]>
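A hedged usage sketch of the new async methods (requires an OpenAI API key):

```python
import asyncio
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

async def main():
    llm = OpenAI()
    chat = ChatOpenAI()
    # apredict()/apredict_messages() mirror predict()/predict_messages() but use agenerate().
    text = await llm.apredict("Say hello in French")
    message = await chat.apredict_messages([HumanMessage(content="Say hello in French")])
    print(text, message.content)

asyncio.run(main())
```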
Configuration menu - View commit details
-
Copy full SHA for 925dd3e - Browse repository at this point
Copy the full SHA 925dd3eView commit details -
fix: fix current_time=Now bug for aadd_documents in TimeWeightedRetri…
…ever (langchain-ai#5155) # Same as PR langchain-ai#5045, but for async Fixes langchain-ai#4825 I had forgotten to update the asynchronous counterpart `aadd_documents` with the bug fix from PR langchain-ai#5045, so this PR fixes `aadd_documents` too. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Configuration menu - View commit details
-
Copy full SHA for b1b7f35 - Browse repository at this point
Copy the full SHA b1b7f35View commit details -
Docs: updated getting_started.md (langchain-ai#5151)
# Docs: updated getting_started.md Just removed some unnecessary spaces in the "pass few shot examples to a prompt template" example. @vowelparrot
Configuration menu - View commit details
-
Copy full SHA for de4ef24 - Browse repository at this point
Copy the full SHA de4ef24View commit details -
Clarification of the reference to the "get_text_legth" function in ge… (
langchain-ai#5154) # Clarification of the reference to the "get_text_legth" function in getting_started.md Reference to the function "get_text_legth" in the documentation did not make sense. Comment added for clarification. @hwchase17
Configuration menu - View commit details
-
Copy full SHA for c111134 - Browse repository at this point
Copy the full SHA c111134View commit details -
docs: added missed `document_loaders` examples (langchain-ai#5150) # DOCS added missed document_loader examples Added missed examples: `JSON`, `Open Document Format (ODT)`, `Wikipedia`, `tomarkdown`. Updated them to a consistent format. ## Who can review? @hwchase17 @dev2049
Configuration menu - View commit details
-
Copy full SHA for 3392948 - Browse repository at this point
Copy the full SHA 3392948View commit details -
Add Typesense vector store (langchain-ai#1674)
Closes langchain-ai#931. --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 9c4b43b - Browse repository at this point
Copy the full SHA 9c4b43bView commit details -
# Vectara Integration This PR provides integration with Vectara. Implemented here are: * langchain/vectorstore/vectara.py * tests/integration_tests/vectorstores/test_vectara.py * langchain/retrievers/vectara_retriever.py And two IPYNB notebooks to do more testing: * docs/modules/chains/index_examples/vectara_text_generation.ipynb * docs/modules/indexes/vectorstores/examples/vectara.ipynb --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for c81fb88 - Browse repository at this point
Copy the full SHA c81fb88View commit details -
# Beam Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. Additional calls can then be made through the instance of the large language model in your code or by calling the Beam API. --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for faa2665 - Browse repository at this point
Copy the full SHA faa2665View commit details -
Update rellm_experimental.ipynb (langchain-ai#5189)
HuggingFace -> Hugging Face
Configuration menu - View commit details
-
Copy full SHA for fff21a0 - Browse repository at this point
Copy the full SHA fff21a0View commit details -
example usage (langchain-ai#5182)
Adding example usage for elasticsearch knn embeddings [per](langchain-ai#3401 (comment)) https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/elasticsearch.py
Configuration menu - View commit details
-
Copy full SHA for cf19a2a - Browse repository at this point
Copy the full SHA cf19a2aView commit details -
adjust docarray docstrings (langchain-ai#5185)
Follow-up of langchain-ai#5015 Thanks for catching this! Just a small PR to adjust a couple of strings to these changes Signed-off-by: jupyterjazz <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 47e4ee4 - Browse repository at this point
Copy the full SHA 47e4ee4View commit details -
Configuration menu - View commit details
-
Copy full SHA for 2d5588c - Browse repository at this point
Copy the full SHA 2d5588cView commit details -
Harrison/modelscope (langchain-ai#5156)
Co-authored-by: thomas-yanxin <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 11c26eb - Browse repository at this point
Copy the full SHA 11c26ebView commit details -
Reuse `length_func` in `MapReduceDocumentsChain` (langchain-ai#5181) # Reuse `length_func` in `MapReduceDocumentsChain` Pretty straightforward refactor in `MapReduceDocumentsChain`: reusing the local variable `length_func` instead of the longer alternative `self.combine_document_chain.prompt_length`. @hwchase17
Configuration menu - View commit details
-
Copy full SHA for aa14e22 - Browse repository at this point
Copy the full SHA aa14e22View commit details -
Update Cypher QA prompt (langchain-ai#5173)
# Improve Cypher QA prompt The current QA prompt is optimized for networkX answer generation, which returns all the possible triples. However, Cypher search is a bit more focused and doesn't necessarily return all the context information. For that reason, the model sometimes refuses to generate an answer even though the information is provided: ![Screenshot from 2023-05-24 08-36-23](https://github.com/hwchase17/langchain/assets/19948365/351cf9c1-2567-447c-91fd-284ae3fa1ccf) To fix this issue, I have updated the prompt. Interestingly, I tried many variations with fewer instructions and they didn't work properly. However, the current fix works nicely. ![Screenshot from 2023-05-24 08-37-25](https://github.com/hwchase17/langchain/assets/19948365/fc830603-e6ec-4a23-8a86-eaf572996014)
Configuration menu - View commit details
-
Copy full SHA for fd866d1 - Browse repository at this point
Copy the full SHA fd866d1View commit details -
Improve weaviate vectorstore docs (langchain-ai#5201)
# Improve weaviate vectorstore docs
Configuration menu - View commit details
-
Copy full SHA for b00c77d - Browse repository at this point
Copy the full SHA b00c77dView commit details -
tfidf retriever (langchain-ai#5114)
Co-authored-by: vempaliakhil96 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 2b2176a - Browse repository at this point
Copy the full SHA 2b2176aView commit details -
standardize json parsing (langchain-ai#5168)
Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 94cf391 - Browse repository at this point
Copy the full SHA 94cf391View commit details -
fixing total cost finetuned model giving zero (langchain-ai#5144)
# OpenAI finetuned model giving zero tokens cost Very simple fix to the previously committed solution for allowing finetuned OpenAI models. Improves langchain-ai#5127 --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 52714ce - Browse repository at this point
Copy the full SHA 52714ceView commit details -
Fixes scope of query Session in PGVector (langchain-ai#5194)
`vectorstore.PGVector`: the transactional boundary should be increased to cover the query itself. Currently, within `similarity_search_with_score_by_vector`, the transactional boundary (created via the `Session` call) does not include the select query being made. This can result in unintended consequences when interacting with the PGVector instance methods directly. --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for c173bf1 - Browse repository at this point
Copy the full SHA c173bf1View commit details -
Output parsing variation allowance (langchain-ai#5178)
# Output parsing variation allowance for self-ask with search This change makes self-ask with search easier for Llama models to follow, as they tend toward returning 'Followup:' instead of 'Follow up:' despite an otherwise valid remaining output. Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for d8eed60 - Browse repository at this point
Copy the full SHA d8eed60View commit details -
Allow readthedoc loader to pass custom html tag (langchain-ai#5175)
## Description The html structure of readthedocs sites can differ. Currently, the html tag is hardcoded in the reader and cannot handle some of these cases. This PR includes the following changes: 1. Replace `find_all` with `find` because we just want one tag. 2. Provide `custom_html_tag` to the loader. 3. Add tests for the readthedocs loader. 4. Refactor code. ## Issues See more in langchain-ai#2609. The problem was not completely fixed in that PR. --------- Signed-off-by: byhsu <[email protected]> Co-authored-by: byhsu <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
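A hedged usage sketch; the `(tag, attrs)` form of `custom_html_tag` and the example tag are assumptions about a particular site's layout and should be checked against the loader's signature:

```python
from langchain.document_loaders import ReadTheDocsLoader

# Point the loader at whatever tag actually wraps the page body on this site.
loader = ReadTheDocsLoader("rtdocs/", custom_html_tag=("div", {"role": "main"}))
docs = loader.load()
```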
Configuration menu - View commit details
-
Copy full SHA for f0730c6 - Browse repository at this point
Copy the full SHA f0730c6View commit details -
Add Iugu document loader (langchain-ai#5162)
Create IUGU loader --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for f10be07 - Browse repository at this point
Copy the full SHA f10be07View commit details -
Add Joplin document loader (langchain-ai#5153)
# Add Joplin document loader [Joplin](https://joplinapp.org/) is an open source note-taking app. Joplin has a [REST API](https://joplinapp.org/api/references/rest_api/) for accessing its local database. The proposed `JoplinLoader` uses the API to retrieve all notes in the database and their metadata. Joplin needs to be installed and running locally, and an access token is required. - The PR includes an integration test. - The PR includes an example notebook. --------- Co-authored-by: Dev 2049 <[email protected]>
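A hedged usage sketch; assumes Joplin is running locally with the Web Clipper service enabled and an access token taken from its settings:

```python
from langchain.document_loaders import JoplinLoader

loader = JoplinLoader(access_token="<joplin-access-token>")
docs = loader.load()
print(docs[0].metadata)  # note title, folder, tags, timestamps, etc.
```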
Configuration menu - View commit details
-
Copy full SHA for 44abe92 - Browse repository at this point
Copy the full SHA 44abe92View commit details -
Configuration menu - View commit details
-
Copy full SHA for dcee893 - Browse repository at this point
Copy the full SHA dcee893View commit details -
Configuration menu - View commit details
-
Copy full SHA for b7fcb35 - Browse repository at this point
Copy the full SHA b7fcb35View commit details -
Log warning (langchain-ai#5192)
Changes debug log to warning log when LC Tracer fails to instantiate
Configuration menu - View commit details
-
Copy full SHA for 66113c2 - Browse repository at this point
Copy the full SHA 66113c2View commit details -
Configuration menu - View commit details
-
Copy full SHA for e76e68b - Browse repository at this point
Copy the full SHA e76e68bView commit details -
Add 'status' command to get server status (langchain-ai#5197)
Example: ``` $ langchain plus start --expose ... $ langchain plus status The LangChainPlus server is currently running. Service Status Published Ports langchain-backend Up 40 seconds 1984 langchain-db Up 41 seconds 5433 langchain-frontend Up 40 seconds 80 ngrok Up 41 seconds 4040 To connect, set the following environment variables in your LangChain application: LANGCHAIN_TRACING_V2=true LANGCHAIN_ENDPOINT=https://5cef-70-23-89-158.ngrok.io $ langchain plus stop $ langchain plus status The LangChainPlus server is not running. $ langchain plus start The LangChainPlus server is currently running. Service Status Published Ports langchain-backend Up 5 seconds 1984 langchain-db Up 6 seconds 5433 langchain-frontend Up 5 seconds 80 To connect, set the following environment variables in your LangChain application: LANGCHAIN_TRACING_V2=true LANGCHAIN_ENDPOINT=http://localhost:1984 ```
Configuration menu - View commit details
-
Copy full SHA for e6c4571 - Browse repository at this point
Copy the full SHA e6c4571View commit details -
Harrison/vertex (langchain-ai#5049)
Co-authored-by: Leonid Kuligin <[email protected]> Co-authored-by: Leonid Kuligin <[email protected]> Co-authored-by: sasha-gitg <[email protected]> Co-authored-by: Justin Flick <[email protected]> Co-authored-by: Justin Flick <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for a775aa6 - Browse repository at this point
Copy the full SHA a775aa6View commit details
Commits on May 25, 2023
-
fix a mistake in concepts.md (langchain-ai#5222)
# fix a mistake in concepts.md ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
Configuration menu - View commit details
-
Copy full SHA for 2ad29f4 - Browse repository at this point
Copy the full SHA 2ad29f4View commit details -
Create async copy of from_text() inside GraphIndexCreator. (langchain…
…-ai#5214) Copies `GraphIndexCreator.from_text()` to make an async version called `GraphIndexCreator.afrom_text()`. This is (should be) a trivial change: it just adds a copy of `GraphIndexCreator.from_text()` which is async and awaits a call to `chain.apredict()` instead of `chain.predict()`. There is no unit test for GraphIndexCreator, and I did not create one, but this code works for me locally. @agola11 @hwchase17
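A hedged usage sketch of the new async method (requires an OpenAI API key):

```python
import asyncio
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI

async def main():
    creator = GraphIndexCreator(llm=OpenAI(temperature=0))
    # afrom_text() mirrors from_text() but awaits chain.apredict() internally.
    graph = await creator.afrom_text("Alice is a software engineer. Alice works with Bob.")
    print(graph.get_triples())

asyncio.run(main())
```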
Configuration menu - View commit details
-
Copy full SHA for 95c9aa1 - Browse repository at this point
Copy the full SHA 95c9aa1View commit details -
Remove API key from docs (langchain-ai#5223)
I found an API key for `serpapi_api_key` while reading the docs. It seems to have been modified very recently. Removed it in this PR @hwchase17 - project lead
Configuration menu - View commit details
-
Copy full SHA for eff31a3 - Browse repository at this point
Copy the full SHA eff31a3View commit details -
Change Default GoogleDriveLoader Behavior to not Load Trashed Files (…
…issue langchain-ai#5104) (langchain-ai#5220) # Change Default GoogleDriveLoader Behavior to not Load Trashed Files (issue langchain-ai#5104) Fixes langchain-ai#5104 If you want the previous behavior of loading files that used to live in the folder but are now trashed, you can use the `load_trashed_files` parameter: ``` loader = GoogleDriveLoader( folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5", recursive=False, load_trashed_files=True ) ``` As not loading trashed files should be the expected behavior, should we 1. even provide the `load_trashed_files` parameter? 2. add documentation? It feels like most users will stick with the default behavior. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: DataLoaders - @eyurtsev Twitter: [@nicholasliu77](https://twitter.com/nicholasliu77)
Configuration menu - View commit details
-
Copy full SHA for f0ea093 - Browse repository at this point
Copy the full SHA f0ea093View commit details -
Allow to specify ID when adding to the FAISS vectorstore. (langchain-…
…ai#5190) # Allow to specify ID when adding to the FAISS vectorstore This change allows unique IDs to be specified when adding documents / embeddings to a faiss vectorstore. - This reflects the current approach with the chroma vectorstore. - It allows rejection of inserts on duplicate IDs - will allow deletion / update by searching on deterministic ID (such as a hash). - If not specified, a random UUID is generated (as per previous behaviour, so non-breaking). This commit fixes langchain-ai#5065 and langchain-ai#3896 and should fix langchain-ai#2699 indirectly. I've tested adding and merging. Kindly tagging @Xmaster6y @dev2049 for review. --------- Co-authored-by: Ati Sharma <[email protected]> Co-authored-by: Harrison Chase <[email protected]>
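A hedged usage sketch of supplying deterministic IDs, assuming `from_texts` forwards the new `ids` argument as described; the hashing scheme is just an example:

```python
import hashlib
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = ["LangChain supports FAISS.", "IDs can now be supplied explicitly."]
ids = [hashlib.sha256(t.encode()).hexdigest() for t in texts]  # deterministic per document

db = FAISS.from_texts(texts, OpenAIEmbeddings(), ids=ids)
# Re-adding the same IDs can now be detected or rejected instead of silently duplicated.
```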
Configuration menu - View commit details
-
Copy full SHA for 40b086d - Browse repository at this point
Copy the full SHA 40b086dView commit details -
Bibtex integration for document loader and retriever (langchain-ai#5137)
# Bibtex integration Wrap bibtexparser to retrieve a list of docs from a bibtex file. * Get the metadata from the bibtex entries * `page_content` get from the local pdf referenced in the `file` field of the bibtex entry using `pymupdf` * If no valid pdf file, `page_content` set to the `abstract` field of the bibtex entry * Support Zotero flavour using regex to get the file path * Added usage example in `docs/modules/indexes/document_loaders/examples/bibtex.ipynb` --------- Co-authored-by: Sébastien M. Popoff <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
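A hedged usage sketch (file path illustrative):

```python
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader("references.bib")
docs = loader.load()
# page_content comes from the linked PDF when available, otherwise from the abstract field.
print(docs[0].metadata if docs else "no entries")
```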
Configuration menu - View commit details
-
Copy full SHA for 5cfa72a - Browse repository at this point
Copy the full SHA 5cfa72aView commit details -
Add MiniMax embeddings (langchain-ai#5174)
- Add support for MiniMax embeddings Doc: [MiniMax embeddings](https://api.minimax.chat/document/guides/embeddings?id=6464722084cdc277dfaa966a) --------- Co-authored-by: Archon <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 5cdd9ab - Browse repository at this point
Copy the full SHA 5cdd9abView commit details -
Weaviate: Add QnA with sources example (langchain-ai#5247)
# Add QnA with sources example Fixes: see https://stackoverflow.com/questions/76207160/langchain-doesnt-work-with-weaviate-vector-database-getting-valueerror/76210017#76210017 ## Before submitting ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Configuration menu - View commit details
-
Copy full SHA for 09e246f - Browse repository at this point
Copy the full SHA 09e246fView commit details -
Configuration menu - View commit details
-
Copy full SHA for 9e57be4 - Browse repository at this point
Copy the full SHA 9e57be4View commit details -
Configuration menu - View commit details
-
Copy full SHA for 15b17f9 - Browse repository at this point
Copy the full SHA 15b17f9View commit details -
remove extra "\n" to ensure that the format of the description, examp… (
langchain-ai#5232) remove extra "\n" to ensure that the format of the description, example, and prompt&generation are completely consistent.
Configuration menu - View commit details
-
Copy full SHA for c7e2151 - Browse repository at this point
Copy the full SHA c7e2151View commit details -
Resolve error in StructuredOutputParser docs (langchain-ai#5240)
# Resolve error in StructuredOutputParser docs The documentation for `StructuredOutputParser` is currently not reproducible; that is, `output_parser.parse(output)` raises an error because the LLM returns a response with an invalid format ```python _input = prompt.format_prompt(question="what's the capital of france") output = model(_input.to_string()) output # ? # # ```json # { # "answer": "Paris", # "source": "https://www.worldatlas.com/articles/what-is-the-capital-of-france.html" # } # ``` ``` This was fixed by adding a question mark to the prompt
Configuration menu - View commit details
-
Copy full SHA for 9c0cb90 - Browse repository at this point
Copy the full SHA 9c0cb90View commit details -
Added the option of specifying a proxy for the OpenAI API (langchain-…
…ai#5246) # Added the option of specifying a proxy for the OpenAI API Fixes langchain-ai#5243 Co-authored-by: Yves Maurer <>
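A hedged usage sketch; `openai_proxy` is assumed to be the parameter name added here and should be checked against the PR:

```python
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",
    openai_proxy="http://proxy.example.com:8080",  # assumed parameter name; proxy URL illustrative
)
```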
Configuration menu - View commit details
-
Copy full SHA for 88ed8e1 - Browse repository at this point
Copy the full SHA 88ed8e1View commit details -
OpenSearch top k parameter fix (langchain-ai#5216)
For most queries it's the `size` parameter that determines the final number of documents to return. Since our abstractions refer to this as `k`, set this to be `k` everywhere instead of expecting a separate param. Would be great to have someone more familiar with OpenSearch validate that this is reasonable (e.g. that having `size` and what OpenSearch calls `k` be the same won't lead to any strange behavior). cc @naveentatikonda Closes langchain-ai#5212
Configuration menu - View commit details
-
Copy full SHA for 3be9ba1 - Browse repository at this point
Copy the full SHA 3be9ba1View commit details -
Fixed regression in JoplinLoader's get note url (langchain-ai#5265)
Configuration menu - View commit details
-
Copy full SHA for d3cd21c - Browse repository at this point
Copy the full SHA d3cd21cView commit details -
Docs link custom agent page in getting started (langchain-ai#5250)
# Docs: link custom agent page in getting started
Configuration menu - View commit details
-
Copy full SHA for 5525602 - Browse repository at this point
Copy the full SHA 5525602View commit details -
Zep sdk version (langchain-ai#5267)
zep-python's sync methods no longer need an asyncio wrapper. This was causing issues with FastAPI deployment. Zep also now supports putting and getting of arbitrary message metadata. Bump zep-python version to v0.30 Remove nest-asyncio from Zep example notebooks. Modify tests to include metadata. --------- Co-authored-by: Daniel Chalef <[email protected]> Co-authored-by: Daniel Chalef <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for ca88b25 - Browse repository at this point
Copy the full SHA ca88b25View commit details -
Add C Transformers for GGML Models (langchain-ai#5218)
# Add C Transformers for GGML Models I created Python bindings for the GGML models: https://github.com/marella/ctransformers Currently it supports GPT-2, GPT-J, GPT-NeoX, LLaMA, MPT, etc. See [Supported Models](https://github.com/marella/ctransformers#supported-models). It provides a unified interface for all models: ```python from langchain.llms import CTransformers llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2') print(llm('AI is going to')) ``` It can be used with models hosted on the Hugging Face Hub: ```py llm = CTransformers(model='marella/gpt-2-ggml') ``` It supports streaming: ```py from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()]) ``` Please see [README](https://github.com/marella/ctransformers#readme) for more details. --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for b398862 - Browse repository at this point
Copy the full SHA b398862View commit details -
Add visible_only and strict_mode options to ClickTool (langchain-ai#4088
) Partially addresses: langchain-ai#4066
Configuration menu - View commit details
-
Copy full SHA for 3223a97 - Browse repository at this point
Copy the full SHA 3223a97View commit details -
Add Multi-CSV/DF support in CSV and DataFrame Toolkits (langchain-ai#…
…5009) Add Multi-CSV/DF support in CSV and DataFrame Toolkits * CSV and DataFrame toolkits now accept list of CSVs/DFs * Add default prompts for many dataframes in `pandas_dataframe` toolkit Fixes langchain-ai#1958 Potentially fixes langchain-ai#4423 ## Testing * Add single and multi-dataframe integration tests for `pandas_dataframe` toolkit with permutations of `include_df_in_prompt` * Add single and multi-CSV integration tests for csv toolkit --------- Co-authored-by: Harrison Chase <[email protected]>
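A hedged usage sketch of the new list-of-CSVs form (file paths and question illustrative):

```python
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

# A list of paths instead of a single path builds one agent over several dataframes.
agent = create_csv_agent(OpenAI(temperature=0), ["titanic.csv", "titanic_age_fillna.csv"])
agent.run("How many rows are in each dataframe?")
```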
Configuration menu - View commit details
-
Copy full SHA for 7652d2a - Browse repository at this point
Copy the full SHA 7652d2aView commit details -
OpenAI lint (langchain-ai#5273)
This was causing lint issues if you have openai installed, which is annoying for local dev
Configuration menu - View commit details
-
Copy full SHA for f01dfe8 - Browse repository at this point
Copy the full SHA f01dfe8View commit details
Commits on May 26, 2023
-
Added pipeline args to `HuggingFacePipeline.from_model_id` (langchain-ai#5268)
The current `HuggingFacePipeline.from_model_id` does not allow passing pipeline arguments to the transformers pipeline. This PR enables setting important pipeline parameters, for example `max_new_tokens`. Previous to this PR it was necessary to manually create the pipeline through huggingface transformers and then hand it to langchain. For example, instead of this ```py model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) ``` You can write this ```py hf = HuggingFacePipeline.from_model_id( model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10} ) ``` Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 2ef5579 - Browse repository at this point
Copy the full SHA 2ef5579View commit details -
Support bigquery dialect - SQL (langchain-ai#5261)
Adds an if statement to deal with the bigquery sql dialect. When I used the bigquery dialect before, it failed while using SET search_path TO, so a condition was added that sets the dataset as the schema parameter, which is the equivalent of SET search_path TO. I have tested it and it works. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Configuration menu - View commit details
-
Copy full SHA for 56ad56c - Browse repository at this point
Copy the full SHA 56ad56cView commit details -
feat: add Momento as a standard cache and chat message history provid…
…er (langchain-ai#5221) # Add Momento as a standard cache and chat message history provider This PR adds Momento as a standard caching provider. Implements the interface, adds integration tests, and documentation. We also add Momento as a chat history message provider along with integration tests, and documentation. [Momento](https://www.gomomento.com/) is a fully serverless cache. Similar to S3 or DynamoDB, it requires zero configuration, infrastructure management, and is instantly available. Users sign up for free and get 50GB of data in/out for free every month. ## Before submitting ✅ We have added documentation, notebooks, and integration tests demonstrating usage. Co-authored-by: Dev 2049 <[email protected]>
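A hedged usage sketch of the cache integration; `from_client_params` and the `MOMENTO_AUTH_TOKEN` environment variable follow the description and notebook and should be verified there:

```python
from datetime import timedelta

import langchain
from langchain.cache import MomentoCache

# Assumes MOMENTO_AUTH_TOKEN is set; the cache name is illustrative.
langchain.llm_cache = MomentoCache.from_client_params("langchain", ttl=timedelta(days=1))
```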
Configuration menu - View commit details
-
Copy full SHA for 7047a2c - Browse repository at this point
Copy the full SHA 7047a2cView commit details -
Fixed typo: 'ouput' to 'output' in all documentation (langchain-ai#5272)
# Fixed typo: 'ouput' to 'output' in all documentation In this instance, the typo 'ouput' was amended to 'output' in all occurrences within the documentation. There are no dependencies required for this change.
Configuration menu - View commit details
-
Copy full SHA for a0281f5 - Browse repository at this point
Copy the full SHA a0281f5View commit details -
Tedma4/twilio tool (langchain-ai#5136)
# Add twilio sms tool --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 1cb6498 - Browse repository at this point
Copy the full SHA 1cb6498View commit details -
LLM wrapper for Databricks (langchain-ai#5142)
This PR adds LLM wrapper for Databricks. It supports two endpoint types: * serving endpoint * cluster driver proxy app An integration notebook is included to show how it works. Co-authored-by: Davis Chase <[email protected]> Co-authored-by: Gengliang Wang <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for aec642f - Browse repository at this point
Copy the full SHA aec642fView commit details -
Add an example to make the prompt more robust (langchain-ai#5291)
# Add example to LLMMath to help with power operator Add example to LLMMath that helps the model to interpret `^` as the power operator rather than the python xor operator.
Configuration menu - View commit details
-
Copy full SHA for d481d88 - Browse repository at this point
Copy the full SHA d481d88View commit details -
Update CONTRIBUTION guidelines and PR Template (langchain-ai#5140)
# Update contribution guidelines and PR template This PR updates the contribution guidelines to include more information on how to handle optional dependencies. The PR template is updated to include a link to the contribution guidelines document.
Configuration menu - View commit details
-
Copy full SHA for a669abf - Browse repository at this point
Copy the full SHA a669abfView commit details -
Fixed passing creds to VertexAI LLM (langchain-ai#5297)
# Fixed passing creds to VertexAI LLM Fixes langchain-ai#5279 It looks like we should drop a type annotation for Credentials. Co-authored-by: Leonid Kuligin <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for aa3c7b3 - Browse repository at this point
Copy the full SHA aa3c7b3View commit details -
Configuration menu - View commit details
-
Copy full SHA for 641303a - Browse repository at this point
Copy the full SHA 641303aView commit details -
Better docs for weaviate hybrid search (langchain-ai#5290)
# Better docs for weaviate hybrid search Fixes: NA ## Before submitting ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Configuration menu - View commit details
-
Copy full SHA for 58e95cd - Browse repository at this point
Copy the full SHA 58e95cdView commit details -
Add instructions to pyproject.toml (langchain-ai#5138)
# Add instructions to pyproject.toml * Add instructions to pyproject.toml about how to handle optional dependencies. ## Before submitting ## Who can review? --------- Co-authored-by: Davis Chase <[email protected]> Co-authored-by: Zander Chase <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 0a8d6bc - Browse repository at this point
Copy the full SHA 0a8d6bcView commit details -
docs: improve flow of llm caching notebook (langchain-ai#5309)
# docs: improve flow of llm caching notebook The notebook `llm_caching` demos various caching providers. In the previous version, there was setup common to all examples but under the `In Memory Caching` heading. If a user comes and only wants to try a particular example, they will run the common setup, then the cells for the specific provider they are interested in. Then they will get import and variable reference errors. This commit moves the common setup to the top to avoid this. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
Configuration menu - View commit details
-
Copy full SHA for f75f0db - Browse repository at this point
Copy the full SHA f75f0dbView commit details
Commits on May 27, 2023
-
# Documentation typo fixes Simple typos in the blockchain .ipynb documentation
Configuration menu - View commit details
-
Copy full SHA for 6e974b5 - Browse repository at this point
Copy the full SHA 6e974b5View commit details
Commits on May 28, 2023
-
docs: added link to LangChain Handbook (langchain-ai#5311)
# added a link to LangChain Handbook ## Who can review? Community members can review the PR once tests pass.
Configuration menu - View commit details
-
Copy full SHA for 465a970 - Browse repository at this point
Copy the full SHA 465a970View commit details -
Configuration menu - View commit details
-
Copy full SHA for 179ddbe - Browse repository at this point
Copy the full SHA 179ddbeView commit details -
Configuration menu - View commit details
-
Copy full SHA for 5292e85 - Browse repository at this point
Copy the full SHA 5292e85View commit details -
Add Chainlit to deployment options (langchain-ai#5314)
# Add Chainlit to deployment options Add [Chainlit](https://github.com/Chainlit/chainlit) as deployment options Used links to Github examples and Chainlit doc on the LangChain integration Co-authored-by: Dan Constantini <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for c49c6ac - Browse repository at this point
Copy the full SHA c49c6acView commit details -
Fixing blank thoughts in verbose for "_Exception" Action (langchain-a…
…i#5331) Fixed the issue of blank Thoughts being printed in verbose when `handle_parsing_errors=True`, as below: Before Fix: ``` Observation: There are 38175 accounts available in the dataframe. Thought: Observation: Invalid or incomplete response Thought: Observation: Invalid or incomplete response Thought: ``` After Fix: ``` Observation: There are 38175 accounts available in the dataframe. Thought:AI: { "action": "Final Answer", "action_input": "There are 38175 accounts available in the dataframe." } Observation: Invalid Action or Action Input format Thought:AI: { "action": "Final Answer", "action_input": "The number of available accounts is 38175." } Observation: Invalid Action or Action Input format ``` @vowelparrot currently I have set the colour of thought to green (same as the colour when `handle_parsing_errors=False`). If you want to change the colour of this "_Exception" case to red or something else (when `handle_parsing_errors=True`), feel free to change it in line 789.
Configuration menu - View commit details
-
Copy full SHA for c6e5d90 - Browse repository at this point
Copy the full SHA c6e5d90View commit details -
fix: remove empty lines that cause InvalidRequestError (langchain-ai#…
…5320) # remove empty lines in GenerativeAgentMemory that cause InvalidRequestError in OpenAIEmbeddings Let's say the text given to `GenerativeAgent._parse_list` is ``` text = """ Insight 1: <insight 1> Insight 2: <insight 2> """ ``` This creates an `openai.error.InvalidRequestError: [''] is not valid under any of the given schemas - 'input'` because `GenerativeAgent.add_memory()` tries to add an empty string to the vectorstore. This PR fixes the issue by removing the empty line between `Insight 1` and `Insight 2` ## Before submitting ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @hwchase17 @vowelparrot @dev2049
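The fix boils down to dropping blank lines before they can reach the embedder; a minimal sketch of that filtering (not the exact implementation):

```python
import re

def parse_list(text: str) -> list:
    """Split newline-separated items, strip leading numbering, and skip empty lines (sketch)."""
    lines = re.split(r"\n", text.strip())
    return [re.sub(r"^\s*\d+\.\s*", "", line).strip() for line in lines if line.strip()]

print(parse_list("Insight 1: <insight 1>\n\nInsight 2: <insight 2>"))
# ['Insight 1: <insight 1>', 'Insight 2: <insight 2>']  (the blank line is dropped)
```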
Configuration menu - View commit details
-
Copy full SHA for f079cdf - Browse repository at this point
Copy the full SHA f079cdfView commit details -
Sample Notebook for DynamoDB Chat Message History (langchain-ai#5351)
# Sample Notebook for DynamoDB Chat Message History @dev2049 Adding a sample notebook for the DynamoDB Chat Message History class.
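A hedged usage sketch of the class the notebook covers (table name and session id illustrative; the DynamoDB table must already exist and AWS credentials must be configured):

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)
```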
Configuration menu - View commit details
-
Copy full SHA for 881dfe8 - Browse repository at this point
Copy the full SHA 881dfe8View commit details -
added cosmos kwargs option (langchain-ai#5292)
# Added the ability to pass kwargs to the cosmos client constructor The cosmos client has a ton of options that can be set, so this PR allows those to be passed through from the chat memory constructor.
Configuration menu - View commit details
-
Copy full SHA for 1daa706 - Browse repository at this point
Copy the full SHA 1daa706View commit details -
feat: support for shopping search in SerpApi (langchain-ai#5259)
# Support for shopping search in SerpApi ## Who can review? @vowelparrot
Configuration menu - View commit details
-
Copy full SHA for e274295 - Browse repository at this point
Copy the full SHA e274295View commit details -
Add SKLearnVectorStore (langchain-ai#5305)
# Add SKLearnVectorStore This PR adds SKLearnVectorStore, a simple vector store based on the NearestNeighbors implementations in the scikit-learn package. This provides a simple drop-in vector store implementation with minimal dependencies (scikit-learn is typically installed in a data scientist / ml engineer environment). The vector store can be persisted and loaded from json, bson and parquet format. SKLearnVectorStore has a soft (dynamic) dependency on the scikit-learn, numpy and pandas packages. Persisting to bson requires the bson package, persisting to parquet requires the pyarrow package. ## Before submitting Integration tests are provided under `tests/integration_tests/vectorstores/test_sklearn.py` Sample usage notebook is provided under `docs/modules/indexes/vectorstores/examples/sklear.ipynb` Co-authored-by: Dev 2049 <[email protected]>
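A hedged usage sketch; `persist_path` and `serializer` follow the json/bson/parquet persistence described above and should be verified against the notebook:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

texts = ["scikit-learn backs this store", "it persists to json, bson or parquet"]
store = SKLearnVectorStore.from_texts(
    texts, OpenAIEmbeddings(), persist_path="./sklearn_store.json", serializer="json"
)
store.persist()
print(store.similarity_search("what backs this store?", k=1))
```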
Configuration menu - View commit details
-
Copy full SHA for 5f45523 - Browse repository at this point
Copy the full SHA 5f45523View commit details -
Configuration menu - View commit details
-
Copy full SHA for b705f26 - Browse repository at this point
Copy the full SHA b705f26View commit details -
Fixes iter error in FAISS add_embeddings call (langchain-ai#5367)
# Remove re-use of iter within add_embeddings causing error As reported in langchain-ai#5336, there is currently an issue involving the attempted re-use of an iterator within the FAISS vectorstore adapter. Fixes langchain-ai#5336 ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: VectorStores / Retrievers / Memory - @dev2049
Matt Wells authored May 28, 2023 Configuration menu - View commit details
-
Copy full SHA for 9a5c9df - Browse repository at this point
Copy the full SHA 9a5c9dfView commit details -
Configuration menu - View commit details
-
Copy full SHA for b692797 - Browse repository at this point
Copy the full SHA b692797View commit details -
Configuration menu - View commit details
-
Copy full SHA for ad7f4c0 - Browse repository at this point
Copy the full SHA ad7f4c0View commit details -
Add path validation to DirectoryLoader (langchain-ai#5327)
# Add path validation to DirectoryLoader This PR introduces a minor adjustment to the DirectoryLoader by adding validation for the path argument. Previously, if the provided path didn't exist or wasn't a directory, DirectoryLoader would return an empty document list due to the behavior of the `glob` method. This could potentially cause confusion for users, as they might expect a file-loading error instead. So, I've added two validations to the load method of the DirectoryLoader: - Raise a FileNotFoundError if the provided path does not exist - Raise a ValueError if the provided path is not a directory Due to the relatively small scope of these changes, a new issue was not created. ## Before submitting ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @eyurtsev
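The two validations described, sketched as a standalone helper:

```python
from pathlib import Path

def validate_directory(path: str) -> Path:
    """Fail fast with a clear error instead of silently returning zero documents (sketch)."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"Directory not found: '{path}'")
    if not p.is_dir():
        raise ValueError(f"Expected directory, got file: '{path}'")
    return p
```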
Configuration menu - View commit details
-
Copy full SHA for 1366d07 - Browse repository at this point
Copy the full SHA 1366d07View commit details -
Fix: Handle empty documents in ContextualCompressionRetriever (Issue l…
…angchain-ai#5304) (langchain-ai#5306) # Fix: Handle empty documents in ContextualCompressionRetriever (Issue langchain-ai#5304) Fixes langchain-ai#5304 Prevent cohere.error.CohereAPIError caused by an empty list of documents by adding a condition to check if the input documents list is empty in the compress_documents method. If the list is empty, return an empty list immediately, avoiding the error and unnecessary processing. @dev2049 --------- Co-authored-by: Dev 2049 <[email protected]>
Configuration menu - View commit details
-
Copy full SHA for 99a1e3f - Browse repository at this point
Copy the full SHA 99a1e3fView commit details
Commits on May 29, 2023
-
handle json parsing errors (langchain-ai#5371)
adds test cases, consolidates a lot of PRs
Configuration menu - View commit details
-
Copy full SHA for 6df90ad - Browse repository at this point
Copy the full SHA 6df90adView commit details -
Use Default Factory (langchain-ai#5380)
We shouldn't be calling a constructor for a default value - we should use default_factory instead. This is especially bad in this case since it requires an optional dependency and an API key to be set. Resolves langchain-ai#5361
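The pattern being recommended, sketched on a hypothetical client class:

```python
from pydantic import BaseModel, Field

class ExpensiveClient:
    """Stand-in for a client that needs an optional dependency and an API key."""
    def __init__(self) -> None:
        ...

class Wrapper(BaseModel):
    # default_factory defers construction until an instance is actually created,
    # instead of building the client at class-definition/import time.
    client: ExpensiveClient = Field(default_factory=ExpensiveClient)

    class Config:
        arbitrary_types_allowed = True
```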
Configuration menu - View commit details
-
Copy full SHA for 14099f1 - Browse repository at this point
Copy the full SHA 14099f1View commit details -
Update PR template with Twitter handle request (langchain-ai#5382)
# Updates PR template to request Twitter handle for shoutouts! Makes it easier for maintainers to show their appreciation 😄
Configuration menu - View commit details
-
Copy full SHA for f77f271 - Browse repository at this point
Copy the full SHA f77f271View commit details -
fix: Blob.from_data mimetype is lost (langchain-ai#5395)
# Fix lost mimetype when using Blob.from_data method The mimetype is lost due to a typo in the class attribute name Fixes # - (no issue opened but I can open one if needed) ## Changes * Fixed typo in name * Added unit tests to validate the output Blob ## Review @eyurtsev
Configuration menu - View commit details
-
Copy full SHA for 8b7721e - Browse repository at this point
Copy the full SHA 8b7721eView commit details -
Add async support to routing chains (langchain-ai#5373)
# Add async support for (LLM) routing chains Add asynchronous LLM call support for the routing chains. More specifically: - Add an async `aroute` function (i.e. the async version of `route`) to `RouterChain`, which calls the routing LLM asynchronously - Implement the async `_acall` for `LLMRouterChain` - Implement the async `_acall` function for `MultiRouteChain`, which first calls the routing chain asynchronously with its new `aroute` function and then asynchronously calls the relevant destination chain. ## Who can review? - @agola11
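A hedged usage sketch exercising the new async path through a router chain; `MultiPromptChain.from_prompts` and the `prompt_infos` shape follow existing sync examples and should be verified (requires an OpenAI API key):

```python
import asyncio
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

async def main():
    prompt_infos = [
        {"name": "physics", "description": "good for physics questions",
         "prompt_template": "You are a physicist. Answer concisely: {input}"},
        {"name": "history", "description": "good for history questions",
         "prompt_template": "You are a historian. Answer concisely: {input}"},
    ]
    chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos=prompt_infos)
    # arun() now works end to end because the router's _acall is implemented.
    print(await chain.arun("Why is the sky blue?"))

asyncio.run(main())
```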
Commit e455ba4
-
Fix update_document function, add test and documentation. (langchain-…
…ai#5359) # Fix for `update_document` Function in Chroma ## Summary This pull request addresses an issue with the `update_document` function in the Chroma class, as described in [langchain-ai#5031](langchain-ai#5031 (comment)). The issue was identified as an `AttributeError` raised when calling `update_document` due to a missing corresponding method in the `Collection` object. This fix refactors the `update_document` method in `Chroma` to correctly interact with the `Collection` object. ## Changes 1. Fixed the `update_document` method in the `Chroma` class to correctly call methods on the `Collection` object. 2. Added the corresponding test `test_chroma_update_document` in `tests/integration_tests/vectorstores/test_chroma.py` to reflect the updated method call. 3. Added an example and explanation of how to use the `update_document` function in the Jupyter notebook tutorial for Chroma. ## Test Plan All existing tests pass after this change. In addition, the `test_chroma_update_document` test case now correctly checks the functionality of `update_document`, ensuring that the function works as expected and updates the content of documents correctly. ## Reviewers @dev2049 This fix will ensure that users are able to use the `update_document` function as expected, without encountering the previous `AttributeError`. This will enhance the usability and reliability of the Chroma class for all users. Thank you for considering this pull request. I look forward to your feedback and suggestions.
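A short usage sketch of the repaired method; the document ids, texts, and keyword names are illustrative, and an OpenAI API key is assumed for the embeddings:

```python
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

docs = [Document(page_content="foo", metadata={"source": "a"})]
ids = ["doc-1"]
db = Chroma.from_documents(docs, OpenAIEmbeddings(), ids=ids)

# Replace the stored content (and re-embed it) for an existing id.
updated = Document(page_content="foo bar", metadata={"source": "a"})
db.update_document(document_id=ids[0], document=updated)
```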
Commit 44b48d9
-
Update llamacpp demonstration notebook (langchain-ai#5344)
# Update llamacpp demonstration notebook Add instructions to install with the BLAS backend, and update the example of model usage. Fixes langchain-ai#5071. However, it is more a prevention of similar issues in the future than a fix, since there was no problem in the framework functionality itself. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: - @hwchase17 - @agola11
Commit f6615ca
-
Removed deprecated llm attribute for load_chain (langchain-ai#5343)
# Removed deprecated llm attribute for load_chain Currently `load_chain` for some chain types expects the `llm` attribute to be present, but `llm` is a deprecated attribute for those chains and might not be persisted during their `chain.save`. Fixes langchain-ai#5224 [(issue)](langchain-ai#5224) ## Who can review? @hwchase17 @dev2049 --------- Co-authored-by: imeckr <[email protected]>
Commit 642ae83
-
Harrison/llamacpp (langchain-ai#5402)
Co-authored-by: Gavin S <[email protected]>
Commit 3e16468
-
Add pagination for Vertex AI embeddings (langchain-ai#5325)
Fixes langchain-ai#5316 --------- Co-authored-by: Justin Flick <[email protected]> Co-authored-by: Harrison Chase <[email protected]>
Commit c09f8e4
-
Reformat openai proxy setting as code (langchain-ai#5330)
# Reformat the openai proxy setting as code Only affects the docs for the OpenAI model. - @hwchase17 - @agola11
Commit 100d665
-
Harrison/deep infra (langchain-ai#5403)
Co-authored-by: Yessen Kanapin <[email protected]> Co-authored-by: Yessen Kanapin <[email protected]>
Commit 416c8b1
-
Harrison/prediction guard update (langchain-ai#5404)
Co-authored-by: Daniel Whitenack <[email protected]>
Commit d6fb25c
-
Implemented appending arbitrary messages (langchain-ai#5293)
# Implemented appending arbitrary messages to the base chat message history, the in-memory and cosmos ones. As discussed, this is the alternative approach to langchain-ai#4480, with an `add_message` method added that takes a `BaseMessage` as input, so that the user can control what is in the base message, like kwargs. ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @hwchase17 --------- Co-authored-by: Harrison Chase <[email protected]>
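A brief sketch of the new entry point (import paths and message contents are illustrative):

```python
from langchain.memory import ChatMessageHistory
from langchain.schema import ChatMessage, HumanMessage

history = ChatMessageHistory()

# The existing convenience helper still works...
history.add_user_message("hi!")

# ...and add_message accepts any BaseMessage, so the caller controls the
# role and any additional kwargs carried on the message.
history.add_message(HumanMessage(content="hello again", additional_kwargs={"name": "alice"}))
history.add_message(ChatMessage(role="system", content="You are terse."))

print([m.type for m in history.messages])
```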
Commit ccb6238
-
docs: `ecosystem/integrations` update 2 (langchain-ai#5282)
# docs: ecosystem/integrations update 2 langchain-ai#5219 - part 1 The second part of this update (parts are independent of each other! no overlap): - added diffbot.md - updated confluence.ipynb; added confluence.md - updated college_confidential.md - updated openai.md - added blackboard.md - added bilibili.md - added azure_blob_storage.md - added azlyrics.md - added aws_s3.md ## Who can review? @hwchase17 @agola11 @vowelparrot @dev2049
Commit a359819
-
docs: `ecosystem/integrations` update 1 (langchain-ai#5219)
# docs: ecosystem/integrations update It is the first in a series of `ecosystem/integrations` updates. The ecosystem/integrations list is missing many integrations. I'm adding the missing integrations in a consistent format: 1. description of the integrated system 2. `Installation and Setup` section with `pip install ...`, key setup, and other necessary settings 3. Sections like `LLM`, `Text Embedding Models`, `Chat Models`... with links to corresponding examples and imports of the used classes. This PR keeps new docs that are presented in `docs/modules/models/text_embedding/examples` but missed in `ecosystem/integrations`. The next PRs will cover the next example sections. Also updated `integrations.rst`: added the `Dependencies` section with a link to the packages used in LangChain. ## Who can review? @hwchase17 @eyurtsev @dev2049
Commit 1837caa
-
Harrison/datetime parser (langchain-ai#4693)
Co-authored-by: Jacob Valdez <[email protected]> Co-authored-by: Jacob Valdez <[email protected]> Co-authored-by: Eugene Yurtsev <[email protected]>
Commit 2da8c48
-
Commit cce731c
-
Add ToolException that a tool can throw. (langchain-ai#5050)
# Add ToolException that a tool can throw This is an optional exception that a tool can throw when an execution error occurs. When this exception is thrown, the agent will not stop working, but will handle the exception according to the handle_tool_error variable of the tool, and the processing result will be returned to the agent as an observation and printed in pink on the console. It can be used like this:
```python
from langchain.schema import ToolException
from langchain import LLMMathChain, SerpAPIWrapper, OpenAI
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import BaseTool, StructuredTool, Tool, tool

llm = ChatOpenAI(temperature=0)
llm_math_chain = LLMMathChain(llm=llm, verbose=True)

class Error_tool:
    def run(self, s: str):
        raise ToolException('The current search tool is not available.')

def handle_tool_error(error) -> str:
    return "The following errors occurred during tool execution:" + str(error)

search_tool1 = Error_tool()
search_tool2 = SerpAPIWrapper()
tools = [
    Tool.from_function(
        func=search_tool1.run,
        name="Search_tool1",
        description="useful for when you need to answer questions about current events.You should give priority to using it.",
        handle_tool_error=handle_tool_error,
    ),
    Tool.from_function(
        func=search_tool2.run,
        name="Search_tool2",
        description="useful for when you need to answer questions about current events",
        return_direct=True,
    )
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,
                         handle_tool_errors=handle_tool_error)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```
![image](https://github.com/hwchase17/langchain/assets/32786500/51930410-b26e-4f85-a1e1-e6a6fb450ada) ## Who can review? - @vowelparrot --------- Co-authored-by: Dev 2049 <[email protected]>
Commit cf5803e
-
Harrison/text splitter (langchain-ai#5417)
adds support for keeping separators around when using recursive text splitter
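A quick sketch of the behaviour, assuming the option is exposed as a `keep_separator` flag on the recursive splitter:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "First sentence. Second sentence. Third sentence."

splitter = RecursiveCharacterTextSplitter(
    separators=[". ", " "],
    chunk_size=20,
    chunk_overlap=0,
    keep_separator=True,  # keep the separator text in the produced chunks
)
for chunk in splitter.split_text(text):
    print(repr(chunk))
```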
Commit 72f99ff
Commits on May 30, 2023
-
New Trello document loader (langchain-ai#4767)
# Added New Trello loader class and documentation Simple loader on top of the py-trello wrapper. With a board name you can pull cards and do some field parameter tweaks on the load operation. I included documentation and examples. Included unit test cases using patch and a fixture for the py-trello client class. --------- Co-authored-by: Dev 2049 <[email protected]>
Commit 0b3e0dd
-
DocumentLoader for GitHub (langchain-ai#5408)
# Creates GitHubLoader (langchain-ai#5257) GitHubLoader is a DocumentLoader that loads issues and PRs from GitHub. Fixes langchain-ai#5257 --------- Co-authored-by: Dev 2049 <[email protected]>
Commit 8259f9b
-
Harrison/spark reader (langchain-ai#5405)
Co-authored-by: Rithwik Ediga Lakhamsani <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Commit 760632b
-
Set old LCTracer to default to port 8000 (langchain-ai#5381)
Commit 26ff185
-
Rename and fix typo in lancedb (langchain-ai#5425)
# Fix typo in LanceDB notebook filename
Commit ee57054
-
Commit c4b502a
-
adding MongoDBAtlasVectorSearch (langchain-ai#5338)
# Add MongoDBAtlasVectorSearch for the python library Fixes langchain-ai#5337 --------- Co-authored-by: Dev 2049 <[email protected]>
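A minimal connection sketch; the URI, database/collection, and index name are placeholders, the constructor arguments are assumptions, and a pre-built Atlas search index plus an OpenAI API key are assumed:

```python
from pymongo import MongoClient

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/?retryWrites=true")
collection = client["my_db"]["my_collection"]

vectorstore = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(),
    index_name="default",  # name of the Atlas $search index (placeholder)
)
vectorstore.add_texts(["LangChain now supports MongoDB Atlas vector search."])
docs = vectorstore.similarity_search("Which vector stores are supported?")
```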
Commit a61b7f7
-
Add more code splitters (go, rst, js, java, cpp, scala, ruby, php, sw…
…ift, rust) (langchain-ai#5171) As the title says, I added more code splitters. The implementation is trivial, so I didn't add separate tests for each splitter. Let me know if there are any concerns. Fixes langchain-ai#5170 ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @eyurtsev @hwchase17 --------- Signed-off-by: byhsu <[email protected]> Co-authored-by: byhsu <[email protected]>
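For example, assuming the new languages are exposed through the existing `Language` enum and the `from_language` constructor:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

go_code = """
package main

import "fmt"

func main() {
    fmt.Println("Hello, world!")
}
"""

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.GO, chunk_size=60, chunk_overlap=0
)
for doc in splitter.create_documents([go_code]):
    print(doc.page_content)
    print("---")
```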
Commit 9d658aa
-
Commit 64b4165
-
Commit 2649b63
-
Commit 4379bd4
-
Fixed docstring in faiss.py for load_local (langchain-ai#5440)
# Fix for docstring in faiss.py vectorstore (load_local) The docstring should reflect that load_local loads something FROM the disk.
Commit 0d3a9d4
-
Removes duplicated call from langchain/client/langchain.py (langchain…
…-ai#5449) This removes duplicate code presumably introduced by a cut-and-paste error, spotted while reviewing the code in `langchain/client/langchain.py`. The original code had back-to-back occurrences of the following code block:
```
response = self._get(
    path,
    params=params,
)
raise_for_status_with_text(response)
```
Commit e09afb4
-
`encoding_kwargs` for InstructEmbeddings (langchain-ai#5450)
# What does this PR do? Bring support of `encode_kwargs` for `HuggingFaceInstructEmbeddings`, change the docstring example and add a test to illustrate with `normalize_embeddings`. Fixes langchain-ai#3605 (Similar to langchain-ai#3914) Use case:
```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs
)
```
Commit c1807d8
-
MRKL output parser no longer breaks well formed queries (langchain-ai…
…#5432) # Handles the edge scenario in which the action input is a well formed SQL query which ends with a quoted column There may be a cleaner option here (or indeed other edge scenarios) but this seems to robustly determine if the action input is likely to be a well formed SQL query in which we don't want to arbitrarily trim off `"` characters Fixes langchain-ai#5423 ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: For a quicker response, figure out the right person to tag with @ @hwchase17 - project lead Agents / Tools / Toolkits - @vowelparrot
Matt Wells authored May 30, 2023
Commit 1d861dc
-
docs: cleaning (langchain-ai#5413)
# docs cleaning Changed docs to consistent format (probably, we need an official doc integration template): - ClearML - added product descriptions; changed title/headers - Rebuff - added product descriptions; changed title/headers - WhyLabs - added product descriptions; changed title/headers - Docugami - changed title/headers/structure - Airbyte - fixed title - Wolfram Alpha - added descriptions, fixed title - OpenWeatherMap - added product descriptions; changed title/headers - Unstructured - changed description ## Who can review? Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @hwchase17 @dev2049
Commit 1f11f80
-
Added async _acall to FakeListLLM (langchain-ai#5439)
# Added Async _acall to FakeListLLM FakeListLLM is handy when unit testing apps built with langchain. This allows the use of FakeListLLM inside concurrent code with [asyncio](https://docs.python.org/3/library/asyncio.html). I also changed the pydocstring which was out of date. ## Who can review? @hwchase17 - project lead @agola11 - async
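A small sketch of using it from async code, assuming FakeListLLM takes a `responses` list of canned outputs:

```python
import asyncio

from langchain.llms.fake import FakeListLLM


async def main() -> None:
    llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
    # agenerate is the async counterpart of generate; the new _acall makes it usable.
    result = await llm.agenerate(["prompt one", "prompt two"])
    for generations in result.generations:
        print(generations[0].text)


asyncio.run(main())
```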
Commit 80e133f
-
Feat: Add batching to Qdrant (langchain-ai#5443)
# Add batching to Qdrant Several people requested a batching mechanism while uploading data to Qdrant. It is important, as there are some limits for the maximum size of the request payload, and without batching implemented in Langchain, users need to implement it on their own. This PR exposes a new optional `batch_size` parameter, so all the documents/texts are loaded in batches of the expected size (64, by default). The integration tests of Qdrant are extended to cover two cases: 1. Documents are sent in separate batches. 2. All the documents are sent in a single request.
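A rough usage sketch; the in-memory location and collection name are placeholders, and passing `batch_size` through `from_texts` is an assumption based on the description above:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

texts = [f"document number {i}" for i in range(1_000)]

# Upload in batches of 128 points instead of one oversized request.
qdrant = Qdrant.from_texts(
    texts,
    OpenAIEmbeddings(),
    location=":memory:",
    collection_name="demo",
    batch_size=128,
)
```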
Commit f93d256
-
Update psychicapi version (langchain-ai#5471)
Update [psychicapi](https://pypi.org/project/psychicapi/) python package dependency to the latest version 0.5. The newest python package version addresses breaking changes in the Psychic http api.
Commit 8181f9e
-
Add maximal relevance search to SKLearnVectorStore (langchain-ai#5430)
# Add maximal relevance search to SKLearnVectorStore This PR implements maximal marginal relevance (MMR) search in SKLearnVectorStore. Twitter handle: jtolgyesi (I also submitted the original implementation of SKLearnVectorStore) ## Before submitting Unit tests are included. Co-authored-by: Dev 2049 <[email protected]>
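A brief sketch, assuming the standard `max_marginal_relevance_search` method name and an OpenAI API key for the embeddings:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

texts = [
    "The weather is sunny today.",
    "It is bright and sunny outside.",
    "The stock market dropped sharply.",
]
store = SKLearnVectorStore.from_texts(texts, OpenAIEmbeddings())

# MMR trades off similarity to the query against diversity among the results.
docs = store.max_marginal_relevance_search("sunny weather", k=2, fetch_k=3)
```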
Commit 1111f18
-
add simple test for imports (langchain-ai#5461)
Co-authored-by: Dev 2049 <[email protected]>
Commit eab4b4c
-
Ability to specify credentials when using Google BigQuery as a data …
…loader (langchain-ai#5466) # Adds ability to specify credentials when using Google BigQuery as a data loader Fixes langchain-ai#5465. Adds the ability to set credentials, which must be of the `google.auth.credentials.Credentials` type. This argument is optional and will default to `None`. Co-authored-by: Dev 2049 <[email protected]>
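A short sketch of passing explicit credentials; the service-account path, project, and query are placeholders, and the loader/parameter names are assumptions:

```python
from google.oauth2 import service_account

from langchain.document_loaders import BigQueryLoader

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"  # placeholder
)
loader = BigQueryLoader(
    query="SELECT title, body FROM `my-project.my_dataset.my_table` LIMIT 10",
    project="my-project",
    credentials=credentials,  # optional; ambient credentials are used when None
)
docs = loader.load()
```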
Commit 199cc70
-
convert the parameter 'text' to uppercase in the function 'parse' of …
…the class BooleanOutputParser (langchain-ai#5397) When the LLM outputs 'yes|no', BooleanOutputParser can now parse it to 'True|False', fixing the ValueError in parse(). (Previously, when using BooleanOutputParser in chain_filter.py and the LLM output 'yes|no', the parse function would throw a ValueError.) Fixes langchain-ai#5396 --------- Co-authored-by: gaofeng27692 <[email protected]>
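A small sketch of the behaviour after the fix (import path and defaults are assumptions):

```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()  # true_val="YES", false_val="NO" by default

# Lower-case model output is upper-cased before comparison,
# so it no longer raises a ValueError.
assert parser.parse("yes") is True
assert parser.parse("No") is False
```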
Commit e31705b
-
added n_threads functionality for gpt4all (langchain-ai#5427)
# Added support for modifying the number of threads in the GPT4All model I have added the capability to modify the number of threads used by the GPT4All model. This allows users to adjust the model's parallel processing capabilities based on their specific requirements. ## Changes Made - Updated the `validate_environment` method to set the number of threads for the GPT4All model using the `values["n_threads"]` parameter from the `GPT4All` class constructor. ## Context Useful in scenarios where users want to optimize the model's performance by leveraging multi-threading capabilities. Please note that the `n_threads` parameter was included in the `GPT4All` class constructor but was previously unused. This change ensures that the specified number of threads is utilized by the model. ## Dependencies There are no new dependencies introduced by this change. It only utilizes existing functionality provided by the GPT4All package. ## Testing Since this is a minor change, testing is not required. --------- Co-authored-by: Dev 2049 <[email protected]>
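A minimal sketch; the model path is a placeholder and the thread count is illustrative:

```python
from langchain.llms import GPT4All

llm = GPT4All(
    model="./models/ggml-gpt4all-l13b-snoozy.bin",  # placeholder path to a local model file
    n_threads=8,  # number of CPU threads the model may use
)
print(llm("Name three uses of multithreading."))
```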
Commit 8121e04
Commits on May 31, 2023
-
Allow for async use of SelfAskWithSearchChain (langchain-ai#5394)
# Allow for async use of SelfAskWithSearchChain Co-authored-by: Dev 2049 <[email protected]>
Commit 0a44bfd
-
Allow ElasticsearchEmbeddings to create a connection with ES Client o…
…bject (langchain-ai#5321) This PR adds a new method `from_es_connection` to the `ElasticsearchEmbeddings` class allowing users to use Elasticsearch clusters outside of Elastic Cloud. Users can create an Elasticsearch Client object and pass that to the new function. The returned object is identical to the one returned by calling `from_credentials`
```
# Create Elasticsearch connection
es_connection = Elasticsearch(
    hosts=['https://es_cluster_url:port'],
    basic_auth=('user', 'password')
)

# Instantiate ElasticsearchEmbeddings using es_connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
    model_id,
    es_connection,
)
```
I also added examples to the elasticsearch jupyter notebook Fixes langchain-ai#5239 --------- Co-authored-by: Dev 2049 <[email protected]>
Commit 46e181a
-
SQLite-backed Entity Memory (langchain-ai#5129)
# SQLite-backed Entity Memory Following the initiative of langchain-ai#2397, I think it would be helpful to be able to persist Entity Memory on disk by default. Co-authored-by: Dev 2049 <[email protected]>
Commit ce8b7a2
-
Commit 1671c2a
-
Harrison/html splitter (langchain-ai#5468)
Co-authored-by: David Revillas <[email protected]>
Commit f72bb96
-
Feature: Qdrant filters supports (langchain-ai#5446)
# Support Qdrant filters Qdrant has an [extensive filtering system](https://qdrant.tech/documentation/concepts/filtering/) with rich type support. This PR makes it possible to use the filters in Langchain by passing an additional param to both the `similarity_search_with_score` and `similarity_search` methods. ## Who can review? @dev2049 @hwchase17 --------- Co-authored-by: Dev 2049 <[email protected]>
Commit 8bcaca4
-
Add matching engine vectorstore (langchain-ai#3350)
Co-authored-by: Tom Piaggio <[email protected]> Co-authored-by: scafati98 <[email protected]> Co-authored-by: scafati98 <[email protected]> Co-authored-by: Dev 2049 <[email protected]>
Commit 470b282
-
Commit b39c069
Commits on Sep 13, 2023
-
Commit 272c63c
Commits on Sep 14, 2023
-
Merge branch 'mongo-document-loader' of https://github.com/saginawj/l…
…angchain into mongo-document-loader
Commit 922e147