adding MongoDBAtlasVectorSearch #5338

Merged · 13 commits merged into langchain-ai:master on May 30, 2023
Conversation

@P-E-B (Contributor, Author) commented May 27, 2023

Add MongoDBAtlasVectorSearch to the Python library.

Fixes #5337

Who can review?

@dev2049

Review threads were opened on langchain/vectorstores/mongodb_atlas.py and tests/integration_tests/vectorstores/test_mongodb_atlas.py (most now resolved). One thread on mongodb_atlas.py:
    )
    """
    if not connection_string or not namespace:
        raise ValueError(
@P-E-B (Contributor, Author) commented:

I find it odd to define default None values for mandatory parameters and then check here that they were actually passed as non-None values.
This is why I used kwargs in my initial commit, since the vectorstore abstract method is the blocking element. I would recommend making this abstract method more flexible to avoid such a situation.

@dev2049 (Contributor) commented:

Changing the abstract method at this point would be a large (potentially breaking) change, so we would want to be very thoughtful about that, and it would warrant its own PR. What would the ideal interface look like, in your opinion?

@P-E-B (Contributor, Author) commented:

`metadatas: Optional[List[dict]] = None` could be `metadatas: Optional[List[dict]]`, which would allow subsequent parameters to be declared without default values (a sketch of the two signatures follows).
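A minimal sketch of the difference, using hypothetical classes; the `collection` parameter is illustrative and not part of the actual LangChain interface:

```python
from typing import Any, List, Optional


class CurrentStyle:
    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Any,
        metadatas: Optional[List[dict]] = None,  # default value on this parameter...
        **kwargs: Any,
    ) -> "CurrentStyle":
        # ...means store-specific required arguments must travel through **kwargs
        return cls()


class ProposedStyle:
    @classmethod
    def from_texts(
        cls,
        texts: List[str],
        embedding: Any,
        metadatas: Optional[List[dict]],  # no default value here...
        collection: Any,                  # ...so a required store-specific parameter can follow
        **kwargs: Any,
    ) -> "ProposedStyle":
        return cls()
```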

@dev2049 (Contributor) left a comment:

thanks for the comments @P-E-B!

@izzymsft (Contributor) left a comment:

This is a fantastic contribution. I would like to see it adapted to:

  • include support for other MongoDB server types
  • include support for multiple search configuration options

@P-E-B (Contributor, Author) commented May 29, 2023

[DONE] I'll check all the code again tonight and modify it to implement my observations. Please wait before merging.

@izzymsft (Contributor) commented:

@P-E-B @hwchase17 I have created #5419 for the Azure Cosmos DB for MongoDB vCore vector store. Thank you for your feedback. You may disregard my prior suggestion to merge the two vector store types here.

@dev2049 (Contributor) commented May 30, 2023

@P-E-B so is knnBeta only available to certain users at the moment?

@P-E-B (Contributor, Author) commented May 30, 2023

> @P-E-B so is knnBeta only available to certain users at the moment?

@dev2049 No, it's available to all users who have a MongoDB Atlas cluster.
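For context, a rough sketch of the knnBeta query issued through a $search aggregation stage on Atlas; the index name, field path, and vector values below are placeholders, not the library's exact output:

```python
# Rough shape of the aggregation pipeline run for a similarity search on Atlas.
query_vector = [0.01, -0.02, 0.03]  # embedding of the query text (placeholder)

pipeline = [
    {
        "$search": {
            "index": "langchain-demo-index",  # Atlas Search index name
            "knnBeta": {
                "vector": query_vector,       # query embedding
                "path": "embedding",          # field holding document embeddings
                "k": 4,                       # number of nearest neighbours
            },
        }
    },
    {"$project": {"text": 1, "score": {"$meta": "searchScore"}}},
]

# cursor = collection.aggregate(pipeline)  # only works against a MongoDB Atlas cluster
```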

@dev2049 (Contributor) commented May 30, 2023

> @dev2049 No, it's available to all users who have a MongoDB Atlas cluster.

I haven't been able to get the notebook to work, but I'm probably doing something wrong. Happy to land this if you've validated it.

@P-E-B (Contributor, Author) commented May 30, 2023

@dev2049 The integration tests are passing. You need to:

  1. Create a MongoDB Atlas cluster (there is a free tier)
  2. Create a namespace (database.collection)
  3. Create an Atlas Search index (Lucene under the hood)
  4. Create the client and instantiate the vectorstore (a minimal sketch follows)
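A minimal sketch of step 4, assuming the cluster, namespace, and search index from steps 1-3 already exist; the connection string, namespace, index name, and embedding model are placeholders, and exact parameter names may differ from the final merged signature:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch
from pymongo import MongoClient

# Placeholders: supply your own Atlas connection string, namespace, and index name.
MONGODB_ATLAS_URI = "mongodb+srv://<user>:<password>@<cluster>.mongodb.net"
NAMESPACE = "langchain_db.test_collection"  # "<database>.<collection>"
INDEX_NAME = "langchain-demo-index"         # Atlas Search index defined on the collection

client = MongoClient(MONGODB_ATLAS_URI)
db_name, collection_name = NAMESPACE.split(".")
collection = client[db_name][collection_name]

# Instantiate the vector store on top of the existing collection and index.
vectorstore = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=OpenAIEmbeddings(),
    index_name=INDEX_NAME,
)

docs = vectorstore.similarity_search("what is a vector store?", k=4)
```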

@dev2049 (Contributor) commented May 30, 2023

> @dev2049 The integration tests are passing. You need to: create a MongoDB Atlas cluster, create a namespace, create an Atlas Search index, and instantiate the vectorstore.

Ah, I just needed to fix my network access settings; it works now!

@P-E-B (Contributor, Author) commented May 30, 2023

@dev2049 Cool! Happy to jump on a call should you have any questions.

@dev2049 (Contributor) commented May 30, 2023

@P-E-B Would love to mention this feature on Twitter and am happy to tag you if you'd like. Is there a Twitter handle you'd want tagged?

@dev2049 added labels on May 30, 2023: 03 enhancement (Enhancement of existing functionality), lgtm (PR looks good; used to confirm that a PR is ready for merging), Ɑ: vector store (Related to vector store module)
@P-E-B (Contributor, Author) commented May 30, 2023

> @P-E-B Would love to mention this feature on Twitter and am happy to tag you if you'd like. Is there a Twitter handle you'd want tagged?

@dev2049 This should not be mentioned yet: we are going to announce it at .Local NYC (June 22, 2023). It would be awesome if you could amplify on June 23rd! Harrison knows about this.

@dev2049 merged commit a61b7f7 into langchain-ai:master on May 30, 2023
@dev2049 (Contributor) commented May 30, 2023

> @dev2049 This should not be mentioned yet: we are going to announce it at .Local NYC (June 22, 2023). It would be awesome if you could amplify on June 23rd! Harrison knows about this.

Roger that 👍

@P-E-B (Contributor, Author) commented May 30, 2023

> Roger that 👍

I don't have Twitter, but here is my LinkedIn profile if you need it: www.linkedin.com/in/paul-emile-brotons

@P-E-B deleted the adding_mongodb_atlas_vector_search branch on May 30, 2023 at 20:57
vowelparrot pushed a commit that referenced this pull request May 31, 2023
# Add MongoDBAtlasVectorSearch for the python library

Fixes #5337
---------

Co-authored-by: Dev 2049 <[email protected]>
@SimplyJuanjo (Contributor) commented:
I've installed the latest langchain release and tried the vectorstore for both storing and searching.

# We will use mongodb to store the embeddings
mongo_client = MongoClient(MONGO_CONNECTION_STRING)
print("Mongo client created", mongo_client)

vectorstore = MongoDBAtlasVectorSearch.from_texts(
    [t.page_content for t in texts],
    embeddings,
    client=mongo_client,
    namespace=NAMESPACE,
    index_name=INDEX_NAME,
    metadatas=metadatas,
)
print("Index created", vectorstore)

# retriever = VectorStoreRetriever(vectorstore=vectorstore)  # , search_kwargs={"filter": {"doc_id": doc_id}})  # , "include_metadata": True})
# docs = retriever.get_relevant_documents("patient's name?")
# model = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-4", temperature=0, request_timeout=500), chain_type="stuff", retriever=retriever)

answer = vectorstore.similarity_search("patient's name")

print(answer)

Storing is working:

Index created <langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch object at 0x7fb054d0a7c0>

But searching, via either "similarity_search" or "RetrievalQA", gives this error:

answer = vectorstore.similarity_search("patient's name")
  File "/usr/local/lib/python3.8/site-packages/langchain/vectorstores/mongodb_atlas.py", line 221, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/usr/local/lib/python3.8/site-packages/langchain/vectorstores/mongodb_atlas.py", line 185, in similarity_search_with_score
    cursor = self._collection.aggregate(pipeline)
  File "/usr/local/lib/python3.8/site-packages/pymongo/collection.py", line 2436, in aggregate
    return self._aggregate(
  File "/usr/local/lib/python3.8/site-packages/pymongo/_csot.py", line 105, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/pymongo/collection.py", line 2343, in _aggregate
    return self.__database.client._retryable_read(
  File "/usr/local/lib/python3.8/site-packages/pymongo/_csot.py", line 105, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1448, in _retryable_read
    return func(session, server, sock_info, read_pref)
  File "/usr/local/lib/python3.8/site-packages/pymongo/aggregation.py", line 142, in get_cursor
    result = sock_info.command(
  File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 767, in command
    return command(
  File "/usr/local/lib/python3.8/site-packages/pymongo/network.py", line 166, in command
    helpers._check_command_response(
  File "/usr/local/lib/python3.8/site-packages/pymongo/helpers.py", line 181, in _check_command_response
    raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: Unrecognized pipeline stage name: $search, full error: {'ok': 0.0, 'errmsg': 'Unrecognized pipeline stage name: $search', 'code': 40324, 'codeName': 'UnrecognizedCommand'}

Any advice @hwchase17 @dev2049 @P-E-B ??

@P-E-B (Contributor, Author) commented Jun 4, 2023

> I've installed the latest langchain release and tried the vectorstore for both storing and searching. [...] pymongo.errors.OperationFailure: Unrecognized pipeline stage name: $search [...] Any advice @hwchase17 @dev2049 @P-E-B ?

The vectorstore instantiation has been changed. Try:

vectorstore = MongoDBAtlasVectorSearch.from_texts(
    [t.page_content for t in texts],
    embeddings,
    collection=collection,  # THIS instead of client + namespace
    index_name=INDEX_NAME,
    metadatas=metadatas,
)

Also, did you create a cluster on MongoDB Atlas with an Atlas Search index? This only works on Atlas.
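For reference, the `collection` passed above is a plain pymongo collection object; a minimal sketch, reusing the MONGO_CONNECTION_STRING and NAMESPACE placeholders from the snippet being discussed:

```python
from pymongo import MongoClient

# Placeholders: use your own Atlas connection string and "<database>.<collection>" namespace.
mongo_client = MongoClient(MONGO_CONNECTION_STRING)
db_name, collection_name = NAMESPACE.split(".")
collection = mongo_client[db_name][collection_name]
```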

@SimplyJuanjo (Contributor) commented:
Ohhh @P-E-B, I think it was my bad, because I mixed up #5419 with this #5337,

though I was working with CosmosDB vCore for Azure.

So, two questions: can I then use Atlas Search for managing vector stores in Azure?

Or do you have any knowledge about the progress of Cosmos Vector Search development?

Thank you for your time and sorry for my confusion.

Going to build an Atlas Search index now and try it that way.

@P-E-B (Contributor, Author) commented Jun 5, 2023

@SimplyJuanjo MongoDB and CosmosDB are two different technologies under the hood, even if they share some core parts of the MongoDB API.
To be fully clear, this PR has nothing to do with CosmosDB. You can deploy a MongoDB Atlas cluster at mongodb.com, hosted on Azure VMs.

@SimplyJuanjo (Contributor) commented Jun 5, 2023

@P-E-B (Contributor, Author) commented Jun 5, 2023 via email

@SimplyJuanjo (Contributor) commented:
@P-E-B The same code you provided is working on the MongoDB Atlas cluster,

but I'm receiving this output:

Index created <langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch object at 0x7f537d3ddeb0>
[]

So the doc list is empty for the query, but if I go to the MongoDB Atlas dashboard and run the same query there against the Search index, the docs are retrieved correctly.

Also, after making some changes I ended up with this new error:

File "/home/create_index.py", line 273, in process_data
    raise e
  File "/home/create_index.py", line 260, in process_data
    answer = vectorstore.similarity_search("diagnosis", k=3)
  File "/usr/local/lib/python3.8/site-packages/langchain/vectorstores/mongodb_atlas.py", line 221, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/usr/local/lib/python3.8/site-packages/langchain/vectorstores/mongodb_atlas.py", line 185, in similarity_search_with_score
    cursor = self._collection.aggregate(pipeline)
  File "/usr/local/lib/python3.8/site-packages/pymongo/collection.py", line 2436, in aggregate
    return self._aggregate(
  File "/usr/local/lib/python3.8/site-packages/pymongo/_csot.py", line 105, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/pymongo/collection.py", line 2343, in _aggregate
    return self.__database.client._retryable_read(
  File "/usr/local/lib/python3.8/site-packages/pymongo/_csot.py", line 105, in csot_wrapper
    return func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1448, in _retryable_read
    return func(session, server, sock_info, read_pref)
  File "/usr/local/lib/python3.8/site-packages/pymongo/aggregation.py", line 142, in get_cursor
    result = sock_info.command(
  File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 767, in command
    return command(
  File "/usr/local/lib/python3.8/site-packages/pymongo/network.py", line 166, in command
    helpers._check_command_response(
  File "/usr/local/lib/python3.8/site-packages/pymongo/helpers.py", line 181, in _check_command_response
    raise OperationFailure(errmsg, code, response, max_wire_version)
pymongo.errors.OperationFailure: embedding is not indexed as kNN, full error: {'ok': 0.0, 'errmsg': 'embedding is not indexed as kNN', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1685974549, 114), 'signature': {'hash': b'D\x1f\xca\x8c\xa4\x9f\xd5\xf9\xc9\xa1\x99\x97{\xdc*gY\x0f)\xb8', 'keyId': 7184364024207769602}}, 'operationTime': Timestamp(1685974549, 114)}

"pymongo.errors.OperationFailure: embedding is not indexed as kNN"

I might be doing something wrong, because I had to manually create the Atlas Search index in the visual editor of the MongoDB Atlas dashboard.
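For reference, the "embedding is not indexed as kNN" error usually means the Atlas Search index does not map the embedding field as a knnVector. A sketch of an index definition that does; the field name, dimensions, and similarity function are assumptions and must match your embeddings:

```python
import json

# Assumptions: vectors are stored under "embedding", embeddings are 1536-dimensional
# (e.g. OpenAI ada-002), and cosine similarity is used.
index_definition = {
    "mappings": {
        "dynamic": True,
        "fields": {
            "embedding": {
                "type": "knnVector",
                "dimensions": 1536,
                "similarity": "cosine",
            }
        },
    }
}

# Paste the printed JSON into the Atlas Search index JSON editor for the collection.
print(json.dumps(index_definition, indent=2))
```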

@danielchalef mentioned this pull request Jun 5, 2023
@izzymsft (Contributor) commented Jun 5, 2023

> Ohhh @P-E-B, I think it was my bad, because I mixed up #5419 with this #5337 [...] Going to build an Atlas Search index now and try it that way.

@SimplyJuanjo The work on the Cosmos DB vector store integration is happening in #5419.

This is currently in progress. We will have an update in about two weeks. If you are looking for MongoDB Atlas on Azure, you can take a look at the link below for guidance and steps on how to deploy it and get started.

https://www.mongodb.com/mongodb-on-azure

@SimplyJuanjo (Contributor) commented:
Thanks for your comments @izzymsft!

I tried both Atlas and Cosmos yesterday and finally concluded that Cosmos might be interesting as a way to transition off of Pinecone's expensive offering, hahaha.

I've managed to integrate it manually thanks to another person in this repo: https://github.com/flo7up/relataly-public-python-tutorials/blob/master/07%20OpenAI/604%20Custom%20ChatGPT%20with%20Azure%20Cosmos%20DB%20Vector%20Database%20and%20Embeddings.ipynb

And I would like to contribute to help you integrate it (Cosmos vCore) with LangChain if possible. Maybe then it will be ready faster?

Best regards, Juanjo do Olmo

@izzymsft (Contributor) commented Jun 6, 2023

@SimplyJuanjo thank you for reaching out. We can sync offline for next steps.

Undertone0809 pushed a commit to Undertone0809/langchain that referenced this pull request Jun 19, 2023
# Add MongoDBAtlasVectorSearch for the python library

Fixes langchain-ai#5337
---------

Co-authored-by: Dev 2049 <[email protected]>
baskaryan added a commit that referenced this pull request Sep 28, 2023
Adds support for the `$vectorSearch` operator for
MongoDBAtlasVectorSearch, which was announced at .Local London
(September 26th, 2023). This change breaks compatibility with
the existing `$search` operator used by the original
integration (#5338) due to
incompatibilities in the Atlas search implementations.

---------

Co-authored-by: Bagatur <[email protected]>
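For context, a rough comparison of the old and new aggregation stages; the index name, field path, vector values, and numbers below are illustrative placeholders, not the library's exact output:

```python
query_vector = [0.01, -0.02, 0.03]  # embedding of the query text (placeholder)

# Original integration (#5338): Atlas Search knnBeta operator inside a $search stage.
knn_beta_stage = {
    "$search": {
        "index": "langchain-demo-index",
        "knnBeta": {"vector": query_vector, "path": "embedding", "k": 4},
    }
}

# Newer integration (langchain-ai#11139): dedicated $vectorSearch stage.
vector_search_stage = {
    "$vectorSearch": {
        "index": "langchain-demo-index",
        "path": "embedding",
        "queryVector": query_vector,
        "numCandidates": 40,
        "limit": 4,
    }
}
```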
ShorthillsAI added a commit to shorthills-ai/langchain that referenced this pull request Oct 3, 2023
* Support using async callback handlers with sync callback manager (langchain-ai#10945)

The current behaviour just calls the handler without awaiting the
coroutine, which results in exceptions/warnings, and obviously doesn't
actually execute whatever the callback handler does


* LangServe (langchain-ai#11046)

Adds LangServe package

* Integrate Runnables with Fast API creating Server and a RemoteRunnable
client
* Support multiple runnables for a given server
* Support sync/async/batch/abatch/stream/astream/astream_log on the
client side (using async implementations on server)
* Adds validation using annotations (relying on pydantic under the hood)
-- this still has some rough edges -- e.g., open api docs do NOT
generate correctly at the moment
* Uses pydantic v1 namespace

Known issues: type translation code doesn't handle a lot of types (e.g.,
TypedDicts)

---------

Co-authored-by: Bagatur <[email protected]>

* Add input/output schemas to runnables (langchain-ai#11063)

This adds `input_schema` and `output_schema` properties to all
runnables, which are Pydantic models for the input and output types
respectively. These are inferred from the structure of the Runnable as
much as possible, the only manual typing needed is
- optionally add type hints to lambdas (which get translated to
input/output schemas)
- optionally add type hint to RunnablePassthrough

These schemas can then be used to create JSON Schema descriptions of
input and output types, see the tests

- [x] Ensure no InputType and OutputType in our classes use abstract
base classes (replace with union of subclasses)
- [x] Implement in BaseChain and LLMChain
- [x] Implement in RunnableBranch
- [x] Implement in RunnableBinding, RunnableMap, RunnablePassthrough,
RunnableEach, RunnableRouter
- [x] Implement in LLM, Prompt, Chat Model, Output Parser, Retriever
- [x] Implement in RunnableLambda from function signature
- [x] Implement in Tool

<!-- Thank you for contributing to LangChain!

Replace this entire comment with:
  - **Description:** a description of the change, 
  - **Issue:** the issue # it fixes (if applicable),
  - **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.

See contribution guidelines for more information on how to write/run
tests, lint, etc:

https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
 -->

* Expose loads and dumps in load namespace

* Async support for OpenAIFunctionsAgentOutputParser (langchain-ai#11140)

* milvus collections (langchain-ai#11148)

Description: There was no information about Milvus collections in the
documentation, so I am adding that.
Maintainer: @eyurtsev

* Xata chat memory FIX (langchain-ai#11145)

- **Description:** Changed data type from `text` to `json` in xata for
improved performance. Also corrected the `additionalKwargs` key in the
`messages()` function to `additional_kwargs` to adhere to `BaseMessage`
requirements.
- **Issue:** The chat history's messages() call will return {} for
`additional_kwargs`, as the key name `additionalKwargs` is wrong.
  - **Dependencies:**  N/A
  - **Tag maintainer:** N/A
  - **Twitter handle:** N/A

My PR is passing linting and testing before submitting.

* Fixed Typo Error in Update get_started.mdx file by addressing a minor typographical error. (langchain-ai#11154)

Fixed Typo Error in Update get_started.mdx file by addressing a minor
typographical error.

This improvement enhances the readability and correctness of the
notebook, making it easier for users to understand and follow the
demonstration. The commit aims to maintain the quality and accuracy of
the content within the repository.
please review the change at your convenience.

@baskaryan , @hwaking

* Implement better reprs for Runnables

* x

* x

* x

* x

* Fix stop key of TextGen. (langchain-ai#11109)

The key of stopping strings used in text-generation-webui api is
[`stopping_strings`](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py#L51),
not `stop`.

* LangServe: Clean up init files (langchain-ai#11174)

Clean up init files

* mypy

* Lint

* Lint

* Expose lc_id as a classmethod (langchain-ai#11176)

* Expose LC id as a class method 
* User should not need to know that the last part of the id is the class
name

* Update Bedrock service name to "bedrock-runtime" and model identifiers (langchain-ai#11161)

- **Description:** Bedrock updated boto service name to
"bedrock-runtime" for the InvokeModel and InvokeModelWithResponseStream
APIs. This update also includes new model identifiers for Titan text,
embedding and Anthropic.

Co-authored-by: Mani Kumar Adari <[email protected]>

* LangServe: Add release workflow (langchain-ai#11178)

Add release workflow to langserve

* LangServe: Update langchain requirement for publishing (langchain-ai#11186)

Update langchain requirement for publishing

* temporarily skip embedding empty string test (langchain-ai#11187)

* Fix anthropic secret key when passed in via init (langchain-ai#11185)

Fixes anthropic secret key when passed via init

langchain-ai#11182

* add anthropic scheduled tests and unit tests (langchain-ai#11188)

* Rm additional file check for scheduled tests (langchain-ai#11192)

cc @obi1kenobi Causing issues with GHA creds
https://github.com/langchain-ai/langchain/actions/runs/6342674950/job/17228926776

* Add source metadata to OutlookMessageLoader (langchain-ai#11183)

Description: Add "source" metadata to OutlookMessageLoader

This pull request adds the "source" metadata to the OutlookMessageLoader
class in the load method. The "source" metadata is required when
indexing with RecordManager in order to sync the index documents with a
source.

Issue: None

Dependencies: None

Twitter handle: @ATelders

Co-authored-by: Arthur Telders <[email protected]>

* [OpenSearch] Add Self Query Retriever Support to OpenSearch (langchain-ai#11184)

### Description
Add Self Query Retriever Support to OpenSearch

### Maintainers
@rlancemartin, @eyurtsev, @navneet1v

### Twitter Handle
@OpenSearchProj

Signed-off-by: Naveen Tatikonda <[email protected]>

* [ElasticsearchStore] Improve migration text to ElasticsearchStore (langchain-ai#11158)

As we have been moving developers to the new
`ElasticsearchStore` implementation, we want to keep the
ElasticVectorSearch class available while developers transition
slowly to the new store.

To speed up this process, I updated the blurb giving them a better
recommendation of why they should use ElasticsearchStore.

* update docs nav (langchain-ai#11146)

* Add langserve version (langchain-ai#11195)

Add langserve version

* [Feat] Add optional client-side encryption to DynamoDB chat history memory (langchain-ai#11115)

**Description:** Added optional client-side encryption to the Amazon
DynamoDB chat history memory with an AWS KMS Key ID using the [AWS
Database Encryption SDK for
Python](https://docs.aws.amazon.com/database-encryption-sdk/latest/devguide/python.html)
**Issue:** langchain-ai#7886
**Dependencies:**
[dynamodb-encryption-sdk](https://pypi.org/project/dynamodb-encryption-sdk/)
**Tag maintainer:**  @hwchase17 
**Twitter handle:** [@jplock](https://twitter.com/jplock/)

---------

Co-authored-by: Bagatur <[email protected]>

* Shared Executor (langchain-ai#11028)

* LLMonitor Callback handler: fix bug (langchain-ai#11128)

Here is a small bug fix for the LLMonitor callback handler. I've also
added user identification capabilities.

* Add support for MongoDB Atlas $vectorSearch vector search (langchain-ai#11139)

Adds support for the `$vectorSearch` operator for
MongoDBAtlasVectorSearch, which was announced at .Local London
(September 26th, 2023). This change breaks compatibility with
the existing `$search` operator used by the original
integration (langchain-ai#5338) due to
incompatibilities in the Atlas search implementations.

---------

Co-authored-by: Bagatur <[email protected]>

* add from_existing_graph to neo4j vector (langchain-ai#11124)

This PR adds the option to create a Neo4jvector instance from existing
graph, which embeds existing text in the database and creates relevant
indices.

* Add `add_graph_documents` support for FalkorDBGraph  (langchain-ai#11122)

Adding `add_graph_documents` support for FalkorDBGraph and extending the
`Neo4JGraph` api so it can support `cypher.py`

* Fix eval prompt (langchain-ai#11087)

**Description:** fixes a common typo in some of the eval criteria.

* Expanded version range for networkx, fixed sample notebook (langchain-ai#11094)

## Description
Expanded the upper bound for `networkx` dependency to allow installation
of latest stable version. Tested the included sample notebook with
version 3.1, and all steps ran successfully.
---------

Co-authored-by: Bagatur <[email protected]>

* docs: Mendable Search Improvements (langchain-ai#11199)

Improvements to the Mendable UI, more accurate responses, and bug fixes.

* Change type annotations from LLMChain to Chain in MultiPromptChain (langchain-ai#11082)

- **Description:** The types of 'destination_chains' and 'default_chain'
in 'MultiPromptChain' were changed from 'LLMChain' to 'Chain', and
variables that overlapped with declarations in the parent class were removed.
- **Issue:** When a class that inherits only Chain and not LLMChain,
such as 'SequentialChain' or 'RetrievalQA', is passed as one of the
'destination_chains' or as the 'default_chain', a pydantic validation error is
raised.
- Code:
```
retrieval_chain = ConversationalRetrievalChain(
    retriever=doc_retriever,
    combine_docs_chain=combine_docs_chain,
    question_generator=question_gen_chain,
)

destination_chains = {
    'retrieval': retrieval_chain,
}

main_chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
```

✅ `make format`, `make lint` and `make test`

* fix: short-circuit black and mypy calls when no changes made (langchain-ai#11051)

Both black and mypy expect a list of files or directories as input.
As-is the Makefile computes a list files changed relative to the last
commit; these are passed to black and mypy in the `format_diff` and
`lint_diff` targets. This is done by way of the Makefile variable
`PYTHON_FILES`. This is to save time by skipping running mypy and black
over the whole source tree.

When no changes have been made, this variable is empty, so the call to
black (and mypy) lacks input files. The call exits with error causing
the Makefile target to error out with:

```bash
$ make format_diff
poetry run black
Usage: black [OPTIONS] SRC ...

One of 'SRC' or 'code' is required.
make: *** [format_diff] Error 1
```

This is unexpected and undesirable, as the naive caller (that's me! 😄 )
will think something else is wrong. This commit smooths over this by
short circuiting when `PYTHON_FILES` is empty.

* Callback integration for Trubrics (langchain-ai#11059)

After contributing to some examples in the
[langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook)
with @hinthornw, here is a PR that adds a callback handler to use
LangChain with [Trubrics](https://github.com/trubrics/trubrics-sdk).

* Support add_embeddings for opensearch (langchain-ai#11050)

- **Description:**
      -  Make running integration test for opensearch easy
- Provide a way to use different text for embedding: refer to langchain-ai#11002 for
more of the use case and design decision.
  - **Issue:** N/A
  - **Dependencies:** None other than the existing ones.

* chore: add support for TypeScript code splitting (langchain-ai#11160)


- **Description:** Adds typescript language to `TextSplitter`

---------

Co-authored-by: Jacob Lee <[email protected]>

* fix trubrics lint issue (langchain-ai#11202)

* SearchApi integration (langchain-ai#11023)

Based on the customers' requests for native langchain integration,
SearchApi is ready to invest in AI and LLM space, especially in
open-source development.

- This is our initial PR and later we want to improve it based on
customers' and langchain users' feedback. Most likely changes will
affect how the final results string is being built.
- We are creating similar native integration in Python and JavaScript.
- The next plan is to integrate into Java, Ruby, Go, and others.
- Feel free to assign @SebastjanPrachovskij as a main reviewer for any
SearchApi-related searches. We will be glad to help and support
langchain development.

* Synthetic Data generation (langchain-ai#9472)

---------

Co-authored-by: William Fu-Hinthorn <[email protected]>
Co-authored-by: Bagatur <[email protected]>

* LangServe: Relax requirements (langchain-ai#11198)

Relax requirements

* Add last_edited_time and created_time props to NotionDBLoader (langchain-ai#11020)

# Description

Adds logic for NotionDBLoader to correctly populate `last_edited_time`
and `created_time` fields from [page
properties](https://developers.notion.com/reference/page#property-value-object).

There are no relevant tests for this code to be updated.

---------

Co-authored-by: Bagatur <[email protected]>

* `LlamaCppEmbeddings`: adds `verbose` parameter, similar to `llms.LlamaCpp` class (langchain-ai#11038)

## Description

As of now, when instantiating and during inference, `LlamaCppEmbeddings`
outputs (a lot of) verbose logging when used from the LangChain binding, which
is a bit annoying when computing the embeddings of long documents, for
instance.

This PR adds `verbose` for `LlamaCppEmbeddings` objects so that the model's
verbose output is **not** printed to `stderr`. It is natively
supported by `llama-cpp-python` and passed directly to the library, so the
PR is very small.

The value of `verbose` is `True` by default, following the way it is
defined in [`LlamaCpp` (`llamacpp.py`
#L136-L137)](https://github.com/langchain-ai/langchain/blob/c87e9fb2ce0ae617e3b2edde52421c80adef54cc/libs/langchain/langchain/llms/llamacpp.py#L136-L137)

## Issue

_No issue linked_

## Dependencies

_No additional dependency needed_

## To see it in action

```python
from langchain.embeddings import LlamaCppEmbeddings

MODEL_PATH = "<path_to_gguf_file>"

if __name__ == "__main__":
    llm_embeddings = LlamaCppEmbeddings(
        model_path=MODEL_PATH,
        n_gpu_layers=1,
        n_batch=512,
        n_ctx=2048,
        f16_kv=True,
        verbose=False,
    )
```

Co-authored-by: Bagatur <[email protected]>

* Support new versions of tiktoken that work with langchain (tag "^0.3.2" => ">=0.3.2,<0.6.0" and python "^3.9" => ">=3.9") (langchain-ai#11006)

- **Description:**
be able to use langchain with tiktoken versions other than 0.3.3, i.e.
0.5.1
  - **Issue:**
cannot install the conda-forge version since it applies all optional
dependencies:
       conda-forge/langchain-feedstock#85
replace "^0.3.2" by ">=0.3.2,<0.6.0" and "^3.9" by python=">=3.9"
      Tested with python 3.10, langchain=0.0.288 and tiktoken==0.5.0

---------

Co-authored-by: Bagatur <[email protected]>

* Typo fix to MathpixPDFLoader - changed processed_file_format default … (langchain-ai#10960)

…from mmd to md. langchain-ai#7282

- **Description:** minor fix to a breaking typo - MathPixPDFLoader
processed_file_format is "mmd" by default, doesn't work, changing to
"md" fixes the issue,
- **Issue:** 7282
(langchain-ai#7282),
- **Dependencies:** none,
- **Tag maintainer:** @hwchase17,
- **Twitter handle:** none

Co-authored-by: jare0530 <[email protected]>

* Fix web-base loader (langchain-ai#11135)

Fix initialization

langchain-ai#11095

* Updated `LocalAIEmbeddings` docstring to better explain why `openai` (langchain-ai#10946)

Fixes my misgivings in
langchain-ai#10912

* Add support for project metadata in run_on_dataset (langchain-ai#11200)

* Add from_embeddings for opensearch (langchain-ai#10957)

* Skip for py3.8

* Skip in py3.8

* skip more

* Even more

* Enable creating Tools from any Runnable

* Fix invocation

* Lint

* Lint

* Add RunnableGenerator

* Add tests

* Lint

* Add a streaming json parser

* Implement str one

* WIP Add tests

* Implement diff

* Implement diff

* Backwards compat

* Clean warnings: replace type with isinstance and fix syntax (langchain-ai#11219)

Clean warnings: replace type with `isinstance` and fix notebook
syntax

* Add async tests and comments

* Update fireworks features (langchain-ai#11205)

Description
* Update fireworks feature on web page

Issue - Not applicable
Dependencies - None
Tag maintainer - @baskaryan

* mongodb doc loader init (langchain-ai#10645)

- **Description:** A Document Loader for MongoDB
  - **Issue:** n/a
  - **Dependencies:** Motor, the async driver for MongoDB
  - **Tag maintainer:** n/a
  - **Twitter handle:** pigpenblue

Note that an initial mongodb document loader was created 4 months ago,
but the [PR](langchain-ai#4285) was
never pulled in. @leo-gan had commented on that PR, but given it is
extremely far behind the master branch and a ton has changed in
Langchain since then (including repo name and structure), I rewrote the
branch and issued a new PR with the expectation that the old one can be
closed.

Please reference that old PR for comments/context, but it can be closed
in favor of this one. Thanks!

---------

Co-authored-by: Bagatur <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>

* Suppress warnings in interactive env that stem from tab completion (langchain-ai#11190)

Suppress warnings in interactive environments that can arise from users 
relying on tab completion (without even using deprecated modules).

jupyter seems to filter warnings by default (at least for me), but
ipython surfaces them all

* OpenAI gpt-3.5-turbo-instruct cost information (langchain-ai#11218)

Added pricing info for `gpt-3.5-turbo-instruct` for OpenAI and Azure
OpenAI.

Co-authored-by: Attila Tőkés <[email protected]>

* Fix typo in gradient.ipynb (langchain-ai#11206)

Enviroment -> Environment


* Make test deterministic

* bump 305 (langchain-ai#11224)

* Using langchain input types (langchain-ai#11204)

Using langchain input type

* Make tests stricter, remove old code, fix up pydantic import when using v2 (langchain-ai#11231)

* Combine with existing json output parsers

* Lint

* Keep exceptions when not in streaming mode

* Update json.py

Co-authored-by: Eugene Yurtsev <[email protected]>

* Update json.py

Co-authored-by: Eugene Yurtsev <[email protected]>

* Lint

* Remove flawed test

- It is not possible to access properties on classes, only on instances, therefore this test is not something we can implement

* Implement RunnablePassthrough.assign(...) (langchain-ai#11222)

Passes through dict input and assigns additional keys


* Add type to message chunks (langchain-ai#11232)

* Ignore aadd (langchain-ai#11235)

* fix code injection vuln (langchain-ai#11233)

- **Description:** Fix a code injection vuln by adding one more keyword
into the filtering list
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** 
  - **Twitter handle:**

Co-authored-by: Eugene Yurtsev <[email protected]>

* Bump deps in langserve (langchain-ai#11234)

Bump deps in langserve lockfile

* Update DeepSparse LLM (langchain-ai#11236)

**Description:** Adds streaming and many more sampling parameters to the
DeepSparse interface

---------

Co-authored-by: Harrison Chase <[email protected]>

* docs: `integrations/memory` consistency (langchain-ai#10255)

- updated titles and descriptions of the `integrations/memory` notebooks
into consistent and laconic format;
- removed
`docs/extras/integrations/memory/motorhead_memory_managed.ipynb` file as
a duplicate of the
`docs/extras/integrations/memory/motorhead_memory.ipynb`;
- added `integrations/providers` Integration Cards for `dynamodb`,
`motorhead`.
- updated `integrations/providers/redis.mdx` with links
- renamed several notebooks; updated `vercel.json` to reroute new names.

* docs: `document_transformers` consistency (langchain-ai#10467)

- Updated `document_transformers` examples: titles, descriptions, links
- Added `integrations/providers` for missed document_transformers

* docs: updated `YouTube` and `tutorial` video links (langchain-ai#10897)

updated `YouTube` and `tutorial` videos with new links.
Removed a couple of duplicates.
Reordered several links by view counts.
Some formatting: emphasized the names of products.

* minor fix: remove redundant code from OpenAIFunctionsAgent (langchain-ai#11245)

* rename repo namespace to langchain-ai (langchain-ai#11259)

### Description
renamed several repository links from `hwchase17` to `langchain-ai`.

### Why
I discovered that the README file in the devcontainer contains an old
repository name, so I took the opportunity to rename the old repository
name in all files within the repository, excluding those that do not
require changes.

### Dependencies
none

### Tag maintainer
@baskaryan

### Twitter handle
[kzk_maeda](https://twitter.com/kzk_maeda)

* Fix typo in docstring (langchain-ai#11256)

Description : Remove meaningless 's' in docstring

* Create new RunnableSerializable class in preparation for configurable runnables

- Also move RunnableBranch to its own file

* Lint

* Lint

* Lint

* Lint

* Move RunnableWithFallbacks to its own file

* Lint

* Lint

* Lint

* Update quickstart.mdx to add backtick after `ChatMessages`  (langchain-ai#11241)

While going through the documentation I found this small issue and
wanted to contribute!


* Remove extra spaces (langchain-ai#11283)

### Description
When I was reading the document, I found that some examples had extra
spaces and violated "Unexpected spaces around keyword / parameter equals
(E251)" in pep8. I removed these extra spaces.
  
### Tag maintainer
@eyurtsev 
### Twitter handle
[billvsme](https://twitter.com/billvsme)

* Add base docker image and ci script for building and pushing (langchain-ai#10927)

* bump 306 (langchain-ai#11289)

* Small changes to runnable docs (langchain-ai#11293)


* Add Google GitHub Action creds file to gitignore. (langchain-ai#11296)

Should resolve the issue here:
https://github.com/langchain-ai/langchain/actions/runs/6342767671/job/17229204508#step:7:36

After this merges, we can revert
langchain-ai#11192

* Add pending deprecation warning (langchain-ai#11133)

This PR uses 2 dedicated LangChain warnings types for deprecations
(mirroring python's built in deprecation and pending deprecation
warnings).

These deprecation types are unsilenced during initialization in
langchain, achieving the same default behavior that we have with our
current warnings approach. However, because these warnings have a
dedicated type, users will be able to silence them selectively (I think
this is strictly better than our current handling of warnings).

The PR adds a deprecation warning to llm symbolic math.

---------

Co-authored-by: Predrag Gruevski <[email protected]>

* Make numexpr optional (langchain-ai#11049)

Co-authored-by: Eugene Yurtsev <[email protected]>

* Bump min version of numexpr (langchain-ai#11302)

Bump min version

* Bedrock scheduled tests (langchain-ai#11194)

* Fix closing bracket in length-based selector snippet (langchain-ai#11294)

**Description:**

Fix a forgotten closing bracket in the length-based selector snippet

Co-authored-by: Eugene Yurtsev <[email protected]>

* Fix line break in docs imports (langchain-ai#11270)

It is just a straightforward docs fix.

* add LLMBashChain to experimental (langchain-ai#11305)

Add LLMBashChain to experimental

* Add .configurable_fields() and .configurable_alternatives() to expose fields of a Runnable to be configured at runtime (langchain-ai#11282)

* Upgrade `langchain` dependency versions to resolve dependabot alerts. (langchain-ai#11307)

* Add scoring chain (langchain-ai#11123)


* Make Google PaLM classes serialisable (langchain-ai#11121)

Similarly to Vertex classes, PaLM classes weren't marked as
serialisable. Should be working fine with LangSmith.

---------

Co-authored-by: Erick Friis <[email protected]>

* Mark Vertex AI classes as serialisable (langchain-ai#10484)


---------

Co-authored-by: Erick Friis <[email protected]>

* Adds Tavily Search API retriever (langchain-ai#11314)

@baskaryan @efriis

* Update clarifai.mdx

---------

Signed-off-by: Naveen Tatikonda <[email protected]>
Co-authored-by: Nuno Campos <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
Co-authored-by: Bagatur <[email protected]>
Co-authored-by: William FH <[email protected]>
Co-authored-by: Apurv Agarwal <[email protected]>
Co-authored-by: Nan LI <[email protected]>
Co-authored-by: Nuno Campos <[email protected]>
Co-authored-by: Akio Nishimura <[email protected]>
Co-authored-by: mani2348 <[email protected]>
Co-authored-by: Mani Kumar Adari <[email protected]>
Co-authored-by: Arthur Telders <[email protected]>
Co-authored-by: Arthur Telders <[email protected]>
Co-authored-by: Naveen Tatikonda <[email protected]>
Co-authored-by: Joseph McElroy <[email protected]>
Co-authored-by: Justin Plock <[email protected]>
Co-authored-by: Bagatur <[email protected]>
Co-authored-by: Hugues <[email protected]>
Co-authored-by: Noah Stapp <[email protected]>
Co-authored-by: Tomaz Bratanic <[email protected]>
Co-authored-by: Guy Korland <[email protected]>
Co-authored-by: Piotr Mardziel <[email protected]>
Co-authored-by: Piyush Jain <[email protected]>
Co-authored-by: Nicolas <[email protected]>
Co-authored-by: Michael Kim <[email protected]>
Co-authored-by: Michael Landis <[email protected]>
Co-authored-by: Jeff Kayne <[email protected]>
Co-authored-by: Kenneth Choe <[email protected]>
Co-authored-by: Fynn Flügge <[email protected]>
Co-authored-by: Jacob Lee <[email protected]>
Co-authored-by: Donatas Remeika <[email protected]>
Co-authored-by: PaperMoose <[email protected]>
Co-authored-by: Noah Czelusta <[email protected]>
Co-authored-by: Clément Sicard <[email protected]>
Co-authored-by: Dr. Fabien Tarrade <[email protected]>
Co-authored-by: jreinjr <[email protected]>
Co-authored-by: jare0530 <[email protected]>
Co-authored-by: James Braza <[email protected]>
Co-authored-by: Cynthia Yang <[email protected]>
Co-authored-by: Jon Saginaw <[email protected]>
Co-authored-by: Attila Tőkés <[email protected]>
Co-authored-by: Attila Tőkés <[email protected]>
Co-authored-by: Ikko Eltociear Ashimine <[email protected]>
Co-authored-by: Haozhe <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>
Co-authored-by: Leonid Ganeline <[email protected]>
Co-authored-by: Dayuan Jiang <[email protected]>
Co-authored-by: Kazuki Maeda <[email protected]>
Co-authored-by: Yeonji-Lim <[email protected]>
Co-authored-by: James Odeyale <[email protected]>
Co-authored-by: zhengkai <[email protected]>
Co-authored-by: Predrag Gruevski <[email protected]>
Co-authored-by: Oleg Sinavski <[email protected]>
Co-authored-by: João Carabetta <[email protected]>
Co-authored-by: CG80499 <[email protected]>
Co-authored-by: David Duong <[email protected]>
Co-authored-by: Erick Friis <[email protected]>