
For New Contributors: Use SecretStr for api_keys #12165

Closed
5 of 40 tasks
eyurtsev opened this issue Oct 23, 2023 · 56 comments · Fixed by #14309, #20257, #20982 or #20986
Comments

@eyurtsev
Collaborator

eyurtsev commented Oct 23, 2023

Updated: 2023-12-06

Hello everyone! Thank you all for your contributions! We've made a lot of progress with SecretStr in the codebase.

First-time contributors -- hope you had fun learning how to work in the codebase and thanks for putting in the time. All contributors -- thanks for all your efforts in improving LangChain.

We'll create a new first time issue in a few months.


Hello LangChain community,

We're always happy to see more folks getting involved in contributing to the LangChain codebase.

This is a good first issue if you want to learn more about how to set up
for development in the LangChain codebase.

Goal

Your contribution will make it safer to print out a LangChain object
without having any secrets included in raw format in the string representation.

Set up for development

Prior to making any changes in the code:

https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md

Make sure you're able to test, format, and lint from langchain/libs/langchain:

make test
make format
make lint

Should you accept

Should you accept this challenge, please claim one (and only one) of the modules from the list
below as the one you will be working on, and respond to this issue.

Once you've made the required code changes, open a PR and link to this issue.

Acceptance Criteria

  • invoking str or repr on the object does not show the secret key

Update the integration tests for the code to include tests that:

  • confirms the object can be initialized with an API key provided via the initializer
  • confirms the object can be initialized with an API key provided via an env variable

Confirm that it works:

  • either re-run the notebook for the given object or else add an appropriate test
    that confirms that the actual secret is used appropriately (i.e., via get_secret_value())

If your code does not use get_secret_value() somewhere, then it probably contains a bug!
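
As an illustration of the pattern (the class and field names below are hypothetical, not an actual LangChain module): declare the key as a pydantic SecretStr so str/repr mask it, and call get_secret_value() only where the raw key is actually needed.

```python
# Hedged sketch -- hypothetical module, not actual LangChain code.
import os
from typing import Optional

from pydantic import BaseModel, SecretStr


class ExampleLLM(BaseModel):
    """Hypothetical integration; the API key is masked in str()/repr()."""

    example_api_key: SecretStr


def build_llm(api_key: Optional[str] = None) -> ExampleLLM:
    # Accept the key via the initializer, falling back to an env variable.
    return ExampleLLM(example_api_key=api_key or os.environ["EXAMPLE_API_KEY"])


llm = ExampleLLM(example_api_key="sk-raw-secret")
assert "sk-raw-secret" not in repr(llm)  # rendered as SecretStr('**********')
# The raw value is recovered explicitly only where the request is made:
assert llm.example_api_key.get_secret_value() == "sk-raw-secret"
```

Printing the object shows the masked placeholder instead of the key, which is exactly what the acceptance criteria ask to verify.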

Modules

  • langchain/chat_models/anyscale.py @aidoskanapyanov
  • langchain/chat_models/azure_openai.py @onesolpark
  • langchain/chat_models/azureml_endpoint.py @fyasla
  • langchain/chat_models/baichuan.py
  • langchain/chat_models/everlyai.py @sfreisthler
  • langchain/chat_models/fireworks.py @nepalprabin
  • langchain/chat_models/google_palm.py @faisalt14
  • langchain/chat_models/javelin_ai_gateway.py
  • langchain/chat_models/jinachat.py
  • langchain/chat_models/konko.py
  • langchain/chat_models/litellm.py
  • langchain/chat_models/openai.py @AnasKhan0607
  • langchain/chat_models/tongyi.py
  • langchain/llms/ai21.py
  • langchain/llms/aleph_alpha.py @slangenbach
  • langchain/llms/anthropic.py
  • langchain/llms/anyscale.py @aidoskanapyanov
  • langchain/llms/arcee.py
  • langchain/llms/azureml_endpoint.py
  • langchain/llms/bananadev.py
  • langchain/llms/cerebriumai.py
  • langchain/llms/cohere.py @arunsathiya
  • langchain/llms/edenai.py @kristinspenc
  • langchain/llms/fireworks.py
  • langchain/llms/forefrontai.py
  • langchain/llms/google_palm.py @Harshil-Patel28
  • langchain/llms/gooseai.py
  • langchain/llms/javelin_ai_gateway.py
  • langchain/llms/minimax.py
  • langchain/llms/nlpcloud.py
  • langchain/llms/openai.py @HassanA01
  • langchain/llms/petals.py @akshatvishu
  • langchain/llms/pipelineai.py
  • langchain/llms/predibase.py
  • langchain/llms/stochasticai.py
  • langchain/llms/symblai_nebula.py @praveenv
  • langchain/llms/together.py
  • langchain/llms/tongyi.py
  • langchain/llms/writer.py @ommirzaei
  • langchain/llms/yandex.py

Motivation

Prevent secrets from being printed out when printing the given langchain object.

Your contribution

Please sign up by responding to this issue and including the name of the module.

@eyurtsev eyurtsev added the good first issue Good for newcomers label Oct 23, 2023
@eyurtsev eyurtsev changed the title New Contributors For New Contributors: Use SecretStr for api_keys Oct 23, 2023
@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:improvement Medium size change to existing code to handle new use-cases labels Oct 23, 2023
@HassanA01
Contributor

Hey @eyurtsev! I am working with a group of 4 and would like to tackle this issue for a project in my course. Do you think completing these modules would be feasible in a month?

@eyurtsev
Collaborator Author

@HassanA01 yes definitely. But the work is very similar across all the modules -- I would suggest selecting a single module per individual; each should be just a few lines of code. This is a good entry point to set up for development with LangChain and learn a little bit about the code structure.

@HassanA01
Contributor

HassanA01 commented Oct 23, 2023

@eyurtsev Sounds good! Also, could you recommend an issue that might be a little more complex but doesn't require too much knowledge of LangChain's internals, which we could also tackle and feasibly complete in 2-3 weeks?

@eyurtsev
Collaborator Author

Not off the top of my head, but I'll think on it -- in the meantime, feel free to scan through the list of issues or feature requests to see if something stands out.

@eyurtsev eyurtsev removed 🤖:improvement Medium size change to existing code to handle new use-cases Ɑ: models Related to LLMs or chat model modules labels Oct 23, 2023
@faisalt14
Contributor

Hey, I’m also part of @HassanA01’s group and am planning to work on the langchain/chat_models/google_palm.py module. 😁

@HassanA01
Contributor

No worries! My group and I will get started to tackle this issue. I'll work on the langchain/llms/openai.py module!

@HassanA01
Contributor

@eyurtsev Is it fine if as a group of 5, we make one PR to the upstream main with 5 module fixes (1 per team member)?

@AnasKhan0607
Contributor

Another member of the team 🫡. Will be tackling the langchain/chat_models/openai.py module.

@eyurtsev
Collaborator Author

@HassanA01 totally fine -- claim the modules that you want :)

@Harshil-Patel28
Contributor

Hey @eyurtsev. I'm also a member of the team. I'll be working on the langchain/llms/google_palm.py module 😊

@arunsathiya
Contributor

arunsathiya commented Oct 24, 2023

I have taken langchain/llms/cohere.py.

@eyurtsev eyurtsev pinned this issue Oct 24, 2023
@kristinspenc
Contributor

Hey, I’m also part of @HassanA01’s group and am planning to work on the langchain/llms/edenai.py module.

@aidoskanapyanov
Contributor

Hi @eyurtsev! Thanks for creating this "good first issue" 👍.
Can you please assign me on langchain/chat_models/anyscale.py and langchain/llms/anyscale.py?

@onesolpark
Contributor

Hey @eyurtsev thanks for setting this up.
Can you assign me to langchain/chat_models/azure_openai.py.
Happy to get involved in contributing to the LangChain codebase.

@aidoskanapyanov
Contributor

@eyurtsev I'm having an issue with installing the dependencies with poetry. It automatically removes some of the libraries. Then I end up with a `ModuleNotFoundError: No module named 'rapidfuzz'` error when trying to run tests or anything with poetry. It's related to issue #12237.

For anyone facing this issue, as a workaround for now I installed the missing packages like so:

python -m pip install rapidfuzz
python -m pip install filelock
python -m pip install msgpack
python -m pip install build

@slangenbach
Contributor

slangenbach commented Oct 26, 2023

@eyurtsev, thanks for making it easy for future contributors to get going.
You can sign me up for langchain/llms/aleph_alpha.py

@nepalprabin
Contributor

Been using Langchain for a while. Thanks @eyurtsev for setting this up. Hope I can return something useful to the community.
You can sign me up for langchain/chat_models/fireworks.py.

@gautamanirudh
Contributor

Hey @eyurtsev,
You can assign me to langchain/llms/ai21.py.

eyurtsev added a commit that referenced this issue Oct 27, 2023
- **Description:** Add masking of API Key for Aleph Alpha LLM when
printed.
- **Issue**: #12165
- **Dependencies:** None
- **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Eugene Yurtsev <[email protected]>
aymeric-roucher pushed a commit to andrewrreed/langchain that referenced this issue Dec 11, 2023
- **Description:** Mask API key for Arcee LLM and its associated unit
tests
  - **Issue:** langchain-ai#12165
  - **Dependencies:** N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** `eekaiboon`

---------

Co-authored-by: Bagatur <[email protected]>
aymeric-roucher pushed a commit to andrewrreed/langchain that referenced this issue Dec 11, 2023
- **Description:** Added masking for the API key for Minimax LLM + tests
inspired by langchain-ai#12418.
- **Issue:** fixes langchain-ai#12165
- **Dependencies:** this fix is dependent on Minimax instantiation fix
which is introduced in
langchain-ai#13439, so merge this one
after.
  - **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Harrison Chase <[email protected]>
aymeric-roucher referenced this issue in andrewrreed/langchain Dec 11, 2023
Description: This PR masked baidu qianfan - Chat_Models API Key and
added unit tests.
Issue: langchain-ai#12165.
Tag maintainer: @eyurtsev

---------

Co-authored-by: xiayi <[email protected]>
aymeric-roucher pushed a commit to andrewrreed/langchain that referenced this issue Dec 11, 2023
- **Description:** Masking API key for CerebriumAI LLM to protect user
secrets.
 - **Issue:** langchain-ai#12165 
 - **Dependencies:** None
 - **Tag maintainer:** @eyurtsev

---------

Signed-off-by: Yuchen Liang <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>
hwchase17 pushed a commit that referenced this issue Jan 1, 2024
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this issue Feb 2, 2024
- **Description:** Add masking of API Key for Aleph Alpha LLM when
printed.
- **Issue**: langchain-ai#12165
- **Dependencies:** None
- **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Eugene Yurtsev <[email protected]>
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this issue Feb 2, 2024
- **Description:** Added masking of the API Key for AI21 LLM when
printed and improved the docstring for AI21 LLM.
- Updated the AI21 LLM to utilize SecretStr from pydantic to securely
manage API key.
- Made improvements in the docstring of AI21 LLM. It now mentions that
the API key can also be passed as a named parameter to the constructor.
    - Added unit tests.
  - **Issue:** langchain-ai#12165 
  - **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Anirudh Gautam <[email protected]>
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this issue Feb 2, 2024
Description: Add masking of API Key for GooseAI LLM when printed.
Issue: langchain-ai#12165
Dependencies: None
Tag maintainer: @eyurtsev

---------

Co-authored-by: Samad Koita <>
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this issue Feb 2, 2024
- **Description:** This pull request removes secrets present in raw
format,
- **Issue:** Fireworks api key was exposed when printing out the
langchain object
[langchain-ai#12165](langchain-ai#12165)
 - **Maintainer:** @eyurtsev

---------

Co-authored-by: Bagatur <[email protected]>
arunraja1 added a commit to skypointcloud/skypoint-langchain that referenced this issue Feb 15, 2024
* feat: Increased compatibility with new and old versions for dalle (#14222)

- **Description:** Increased compatibility with all openai versions for
dalle.

This PR adds support for openai versions 0.x through 1.3.

* Adds "NIN" metadata filter for pgvector to allow checking for set absence (#14205)

This PR adds support for metadata filters of the form:

`{"filter": {"key": { "NIN" : ["list", "of", "values"]}}}`

"IN" is already supported, so this is a quick & related update to add
"NIN"
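
The intended semantics can be sketched in plain Python (an illustration of what the filter means, not the pgvector implementation):

```python
def matches_nin(metadata: dict, key: str, excluded: list) -> bool:
    """True when the metadata value for `key` is absent from `excluded` ("NIN").

    A missing key also passes, i.e. the filter checks for set absence.
    """
    return metadata.get(key) not in excluded


docs = [{"lang": "python"}, {"lang": "rust"}, {"lang": "go"}]
kept = [d for d in docs if matches_nin(d, "lang", ["rust", "go"])]
# kept == [{"lang": "python"}]
```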

* Fixed a typo in smart_llm prompt (#13052)

<!-- Thank you for contributing to LangChain!

Replace this entire comment with:
  - **Description:** a description of the change, 
  - **Issue:** the issue # it fixes (if applicable),
  - **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.

See contribution guidelines for more information on how to write/run
tests, lint, etc:

https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
 -->

* Feature: GitLab url from ENV (#14221)

- **Description:** add gitlab url from env
- **Issue:** no issue
- **Dependencies:** no

---------

Co-authored-by: Erick Friis <[email protected]>

* info sql tool remove whitespaces in table names (#13712)

Remove whitespace from the input of the ListSQLDatabaseTool for better
support. For example, the input "table1, table2, table3" would throw an exception
without the change, although it's a valid input.

---------

Co-authored-by: Harrison Chase <[email protected]>

* Updated integration with Clarifai python SDK functions (#13671)

Description :

Updated the functions with new Clarifai python SDK.
Enabled initialisation of Clarifai class with model URL.
Updated docs with new functions examples.

* OpenAIEmbeddings: retry_min_seconds/retry_max_seconds parameters (#13138)

- **Description:** new parameters in the OpenAIEmbeddings() constructor
(retry_min_seconds and retry_max_seconds) that let the user set the
former min_seconds and max_seconds values that were hidden in
_create_retry_decorator() and _async_retry_decorator()
  - **Issue:** #9298, #12986
  - **Dependencies:** none
  - **Tag maintainer:** @hwchase17
  - **Twitter handle:** @adumont

make format ✅
make lint ✅
make test ✅

Co-authored-by: Harrison Chase <[email protected]>

* Amadeus toolkit minor update (#13002)

- update `Amadeus` toolkit with ability to switch Amadeus environments 
- update minor code explanations

---------

Co-authored-by: MinjiK <[email protected]>

* Demonstrate use of get_buffer_string (#13013)

**Description**

The docs for creating a RAG chain with Memory [currently use a manual
lambda](https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents)
to format chat history messages. [There exists a helper method within
the
codebase](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/messages.py#L14C15-L14C15)
to perform this task so I've updated the documentation to demonstrate
its usage

Also worth noting that the current documented method of using the
included `_format_chat_history ` function actually results in an error:

```
TypeError: 'HumanMessage' object is not subscriptable
```

---------

Co-authored-by: Harrison Chase <[email protected]>

* Fix typo in lcel example for rerank in doc (#14336)

fix typo in lcel example for rerank in doc

* Exclude `max_tokens` from request if it's None (#14334)



We found that a request with `max_tokens=None` results in the following error
in Anthropic:

```
HTTPError: 400 Client Error: Bad Request for url: https://oregon.staging.cloud.databricks.com/serving-endpoints/corey-anthropic/invocations. 
Response text: {"error_code":"INVALID_PARAMETER_VALUE","message":"INVALID_PARAMETER_VALUE: max_tokens was not of type Integer: null"}
```

This PR excludes `max_tokens` if it's None.
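
A minimal sketch of that kind of fix (hypothetical parameter dict, not the actual wrapper code): drop None-valued entries before building the request payload.

```python
# Hypothetical request parameters; None means the caller did not set a value.
params = {"prompt": "Hello", "max_tokens": None, "temperature": 0.7}

# Keep only parameters that were explicitly set, so the backend never
# receives "max_tokens": null in the request body.
payload = {k: v for k, v in params.items() if v is not None}
# payload == {"prompt": "Hello", "temperature": 0.7}
```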

* feat(add): LLM integration of Cloudflare Workers AI (#14322)

Add [Text Generation by Cloudflare Workers
AI](https://developers.cloudflare.com/workers-ai/models/text-generation/).
It's a new LLM integration.

- Dependencies: N/A

* Mask API key for baidu qianfan (#14281)

Description: This PR masked baidu qianfan - Chat_Models API Key and
added unit tests.
Issue: langchain-ai#12165.
Tag maintainer: @eyurtsev

---------

Co-authored-by: xiayi <[email protected]>

* feat: mask api key for cerebriumai llm (#14272)

- **Description:** Masking API key for CerebriumAI LLM to protect user
secrets.
 - **Issue:** #12165 
 - **Dependencies:** None
 - **Tag maintainer:** @eyurtsev

---------

Signed-off-by: Yuchen Liang <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>

* Qdrant metadata payload keys (#13001)

- **Description:** In Qdrant, allows passing a list of keys as the
content_payload_key to retrieve multiple fields (the generated document
will contain the dictionary {field: value} as a string),
- **Issue:** Previously we were able to retrieve only one field from the
vector database when making a search
  - **Dependencies:** 
  - **Tag maintainer:** 
  - **Twitter handle:** @jb_dlb

---------

Co-authored-by: Jean Baptiste De La Broise <[email protected]>

* docs[patch]: fix ipynb links (#14325)

Keeping it simple for now.

Still iterating on our docs build in pursuit of making everything mdxv2
compatible for docusaurus 3, and the fewer custom scripts we're reliant
on through that, the less likely the docs will break again.

Other things to consider in future:

Quarto rewriting in ipynbs:
https://quarto.org/docs/extensions/nbfilter.html (but this won't do
md/mdx files)

Docusaurus plugins for rewriting these paths

* Update doc-string in RunnableWithMessageHistory (#14262)

Update doc-string in RunnableWithMessageHistory

* docs[patch]: Fix broken link 'tip' in docs (#14349)

* core[patch], langchain[patch]: ByteStore (#14312)

* Fix multi vector retriever subclassing (#14350)


Fixes #14342

@eyurtsev @baskaryan

---------

Co-authored-by: Erick Friis <[email protected]>

* infra: ci matrix (#14306)

* langchain[patch]: import nits (#14354)

import from core instead of langchain.schema

* Include run_id (#14331)

in the test run outputs

* core[patch]: message history error typo (#14361)

* [core/minor] Runnables: Implement a context api (#14046)


---------

Co-authored-by: Brace Sproul <[email protected]>

* core[patch]: Release 0.0.11 (#14367)

* langchain[patch]: Release 0.0.347 (#14368)

* langchain[patch]: fix ChatVertexAI streaming (#14369)

* langchain[patch]: Rollback multiple keys in Qdrant (#14390)

This reverts commit 38813d7090294c0c96d4963a2a230db4fef5e37e. This is a
temporary fix, as I don't see a clear way on how to use multiple keys
with `Qdrant.from_texts`.

Context: #14378

* API Reference building script update (#13587)

Namespaces like `langchain.agents.format_scratchpad` were clogging the
API Reference sidebar.
This change removes those 3-level namespaces from the sidebar (this issue
was discussed with @efriis )

---------

Co-authored-by: Erick Friis <[email protected]>

* core[patch], langchain[patch]: fix required deps (#14373)

* core[patch]: Release 0.0.12 (#14415)

* langchain[patch]: Release 0.0.348 (#14417)

* experimental[patch]: Release 0.0.45 (#14418)

* docs: notebook linting (#14366)

Many Jupyter notebooks didn't pass linting. A list of these files is
presented in the [tool.ruff.lint.per-file-ignores] section of the
pyproject.toml. Addressed these bugs:
- fixed bugs; added missing imports; updated pyproject.toml
Only `document_loaders/tensorflow_datasets.ipynb` and
`cookbook/gymnasium_agent_simulation.ipynb` are not completely fixed.
I'm not sure about the imports.

---------

Co-authored-by: Erick Friis <[email protected]>

* docs[patch]: `promptlayer` pages update (#14416)

Updated the provider page by adding LLM and ChatLLM references; removed
content that duplicates text from the referenced LLM page.
Updated the callback page.

* fix imports from core (#14430)

* langchain[patch]: Fix scheduled testing (#14428)

- integration tests in pyproject
- integration test fixes

* langchain[patch]: fix scheduled testing ci variables (#14459)

* revoke serialization (#14456)

* langchain[patch]: fix scheduled testing ci dep install (#14460)

* langchain[patch]: xfail unstable vertex test (#14462)

* Use deepcopy in RunLogPatch (#14244)

This PR adds deepcopy usage in RunLogPatch.

I included a unit-test that shows an issue that was caused in LangServe
in the RemoteClient.

```python
import jsonpatch

s1 = {}
s2 = {'value': []}
s3 = {'value': ['a']}

ops0 = list(jsonpatch.JsonPatch.from_diff(None, s1))
ops1 = list(jsonpatch.JsonPatch.from_diff(s1, s2))
ops2 = list(jsonpatch.JsonPatch.from_diff(s2, s3))
ops = ops0 + ops1 + ops2

jsonpatch.apply_patch(None, ops)
{'value': ['a']}

jsonpatch.apply_patch(None, ops)
{'value': ['a', 'a']}

jsonpatch.apply_patch(None, ops)
{'value': ['a', 'a', 'a']}
```

* docs `Dependents` updated statistics (#14461)

Updated statistics for the dependents (packages dependent on the `langchain`
package). Only packages with 100+ stars.

* core[patch], langchain[patch], experimental[patch]: import CI (#14414)

* Update mongodb_atlas docs for GA (#14425)

Updated the MongoDB Atlas Vector Search docs to indicate the service is
Generally Available, updated the example to use the new index
definition, and added an example that uses metadata pre-filtering for
semantic search

---------

Co-authored-by: Harrison Chase <[email protected]>

* infra: add langchain-community release workflow (#14469)

* docs `networkx`update (#14426)

Added setting up instruction, package description and link

* experimental[patch]: SmartLLMChain Output Key Customization (#14466)

**Description**
The `SmartLLMChain` output key was fixed to "resolution".
Unfortunately, this prevents using multiple `SmartLLMChain`s
in a `SequentialChain` because of colliding output keys. This change
simply gives the option to customize the output key to allow for
sequential chaining. The default behavior is the same as the current
behavior.

Now, it's possible to do the following:
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain
from langchain.chains import SequentialChain

joke_prompt = PromptTemplate(
    input_variables=["content"],
    template="Tell me a joke about {content}.",
)
review_prompt = PromptTemplate(
    input_variables=["scale", "joke"],
    template="Rate the following joke from 1 to {scale}: {joke}"
)

llm = ChatOpenAI(temperature=0.9, model_name="gpt-4-32k")
joke_chain = SmartLLMChain(llm=llm, prompt=joke_prompt, output_key="joke")
review_chain = SmartLLMChain(llm=llm, prompt=review_prompt, output_key="review")

chain = SequentialChain(
    chains=[joke_chain, review_chain],
    input_variables=["content", "scale"],
    output_variables=["review"],
    verbose=True
)
response = chain.run({"content": "chickens", "scale": "10"})
print(response)
```

---------

Co-authored-by: Erick Friis <[email protected]>

* Update README and vectorstore path for multi-modal template (#14473)

* docs[patch]: link and description cleanup (#14471)

Fixed inconsistencies; added links and descriptions

---------

Co-authored-by: Erick Friis <[email protected]>

* langchain[patch], docs[patch]: use byte store in multivectorretriever (#14474)

* manual mapping (#14422)

* docs[patch]: add missing imports for local_llms (#14453)

Keeping it consistent with everywhere else in the docs and adding the
missing imports so the code example can be copy-pasted and run.

---------

Co-authored-by: Erick Friis <[email protected]>

* docs[patch]: `microsoft` platform page update (#14476)

Added `presidio` and `OneNote` references to `microsoft.mdx`; added link
and description to the `presidio` notebook

---------

Co-authored-by: Erick Friis <[email protected]>

* docs[patch]: `google` platform page update (#14475)

Added missed tools

---------

Co-authored-by: Erick Friis <[email protected]>

* RunnableWithMessageHistory: Fix input schema (#14516)

Input schema should not have history key

* community[major], core[patch], langchain[patch], experimental[patch]: Create langchain-community (#14463)

Moved the following modules to new package langchain-community in a backwards compatible fashion:

```
mv langchain/langchain/adapters community/langchain_community
mv langchain/langchain/callbacks community/langchain_community/callbacks
mv langchain/langchain/chat_loaders community/langchain_community
mv langchain/langchain/chat_models community/langchain_community
mv langchain/langchain/document_loaders community/langchain_community
mv langchain/langchain/docstore community/langchain_community
mv langchain/langchain/document_transformers community/langchain_community
mv langchain/langchain/embeddings community/langchain_community
mv langchain/langchain/graphs community/langchain_community
mv langchain/langchain/llms community/langchain_community
mv langchain/langchain/memory/chat_message_histories community/langchain_community
mv langchain/langchain/retrievers community/langchain_community
mv langchain/langchain/storage community/langchain_community
mv langchain/langchain/tools community/langchain_community
mv langchain/langchain/utilities community/langchain_community
mv langchain/langchain/vectorstores community/langchain_community
mv langchain/langchain/agents/agent_toolkits community/langchain_community
mv langchain/langchain/cache.py community/langchain_community
```

Moved the following to core
```
mv langchain/langchain/utils/json_schema.py core/langchain_core/utils
mv langchain/langchain/utils/html.py core/langchain_core/utils
mv langchain/langchain/utils/strings.py core/langchain_core/utils
cat langchain/langchain/utils/env.py >> core/langchain_core/utils/env.py
rm langchain/langchain/utils/env.py
```

See .scripts/community_split/script_integrations.sh for all changes

* Move runnable context to beta (#14507)

<!-- Thank you for contributing to LangChain!

Replace this entire comment with:
  - **Description:** a description of the change, 
  - **Issue:** the issue # it fixes (if applicable),
  - **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
- **Twitter handle:** we announce bigger features on Twitter. If your PR
gets announced, and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.

See contribution guidelines for more information on how to write/run
tests, lint, etc:

https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in `docs/extras`
directory.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.
 -->

* community[patch]: Fix agenttoolkits imports (#14559)

* core[patch]: Release 0.0.13 (#14558)

* infra: Turn release branch check back on (#14563)

* infra: import CI fix (#14562)

TIL `**` globstar doesn't work in make

Makefile changes fix that.

`__getattr__` changes allow import of all files, but raise error when
accessing anything from the module.

file deletions were corresponding libs change from #14559

* community[patch]: Release 0.0.1 (#14565)

* infra: import CI speed (#14566)

Was taking 10 mins. Now a few seconds.

* langchain[patch]: Release 0.0.349 (#14570)

* experimental[patch]: Release 0.0.46 (#14572)

* infra: import checking bugfix (#14569)

* docs[patch], templates[patch]: Import from core (#14575)

Update imports to use core for the low-hanging fruit changes. Ran
following

```bash
git grep -l 'langchain.schema.runnable' {docs,templates,cookbook}  | xargs sed -i '' 's/langchain\.schema\.runnable/langchain_core.runnables/g'
git grep -l 'langchain.schema.output_parser' {docs,templates,cookbook} | xargs sed -i '' 's/langchain\.schema\.output_parser/langchain_core.output_parsers/g'
git grep -l 'langchain.schema.messages' {docs,templates,cookbook} | xargs sed -i '' 's/langchain\.schema\.messages/langchain_core.messages/g'
git grep -l 'langchain.schema.chat_history' {docs,templates,cookbook} | xargs sed -i '' 's/langchain\.schema\.chat_history/langchain_core.chat_history/g'
git grep -l 'langchain.schema.prompt_template' {docs,templates,cookbook} | xargs sed -i '' 's/langchain\.schema\.prompt_template/langchain_core.prompts/g'
git grep -l 'from langchain.pydantic_v1' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.pydantic_v1/from langchain_core.pydantic_v1/g'
git grep -l 'from langchain.tools.base' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.tools\.base/from langchain_core.tools/g'
git grep -l 'from langchain.chat_models.base' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.chat_models.base/from langchain_core.language_models.chat_models/g'
git grep -l 'from langchain.llms.base' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.llms\.base\ /from langchain_core.language_models.llms\ /g'
git grep -l 'from langchain.embeddings.base' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.embeddings\.base/from langchain_core.embeddings/g'
git grep -l 'from langchain.vectorstores.base' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.vectorstores\.base/from langchain_core.vectorstores/g'
git grep -l 'from langchain.agents.tools' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.agents\.tools/from langchain_core.tools/g'
git grep -l 'from langchain.schema.output' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.output\ /from langchain_core.outputs\ /g'
git grep -l 'from langchain.schema.embeddings' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.embeddings/from langchain_core.embeddings/g'
git grep -l 'from langchain.schema.document' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.document/from langchain_core.documents/g'
git grep -l 'from langchain.schema.agent' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.agent/from langchain_core.agents/g'
git grep -l 'from langchain.schema.prompt ' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.prompt\ /from langchain_core.prompt_values /g'
git grep -l 'from langchain.schema.language_model' {docs,templates,cookbook} | xargs sed -i '' 's/from langchain\.schema\.language_model/from langchain_core.language_models/g'


```

* docs[patch]: Fix embeddings example for Databricks (#14576)


Fix `from langchain.llms import DatabricksEmbeddings` to `from
langchain.embeddings import DatabricksEmbeddings`.

Signed-off-by: harupy <[email protected]>

* docs[patch]: update installation with core and community (#14577)

* Update RunnableWithMessageHistory (#14351)

This PR updates RunnableWithMessage history to support user specific
configuration for the factory.

It extends support to passing multiple named arguments into the factory
if the factory takes more than a single argument.
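
The dispatch described above can be sketched in plain Python (a hedged illustration, not the actual RunnableWithMessageHistory internals; all names here are illustrative): inspect the factory's signature and pass either the single session id or several named configuration fields.

```python
import inspect

# Hedged sketch: a one-parameter factory receives the session_id; a factory
# with several parameters receives matching named fields from the config.
def call_history_factory(factory, config: dict):
    params = list(inspect.signature(factory).parameters)
    if len(params) == 1:
        return factory(config["session_id"])
    return factory(**{name: config[name] for name in params})

single = call_history_factory(lambda session_id: f"history:{session_id}",
                              {"session_id": "s1"})
multi = call_history_factory(
    lambda user_id, conversation_id: f"history:{user_id}:{conversation_id}",
    {"user_id": "u1", "conversation_id": "c42", "session_id": "unused"},
)
print(single, multi)
```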

* Add Gmail Agent Example (#14567)

Co-authored-by: Harrison Chase <[email protected]>

* allow other namespaces (#14606)

* core[minor]: Release 0.1.0 (#14607)

* community[patch]: Release 0.0.2 (#14610)

* langchain[patch]: Release 0.0.350 (#14612)

* Add image (#14611)

* experimental[patch]: Release 0.0.47 (#14617)

* docs: core and community readme (#14623)

* docs: fix links in readme (#14624)

* Minor update to ensemble retriever to handle a mix of Documents or str (#14552)
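
The mixed-input handling can be sketched as follows (a hedged illustration with a stand-in `Document` class, not the actual EnsembleRetriever code): deduplication keys on page content whether an item is a Document or a plain string.

```python
from dataclasses import dataclass

# Stand-in Document class for illustration.
@dataclass(frozen=True)
class Document:
    page_content: str

def content_key(item) -> str:
    # Key on page content for Documents, on the string itself otherwise.
    return item.page_content if isinstance(item, Document) else item

mixed = [Document("alpha"), "beta", Document("alpha")]
unique = list({content_key(item): item for item in mixed}.values())
print([content_key(item) for item in unique])
```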

* Update Docugami Cookbook (#14626)

**Description:** Update the information in the Docugami cookbook. Fix
broken links and add information on our kg-rag template.

Co-authored-by: Kenzie Mihardja <[email protected]>

* docs: update multi_modal_RAG_chroma.ipynb (#14602)

seperate -> separate


* templates[patch]: fix pydantic imports (#14632)

* fix a bug in RedisNum filter against value 0 (#14587)

- **Description:** There is a bug in the RedisNum filter: filtering
against the value 0 is parsed as "*". This fixes it.
  - **Issue:** NA
  - **Dependencies:** NA
  - **Tag maintainer:** NA
  - **Twitter handle:** NA
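
The bug class can be reproduced in a few lines (an illustrative sketch, not the actual RedisNum code): a truthiness check treats the numeric value 0 like "no value" and falls back to the wildcard.

```python
# Buggy version: `if value` is False for 0, so 0 becomes the wildcard "*".
def render_filter_buggy(value):
    return str(value) if value else "*"

# Fixed version: compare against None explicitly so 0 renders as a real value.
def render_filter_fixed(value):
    return "*" if value is None else str(value)

print(render_filter_buggy(0), render_filter_fixed(0))
```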

* create mypy cache dir if it doesn't exist (#14579)

### Description

When running `make lint` multiple times, I see the error `mkdir:
.mypy_cache: File exists`. Using `mkdir -p` solves this problem.
<img width="1512" alt="Screenshot 2023-12-12 at 11 22 01 AM"
src="https://github.com/langchain-ai/langchain/assets/10000925/1429383d-3283-4e22-8882-5693bc50b502">

* fix: to rag-semi-structured template (#14568)

**Description:** 

Fixes to rag-semi-structured template.

- Added required libraries
- pdfminer was causing issues when installing with pip. pdfminer.six
works best
- Changed the pdf name for demo from llama2 to llava



* docs `ollama` pages (#14561)

added provider page; fixed broken links.

* infra: rm community split scripts (#14633)

* docs: update langchain diagram (#14619)

* Update cohere provider docs (#14528)

Preview, since GitHub won't preview .mdx:

<img width="1401" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/9e8ba3d9-24ff-4584-9da3-2c9b60e7e624">

* Added notebook tutorial on using Yellowbrick as a vector store with LangChain (#14509)

- **Description:** a notebook documenting Yellowbrick as a vector store
usage

---------

Co-authored-by: markcusack <[email protected]>
Co-authored-by: markcusack <[email protected]>

* docs[patch] Fix some typos in merger_retriever.ipynb (#14502)

This patch fixes some typos.


Signed-off-by: Masanari Iida <[email protected]>

* feat: Yaml output parser (#14496)

## Description
New YAML output parser as a drop-in replacement for the Pydantic output
parser. YAML is a much more token-efficient format than JSON, proving to
be **~35% faster while using ~35% fewer completion tokens**.

☑️ Formatted
☑️ Linted
☑️ Tested (analogous to the existing `test_pydantic_parser.py`)

The YAML parser excels in situations where a list of objects is
required, where the root object needs no key:
```python
class Products(BaseModel):
   __root__: list[Product]
```

I ran the prompt `Generate 10 healthy, organic products` 10 times on one
chain using the `PydanticOutputParser`, the other one using
the`YamlOutputParser` with `Products` (see below) being the targeted
model to be created.

LLMs used were Fireworks' `llama-v2-34b-code-instruct` and OpenAI
`gpt-3.5-turbo`. All runs succeeded without validation errors.

```python
class Nutrition(BaseModel):
    sugar: int = Field(description="Sugar in grams")
    fat: float = Field(description="% of daily fat intake")

class Product(BaseModel):
    name: str = Field(description="Product name")
    stats: Nutrition

class Products(BaseModel):
    """A list of products"""

    products: list[Product] # Used `__root__` for the yaml chain
```
Stats after 10 runs were as follows:
### JSON
ø time: 7.75s
ø tokens: 380.8

### YAML
ø time: 5.12s
ø tokens: 242.2


Looking forward to feedback, tips and contributions!
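
A rough stdlib-only illustration of where the savings come from (character counts here, not tokenizer counts; the product data is made up): YAML drops the braces, quotes, and commas that JSON needs, so the same structure serializes shorter.

```python
import json

# One illustrative product record, serialized both ways.
product = {"name": "Oat milk", "stats": {"sugar": 4, "fat": 2.5}}
as_json = json.dumps([product], indent=2)
# Hand-written YAML equivalent: no braces, quotes, or commas.
as_yaml = "- name: Oat milk\n  stats:\n    sugar: 4\n    fat: 2.5\n"
print(len(as_json), len(as_yaml))
```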

* [docs]: add missing tiktoken dependency (#14497)

Description: I was following the docs and got an error about missing
tiktoken dependency. Adding it to the comment where the langchain and
docarray libs are.

* fix(embeddings): huggingface hub embeddings and TEI (#14489)

**Description:** This PR fixes `HuggingFaceHubEmbeddings` by making the
API token optional (as in the client beneath). Most models don't require
one. I also updated the notebook for TEI (text-embeddings-inference)
accordingly as requested here #14288. In addition, I fixed a mistake in
the POST call parameters.

**Tag maintainers:** @baskaryan
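
The optional-token pattern can be sketched as follows (hedged, with illustrative names, not the actual `HuggingFaceHubEmbeddings` code): resolve the token from an explicit argument or the environment, and let it be `None`, matching the client beneath.

```python
import os

# Hedged sketch: most models don't require a token, so None is a valid result.
def resolve_hf_token(explicit=None):
    if explicit is not None:
        return explicit
    return os.environ.get("HUGGINGFACEHUB_API_TOKEN")

os.environ.pop("HUGGINGFACEHUB_API_TOKEN", None)  # ensure a clean environment
print(resolve_hf_token(), resolve_hf_token("hf_example"))
```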

* Fix token_usage None issue in ChatOpenAI with local Chatglm2-6B (#14493)

When using local Chatglm2-6B by changing OPENAI_BASE_URL to localhost,
the token_usage in ChatOpenAI becomes None. This leads to an
AttributeError when trying to access token_usage.items().

This commit adds a check to ensure token_usage is not None before
accessing its items. This change prevents the AttributeError and allows
ChatOpenAI to work seamlessly with a local Chatglm2-6B model, aligning
with the way it operates with the OpenAI API.


Co-authored-by: Harrison Chase <[email protected]>
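
The guard described above can be sketched like this (illustrative names, not the exact ChatOpenAI internals): skip usage accounting when the backend reports `token_usage` as `None`, as a local Chatglm2-6B server behind `OPENAI_BASE_URL` does.

```python
# Hedged sketch of the None guard on token_usage.
def merge_token_usage(overall: dict, token_usage):
    if token_usage is None:  # previously this fell through to .items() and raised
        return overall
    for key, value in token_usage.items():
        overall[key] = overall.get(key, 0) + value
    return overall

totals = merge_token_usage({}, {"prompt_tokens": 12, "completion_tokens": 30})
totals = merge_token_usage(totals, None)  # no AttributeError for local backends
print(totals)
```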

* DOC: model update in 'Using OpenAI Functions' docs (#14486)

- **Description:**
I just updated the OpenAI functions docs to use the latest model (e.g.
gpt-3.5-turbo-1106):
https://python.langchain.com/docs/modules/chains/how_to/openai_functions

The reason is as follows:

After reviewing the OpenAI Function Calling official guide at
https://platform.openai.com/docs/guides/function-calling, the following
information was noted:

> "The latest models (gpt-3.5-turbo-1106 and gpt-4-1106-preview) have
been trained to both detect when a function should be called (depending
on the input) and to respond with JSON that adheres to the function
signature more closely than previous models. With this capability also
comes potential risks. We strongly recommend building in user
confirmation flows before taking actions that impact the world on behalf
of users (sending an email, posting something online, making a purchase,
etc)."

CC: @efriis

* docs: Add Databricks Vector Search example notebook (#14158)

This PR adds an example notebook for the Databricks Vector Search vector
store. It also adds an introduction to the Databricks Vector Search
product on the Databricks's provider page.

---------

Co-authored-by: Bagatur <[email protected]>

* Fixed `DeprecationWarning` for `PromptTemplate.from_file` module-level calls (#14468)

Resolves https://github.com/langchain-ai/langchain/issues/14467

* Add Gemini Notebook (#14661)

* cli[patch]: integration template (#14571)

* Fix RRF and lucene escape characters for neo4j vector store (#14646)

* Remove Lucene special characters (fixes
https://github.com/langchain-ai/langchain/issues/14232)
* Fixes RRF normalization for hybrid search
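
The special-character handling can be sketched as follows (a hedged illustration of stripping Lucene metacharacters from user input before building a fulltext query, not the actual Neo4j vector store code):

```python
import re

# Lucene query-syntax metacharacters to strip from raw user input.
LUCENE_SPECIAL = '+-&|!(){}[]^"~*?:\\/'

def remove_lucene_chars(text: str) -> str:
    # Replace each metacharacter with a space, then collapse whitespace.
    cleaned = re.sub(f"[{re.escape(LUCENE_SPECIAL)}]", " ", text)
    return " ".join(cleaned.split())

print(remove_lucene_chars('hello (world) "foo:bar"~2'))
```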

* [Nit] Add newline in notebook (#14665)

For bullet list formatting

* infra: skip extended testing for partner packages (#14630)

Tested by merging into #14627

* cli[patch]: rc (#14667)

* Update Vertex AI to include Gemini (#14670)

h/t to @lkuligin 
-  **Description:** added new models on VertexAI
  - **Twitter handle:** @lkuligin

---------

Co-authored-by: Leonid Kuligin <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>

* cli[patch]: unicode issue (#14672)

On some operating systems, compiling the template results in unicode
decode errors.

* community[patch]: Release 0.0.3 (#14673)

* [Partner] Add langchain-google-genai package (gemini) (#14621)

Add a new ChatGoogleGenerativeAI class in a `langchain-google-genai`
package.
Still todo: add a deprecation warning in PALM

---------

Co-authored-by: Erick Friis <[email protected]>
Co-authored-by: Leonid Kuligin <[email protected]>
Co-authored-by: Bagatur <[email protected]>

* Fix tool_calls message merge (#14613)


* google-genai[patch]: Release 0.0.2 (#14677)

* Wfh/google docs update (#14676)

- Add gemini references
- Fix the notebook (ultra isn't generally available; also gemini will
randomly filter out responses, so added a fallback)

---------

Co-authored-by: Leonid Kuligin <[email protected]>

* docs: build partner api refs (#14675)

* docs: fix api ref link (#14679)

Don't point to stable, let api docs choose default version

* docs: per-package version in api docs (#14683)

* core[patch]: Fix runnable with message history (#14629)

Fix bug shown in #14458. Namely, that saving inputs to history fails
when the input to base runnable is a list of messages

* templates[patch]: Add cohere librarian template (#14601)

Adding the example I build for the Cohere hackathon.

It can:

use a vector database to recommend books

<img width="840" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/96543a18-217b-4445-ab4b-950c7cced915">

Use a prompt template to provide information about the library

<img width="834" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/996c8e0f-cab0-4213-bcc9-9baf84f1494b">

Use Cohere RAG to provide grounded results

<img width="822" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/7bb4a883-5316-41a9-9d2e-19fd49a43dcb">

---------

Co-authored-by: Erick Friis <[email protected]>

* docs[patch]: fix bullet points (#14684)

- docs fixes
- escape
- bullets

* community[patch]: Fixed issue with importing Row from sqlalchemy (#14488)

- **Description:** Fixed import of Row in cache.py, 
- **Issue:** #13464
https://github.com/langchain-ai/langchain/issues/13464,
  - **Dependencies:** None,
  - **Twitter handle:** @frankybridman

Co-authored-by: Harrison Chase <[email protected]>

* community[patch]: Correct type annotation for azure_ad_token_provider Closed: #14402 (#14432)

Description
Fix https://github.com/langchain-ai/langchain/issues/14402, Similar
changes: https://github.com/langchain-ai/langchain/pull/14166

Twitter handle
[lin_bob57617](https://twitter.com/lin_bob57617)

* community[patch]: fix dashvector endpoint params error (#14484)


Co-authored-by: fangkeke <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>

* docs: platform pages update (#14637)

Updated examples and platform pages.
- added missed tools
- added links and descriptions

* docs: api ref nav Python Docs -> Docs (#14686)

* Template for multi-modal w/ multi-vector (#14618)

Results - 

![image](https://github.com/langchain-ai/langchain/assets/122662504/16bac14d-74d7-47b1-aed0-72ae25a81f39)

* Gemini multi-modal RAG template (#14678)

![Screenshot 2023-12-13 at 12 53 39
PM](https://github.com/langchain-ai/langchain/assets/122662504/a6bc3b0b-f177-4367-b9c8-b8862c847026)

* [Partner] Gemini Embeddings (#14690)

Add support for Gemini embeddings in the langchain-google-genai package

* [Integration] NVIDIA AI Playground (#14648)

Description: Added initial NVIDIA AI Playground support for a selection of models (Llama models, Mistral, etc.)

Dependencies: These models do depend on the AI Playground services in NVIDIA NGC. API keys with a significant amount of trial compute are available (10K queries as of the time of writing).

H/t to @VKudlay

* [Workflows] Add nvidia-aiplay to _release.yml (#14722)

As the title says.
In the future will want to have a script to automate this

* Add dense proposals (#14719)

Indexing strategy based on decomposing candidate propositions while
indexing.

* [Hub|tracing] Tag hub prompts (#14720)

If you're using the hub, you'll likely be interested in tracking the
commit/object when tracing. This PR adds it to the config

* Update `google_generative_ai.ipynb` (#14704)

* docs[patch]: fix databricks metadata (#14727)

* docs: updated branding for Google AI (#14728)

  - **Description:** a small fix in branding

* infra: add integration test workflow (#14688)

* infra: Pre-release integration tests for partner pkgs (#14687)

* google-genai[patch]: add google-genai integration deps and extras (#14731)

* community[patch]: fix pgvector sqlalchemy (#14726)

Fixes #14699

* infra: add action checkout to pre-release-checks (#14732)

* Revert "[Hub|tracing] Tag hub prompts" (#14735)

Reverts langchain-ai/langchain#14720

* core[patch]: Release 0.1.1 (#14738)

* infra: docs build install community editable (#14739)

* infra: fix pre-release integration test and add unit test (#14742)

* docs: Remove trailing "`" in pip install command (#14730)

hi! just a simple typo fix in the local LLM python docs

- **Description:** removing a trailing "\`" character in a `!pip install
...` command
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Tag maintainer:** n/a
  - **Twitter handle:** n/a

* google-genai[patch], community[patch]: Added support for new Google GenerativeAI models (#14530)

  - **Description:** added support for new Google GenerativeAI models
  - **Twitter handle:** lkuligin

---------

Co-authored-by: Erick Friis <[email protected]>

* [Tracing] String Stacktrace (#14131)

Add full stacktrace

* [Evals] End project (#14324)

Also does some cleanup.

Now that we support updating/ending projects, do this automatically.
Then you can edit the name of the project in the app.

* Fix OAI Tool Message (#14746)

See format here:
https://platform.openai.com/docs/guides/function-calling/parallel-function-calling


It expects a "name" argument, which we aren't providing by default.


![image](https://github.com/langchain-ai/langchain/assets/13333726/7cd82978-337c-40a1-b099-3bb25cd57eb4)


Alternative is to add the 'name' field directly to the message if people
prefer.
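
The expected shape can be sketched as a plain dict (a hedged illustration of the tool-result message from the OpenAI parallel function calling guide; the tool name and call id are hypothetical): `"name"` is the field that was missing by default.

```python
import json

# Hedged sketch of a tool-result message; values here are illustrative.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # echoed from the assistant's tool call
    "name": "get_current_weather",  # the field LangChain omitted by default
    "content": json.dumps({"temperature": 22, "unit": "celsius"}),
}
print(sorted(tool_message))
```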

* Update propositional-retrieval template (#14766)

More descriptive name. Add parser in ingest. Update image link

* [Documentation] Updates to NVIDIA Playground/Foundation Model naming.… (#14770)

…  (#14723)

- **Description:** Minor updates per marketing requests. Namely, name
decisions (AI Foundation Models / AI Playground)
  - **Tag maintainer:** @hinthornw 

Do want to pass around the PR for a bit and ask a few more marketing
questions before merge, but just want to make sure I'm not working in a
vacuum. No major changes to code functionality intended; the PR should
be for documentation and only minor tweaks.

Note: the QA model is a bit borked across staging/prod right now. The
relevant teams have been informed and are looking into it, and I've
placeholdered the response in the notebook with that of a working version.

Co-authored-by: Vadim Kudlay <[email protected]>

* community[minor]: Add SurrealDB vectorstore (#13331)

**Description:** Vectorstore implementation around
[SurrealDB](https://www.surrealdb.com)

---------

Co-authored-by: Bagatur <[email protected]>

* langchain[patch]: remove unused imports (#14680)

Co-authored-by: Bagatur <[email protected]>

* docs: `Steam` update (#14778)

Updated the page title. It was inconsistent.
Updated page with links; description and setting details.

* docs: `cloudflare` update (#14779)

Added provider page.
Added links, descriptions

* Add image support for Ollama (#14713)

Support [LLaVA](https://ollama.ai/library/llava):
* Upgrade Ollama
* `ollama pull llava`

Ensure compatibility with [image prompt
template](https://github.com/langchain-ai/langchain/pull/14263)

---------

Co-authored-by: jacoblee93 <[email protected]>

* docs: `google drive` update (#14781)

The [Google Drive
toolkit](https://python.langchain.com/docs/integrations/toolkits/google_drive)
page is a duplicate of the [Google Drive
tool](https://python.langchain.com/docs/integrations/tools/google_drive)
page.
- Removed the `Google Drive toolkit` page (it shouldn't be a toolkit but
tool)
- Removed the correspondent reference in the Google platform page
- Redirected the removed page to the tool page.

* community[patch]: Update YandexGPT API (#14773)

Update the LLM and Chat models to use the new API version

---------

Co-authored-by: Dmitry Tyumentsev <[email protected]>

* community[patch]: Implement similarity_score_threshold for MongoDB Vector Store (#14740)

Adds the option for `similarity_score_threshold` when using
`MongoDBAtlasVectorSearch` as a vector store retriever.

Example use:

```
vector_search = MongoDBAtlasVectorSearch.from_documents(...)

qa_retriever = vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={
        "score_threshold": 0.5,
    }
)

qa = RetrievalQA.from_chain_type(
	llm=OpenAI(), 
	chain_type="stuff", 
	retriever=qa_retriever,
)

docs = qa({"query": "..."})
```

I've tested this feature locally, using a MongoDB Atlas Cluster with a
vector search index.

* docs[patch]: fix zoom (#14786)

not sure why quarto is removing divs

* Permit updates in indexing (#14482)

* docs: developer docs (#14776)

Builds out a developer documentation section in the docs

- Links it from contributing.md
- Adds an initial guide on how to contribute an integration

---------

Co-authored-by: Bagatur <[email protected]>

* infra: cut down on integration steps (#14785)


---------

Co-authored-by: Bagatur <[email protected]>

* community[patch]: support for Sybase SQL anywhere added. (#14821)

- **Description:** support for Sybase SQL anywhere added in
sql_database.py file at path
langchain\libs\community\langchain_community\utilities
- **Issue:** It will resolve default schema setting for Sybase SQL
anywhere
  - **Dependencies:** No,
  - **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17,
  - **Twitter handle:** NA

---------

Co-authored-by: learn360sujeet <[email protected]>
Co-authored-by: Bagatur <[email protected]>

* community[patch]: fix agenerate return value (#14815)

Fixed:
  -  `_agenerate` return value in the YandexGPT Chat Model
  - duplicate line in the documentation

Co-authored-by: Dmitry Tyumentsev <[email protected]>

* docs: ensure consistency in declaring LANGCHAIN_API_KEY... (#14823)

... variable, accompanied by a quote

Co-authored-by: Yacine Bouakkaz <[email protected]>

* docs redundant pages (#14774)

[ScaNN](https://python.langchain.com/docs/integrations/providers/scann)
and
[DynamoDB](https://python.langchain.com/docs/integrations/platforms/aws#aws-dynamodb)
pages in `providers` are redundant because we have those references in
the Google and AWS platform pages. It is confusing.
- I removed unnecessary pages, redirected files to new names;

* docs: Typo in Templates README.md (#14812)

Corrected path reference from package/pirate-speak to
packages/pirate-speak

* community: Add logprobs in gen output (#14826)

Now that it's supported again for OAI chat models.

Shame this wouldn't include it in the `.invoke()` output though (it's
not included in the message itself). Would need to do a follow-up for
that to be the case

* docs: Fix link typo to `/docs/integrations/text_embedding/nvidia_ai_endpoints` (#14827)

This page doesn't exist:
-
https://python.langchain.com/docs/integrations/text_embeddings/nvidia_ai_endpoints

but this one does:
-
https://python.langchain.com/docs/integrations/text_embedding/nvidia_ai_endpoints

* docs: typo in rag use case (#14800)

Description: Fixes minor typo to documentation

* docs: Fix the broken link to Extraction page (#14806)

**Description:** fixing a broken link to the extraction doc page

* community[minor]: New model parameters and dynamic batching for VertexAIEmbeddings (#13999)

- **Description:** VertexAIEmbeddings performance improvements
  - **Twitter handle:** @vladkol

## Improvements

- Dynamic batch size, starting from 250, lowering down to 5. Batch size
varies across regions.
Some regions support larger batches, and it significantly improves
performance.
When running large batches of texts in `us-central1`, performance gain
can be up to 3.5x.
The dynamic batching also makes sure every batch is below the 20K token
limit.
- New model parameter `embeddings_type` that translates to `task_type`
parameter of the API. Newer model versions support [different embeddings
task
types](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings#api_changes_to_models_released_on_or_after_august_2023).
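The dynamic batching described above can be sketched roughly as follows. The constants and the greedy packing strategy are assumptions taken from this description, not the actual `VertexAIEmbeddings` implementation:

```python
# Rough sketch of dynamic batching: pack texts into batches of at most
# `batch_size` items while keeping each batch under a token budget.
# START_BATCH_SIZE, MIN_BATCH_SIZE, and TOKEN_LIMIT mirror the numbers
# in the description above.
from typing import Callable, List

START_BATCH_SIZE = 250  # initial batch size per the description
MIN_BATCH_SIZE = 5      # lower bound per the description
TOKEN_LIMIT = 20_000    # per-batch token budget per the description

def make_batches(
    texts: List[str],
    count_tokens: Callable[[str], int],
    batch_size: int = START_BATCH_SIZE,
) -> List[List[str]]:
    """Greedily pack texts into batches of at most `batch_size` items,
    keeping each batch's combined token count under TOKEN_LIMIT."""
    batch_size = max(batch_size, MIN_BATCH_SIZE)
    batches: List[List[str]] = []
    current: List[str] = []
    current_tokens = 0
    for text in texts:
        tokens = count_tokens(text)
        if current and (
            len(current) >= batch_size
            or current_tokens + tokens > TOKEN_LIMIT
        ):
            batches.append(current)
            current, current_tokens = [], 0
        current.append(text)
        current_tokens += tokens
    if current:
        batches.append(current)
    return batches
```

A caller could start with the large batch size and lower it for regions that reject big requests, which matches the "250 lowering down to 5" behavior described.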

* Update parser (#14831)

Gpt-3.5 sometimes calls with empty string arguments instead of `{}`

I'd assume it's because the typescript representation on their backend
makes it a bit ambiguous.
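The tolerance described above can be sketched as below; `parse_tool_arguments` is a hypothetical helper for illustration, not the actual parser change:

```python
# Minimal sketch: gpt-3.5 sometimes emits "" instead of "{}" for
# function-call arguments, so treat a blank argument string as an empty
# dict rather than letting json.loads fail on it.
import json

def parse_tool_arguments(raw: str) -> dict:
    if raw is None or not raw.strip():
        return {}
    return json.loads(raw)
```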

* [Bugfix] Ensure tool output is a str, for OAI Assistant (#14830)

Tool outputs have to be strings apparently. Ensure they are formatted
correctly before passing as intermediate steps.
 

```
BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Request\nbody -> tool_outputs -> 0 -> output\n  str type expected (type=type_error.str)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
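The fix can be sketched like this; the `json.dumps` fallback is an assumption for illustration, not necessarily the exact logic the PR uses:

```python
# Hedged sketch of the fix described above: the Assistants API rejects
# non-string tool outputs, so coerce each output to str before it is
# submitted as an intermediate step.
import json

def coerce_tool_output(output) -> str:
    if isinstance(output, str):
        return output
    try:
        # JSON keeps structured outputs round-trippable for the model
        return json.dumps(output)
    except TypeError:
        return str(output)
```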

* community[patch]: Update Tongyi default model_name (#14844)

<img width="1305" alt="Screenshot 2023-12-18 at 9 54 01 PM"
src="https://github.com/langchain-ai/langchain/assets/10000925/c943fd81-cd48-46eb-8dff-4680424d9ba9">

The current model is no longer available.

* docs: update NVIDIA integration (#14780)

- **Description:** Modification of descriptions for marketing purposes
and transitioning towards `platforms` directory if possible.
- **Issue:** Some marketing opportunities, lodging PR and awaiting later
discussions.

This PR is intended to be merged when decisions settle/hopefully after
further considerations. Submitting as Draft for now. Nobody @'d yet.

---------

Co-authored-by: Bagatur <[email protected]>

* docs[patch]: gemini keywords (#14856)

* docs[patch]: more keywords (#14858)

* community[patch]: Release 0.0.4 (#14864)

* langchain[patch]: Release 0.0.351 (#14867)

* add methods to deserialize prompts that were old (#14857)

* community: replace deprecated davinci models (#14860)

This is technically a breaking change because it'll switch out default
models from `text-davinci-003` to `gpt-3.5-turbo-instruct`, but OpenAI
is shutting off those endpoints on 1/4 anyways.

Feels less disruptive to switch out the default instead.

* WIP: sql research assistant (#14240)

* docstrings `core` update (#14871)

Added missing docstrings

* Fix token text splitter duplicates (#14848)

- **Description:** 
- Add a break case to `text_splitter.py::split_text_on_tokens()` to
avoid an unwanted duplicate item at the end of the result.
    - Add a test case to enforce the behavior.
  - **Issue:** 
    - #14649 
    - #5897
  - **Dependencies:** n/a,
 
---

**Quick illustration of change:**

```
text = "foo bar baz 123"

tokenizer = Tokenizer(
        chunk_overlap=3,
        tokens_per_chunk=7
)

output = split_text_on_tokens(text=text, tokenizer=tokenizer)
```
output before change: `["foo bar", "bar baz", "baz 123", "123"]`
output after change: `["foo bar", "bar baz", "baz 123"]`
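The fixed loop can be sketched as below. The character-level `Tokenizer` here is a simplified stand-in for illustration; LangChain's real `Tokenizer` wraps an encode/decode pair such as tiktoken's:

```python
# Simplified sketch of the fixed split_text_on_tokens loop, with the
# added break case. Tokens here are single characters purely so the
# example above is reproducible without tiktoken.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tokenizer:
    chunk_overlap: int
    tokens_per_chunk: int
    encode: Callable[[str], List[str]] = list
    decode: Callable[[List[str]], str] = "".join

def split_text_on_tokens(text: str, tokenizer: Tokenizer) -> List[str]:
    splits: List[str] = []
    input_ids = tokenizer.encode(text)
    start_idx = 0
    cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
    chunk_ids = input_ids[start_idx:cur_idx]
    while start_idx < len(input_ids):
        splits.append(tokenizer.decode(chunk_ids))
        if cur_idx == len(input_ids):
            break  # the added break: last chunk emitted, avoid a trailing duplicate
        start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
        cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
        chunk_ids = input_ids[start_idx:cur_idx]
    return splits

tokenizer = Tokenizer(chunk_overlap=3, tokens_per_chunk=7)
print(split_text_on_tokens(text="foo bar baz 123", tokenizer=tokenizer))
# → ['foo bar', 'bar baz', 'baz 123'] instead of ending with a duplicate '123'
```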

* docstrings `langchain` update (#14870)

Added missing docstrings

* docs: fixed tiktoken link error (#14840)

- **Description:** fixed tiktoken link error, 
- **Issue:** no,
- **Dependencies:** no,
- **Tag maintainer:** @baskaryan,
- **Twitter handle:** SignetCode!

* docs: fix typo in contributing re installing integration test deps (#14861)

**Description**

The contributing docs lists a poetry command to install community for
dev work that includes a poetry group called `integration_tests`. This
is a mistake: the poetry group for integration tests is called
`test_integration`, not `integration_tests`. See here:

https://github.com/langchain-ai/langchain/blob/master/libs/community/pyproject.toml#L119

* Update kendra.py to avoid Kendra query ValidationException (#14866)

Fixing issue - https://github.com/langchain-ai/langchain/issues/14494 to
avoid Kendra query ValidationException


---------

Co-authored-by: Harrison Chase <[email protected]>

* Harrison/agent docs custom (#14877)

* Improve prompt injection detection (#14842)

- **Description:** This is addition to [my previous
PR](https://github.com/langchain-ai/langchain/pull/13930) with
improvements to flexibility allowing different models and notebook to
use ONNX runtime for faster speed. Since the last PR, [our
model](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection)
got more than 660k downloads, and with the [public
benchmark](https://huggingface.co/spaces/laiyer/prompt-injection-benchmark)
showed much fewer false-positives than the previous one from deepset.
Additionally, on the ONNX runtime, it can run 3x faster on the
CPU, which might be handy for builders using Langchain.
 **Issue:** N/A
 - **Dependencies:** N/A
 - **Tag maintainer:** N/A 
- **Twitter handle:** `@laiyer_ai`

* OPENAI_PROXY not working (#14833)

- **Description:** OPENAI_PROXY is not working for openai==1.3.9, The
`proxies` argument is deprecated. The `http_client` argument should be
passed instead,
  - **Issue:** OPENAI_PROXY is not working,
  - **Dependencies:** None,
  - **Tag maintainer:** @hwchase17 ,
  - **Twitter handle:** timothy66666

* Docs `tencent` pages update (#14879)

- updated `Tencent` provider page: added a chat model and document
loader references; company description
- updated Chat model and Document loader pages with descriptions, links
- renamed files to consistent formats; redirected file names
Note:
I was getting this linting error on code that **was not changed in my
PR**!

> Error:
docs/docs/guides/safety/hugging_face_prompt_injection.ipynb:1:1: I001
Import block is un-sorted or un-formatted
> make: *** [Makefile:47: lint_package] Error 1

I've fixed this error in the notebook

* added history and support for system_message as param (#14824)

- **Description:** added support for chat_history for Google
GenerativeAI (to actually use the `chat` API) plus since Gemini
currently doesn't have a support for SystemMessage, added support for it
only if a user provides additional `convert_system_message_to_human`
flag during model initialization (in this case, SystemMessage would be
prepended to the first HumanMessage)
  - **Issue:** #14710 
  - **Dependencies:** any dependencies required for this change,
- **Tag maintainer:** for a quicker response, tag the relevant
maintainer (see below),
  - **Twitter handle:** lkuligin

---------

Co-authored-by: William FH <[email protected]>
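The `convert_system_message_to_human` behavior described above can be sketched like this. Messages are simplified to `(role, content)` tuples for illustration; the real code works with LangChain's `SystemMessage`/`HumanMessage` classes:

```python
# Hedged sketch: Gemini has no system role, so fold a leading system
# message into the first human message, as the commit describes.
from typing import List, Tuple

Message = Tuple[str, str]  # (role, content)

def merge_system_into_first_human(messages: List[Message]) -> List[Message]:
    if not messages or messages[0][0] != "system":
        return list(messages)
    system_content = messages[0][1]
    merged: List[Message] = []
    folded = False
    for role, content in messages[1:]:
        if role == "human" and not folded:
            # prepend the system instructions to the first human turn
            merged.append((role, f"{system_content}\n{content}"))
            folded = True
        else:
            merged.append((role, content))
    return merged
```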

* [Partner] Google GenAi new release (#14882)

to support the system message merging

Also fix integration tests that weren't passing

* [Partner] Update google integration test (#14883)

Gemini has decided that pickle rick is unsafe:
https://github.com/langchain-ai/langchain/actions/runs/7256642294/job/19769249444#step:8:189


![image](https://github.com/langchain-ai/langchain/assets/13333726/cfbf4312-53b6-4290-84ee-6ce0742e739e)

* [Partner] NVIDIA TRT Package (#14733)

Simplify #13976 and add as a separate package.

- [ ] Add README
- [X] Add doc notebook
- [X] Add simple LLM integration

---------

Co-authored-by: Jeremy Dyer <[email protected]>

* docs: docstrings `langchain_community` update (#14889)

Added missing docstrings. Fixed inconsistencies in docstrings.

**Note** CC @efriis 
There were PR errors on
`langchain_experimental/prompt_injection_identifier/hugging_face_identifier.py`
But, I didn't touch this file in this PR! Can it be some cache problems?
I fixed this error.

* docs: add reference for XataVectorStore constructor (#14903)

Adds doc reference to the XataVectorStore constructor for use with
existing Xata table contents.

@tsg @philkra

* langchain[patch]: export sagemaker LLMContentHandler (#14906)

Resolves #14904
…
baskaryan added a commit that referenced this issue Mar 30, 2024
)

- **Description:** Per #12165, this PR add to BananaLLM the function
convert_to_secret_str() during environment variable validation.
- **Issue:** #12165
- **Tag maintainer:** @eyurtsev
- **Twitter handle:** @treewatcha75751

---------

Co-authored-by: Bagatur <[email protected]>
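The `SecretStr` pattern these commits apply can be sketched as below. The helper mirrors `convert_to_secret_str` from `langchain_core.utils`; the exact BananaLLM wiring is omitted:

```python
# Hedged sketch: wrap the raw API key in pydantic's SecretStr so that
# printing the object masks it; the raw value is only recoverable via
# an explicit get_secret_value() call.
from typing import Union
from pydantic import SecretStr

def convert_to_secret_str(value: Union[SecretStr, str]) -> SecretStr:
    """Wrap a plain string in SecretStr; pass SecretStr through unchanged."""
    if isinstance(value, SecretStr):
        return value
    return SecretStr(value)

api_key = convert_to_secret_str("banana-key-123")
print(api_key)                      # masked: **********
raw = api_key.get_secret_value()    # explicit unmasking only
```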
gkorland pushed a commit to FalkorDB/langchain that referenced this issue Mar 30, 2024
…gchain-ai#14283)

- **Description:** Per langchain-ai#12165, this PR add to BananaLLM the function
convert_to_secret_str() during environment variable validation.
- **Issue:** langchain-ai#12165
- **Tag maintainer:** @eyurtsev
- **Twitter handle:** @treewatcha75751

---------

Co-authored-by: Bagatur <[email protected]>
efriis added a commit that referenced this issue Apr 12, 2024
**Description:** Masking of the API key for AI21 models
**Issue:** Fixes #12165 for AI21
**Dependencies:** None

Note: This fix came in originally through #12418 but was possibly missed
in the refactor to the AI21 partner package


---------

Co-authored-by: Erick Friis <[email protected]>
hinthornw pushed a commit that referenced this issue Apr 26, 2024
)

- **Description:** Per #12165, this PR add to BananaLLM the function
convert_to_secret_str() during environment variable validation.
- **Issue:** #12165
- **Tag maintainer:** @eyurtsev
- **Twitter handle:** @treewatcha75751

---------

Co-authored-by: Bagatur <[email protected]>
hinthornw pushed a commit that referenced this issue Apr 26, 2024
**Description:** Masking of the API key for AI21 models
**Issue:** Fixes #12165 for AI21
**Dependencies:** None

Note: This fix came in originally through #12418 but was possibly missed
in the refactor to the AI21 partner package


---------

Co-authored-by: Erick Friis <[email protected]>
baskaryan pushed a commit that referenced this issue Apr 29, 2024
**Description:** Add tests to check API keys are masked
**Issue:** Resolves
#12165 for Anthropic
models
**Dependencies:** None
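A masking test of this kind can be sketched as below. `FakeChatModel` is a minimal pydantic stand-in, not the real Anthropic class; the assertions mirror what such tests check:

```python
# Hedged sketch of an API-key masking test: neither str() nor repr() of
# the model should leak the raw key once the field is a SecretStr.
from pydantic import BaseModel, SecretStr

class FakeChatModel(BaseModel):
    anthropic_api_key: SecretStr

def test_api_key_is_masked() -> None:
    model = FakeChatModel(anthropic_api_key=SecretStr("sk-secret"))
    assert "sk-secret" not in str(model)
    assert "sk-secret" not in repr(model)
    # the raw key stays recoverable on purpose, for making API calls
    assert model.anthropic_api_key.get_secret_value() == "sk-secret"

test_api_key_is_masked()
```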
baskaryan pushed a commit that referenced this issue May 1, 2024
**Description:** Add tests to check API keys and Active Directory tokens
are masked
**Issue:** Resolves #12165 for OpenAI and Azure OpenAI models
**Dependencies:** None

Also resolves #12473 which may be closed.

Additional contributors @alex4321 (#12473) and @onesolpark (#12542)
ccurme added a commit that referenced this issue Jul 19, 2024
- **Description**: Mask API key for ChatOpenAI-based chat_models
(openai, azureopenai, anyscale, everlyai).
Made changes to all chat_models that are based on ChatOpenAI, since all
of them assume that openai_api_key is a str rather than SecretStr.
  - **Issue:**: #12165 
  - **Dependencies:**  N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <[email protected]>
ccurme added a commit that referenced this issue Jul 19, 2024
**Description:** 
- Added masking of the API Keys for the modules:
  - `langchain/chat_models/openai.py`
  - `langchain/llms/openai.py`
  - `langchain/llms/google_palm.py`
  - `langchain/chat_models/google_palm.py`
  - `langchain/llms/edenai.py`

- Updated the modules to utilize `SecretStr` from pydantic to securely
manage API key.
- Added unit/integration tests
- `langchain/chat_models/azure_openai.py` used the `openai_api_key` that
is derived from the `ChatOpenAI` class; it assumed `openai_api_key`
is a str, so we changed it to expect `SecretStr`
instead.

**Issue:** #12165 ,
**Dependencies:** none,
**Tag maintainer:** @eyurtsev

---------

Co-authored-by: HassanA01 <[email protected]>
Co-authored-by: Aneeq Hassan <[email protected]>
Co-authored-by: kristinspenc <[email protected]>
Co-authored-by: faisalt14 <[email protected]>
Co-authored-by: Harshil-Patel28 <[email protected]>
Co-authored-by: kristinspenc <[email protected]>
Co-authored-by: faisalt14 <[email protected]>
Co-authored-by: Chester Curme <[email protected]>
olgamurraft pushed a commit to olgamurraft/langchain that referenced this issue Aug 16, 2024
- **Description**: Mask API key for ChatOpenAI-based chat_models
(openai, azureopenai, anyscale, everlyai).
Made changes to all chat_models that are based on ChatOpenAI, since all
of them assume that openai_api_key is a str rather than SecretStr.
  - **Issue:**: langchain-ai#12165 
  - **Dependencies:**  N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <[email protected]>
olgamurraft pushed a commit to olgamurraft/langchain that referenced this issue Aug 16, 2024
**Description:** 
- Added masking of the API Keys for the modules:
  - `langchain/chat_models/openai.py`
  - `langchain/llms/openai.py`
  - `langchain/llms/google_palm.py`
  - `langchain/chat_models/google_palm.py`
  - `langchain/llms/edenai.py`

- Updated the modules to utilize `SecretStr` from pydantic to securely
manage API key.
- Added unit/integration tests
- `langchain/chat_models/azure_openai.py` used the `openai_api_key` that
is derived from the `ChatOpenAI` class; it assumed `openai_api_key`
is a str, so we changed it to expect `SecretStr`
instead.

**Issue:** langchain-ai#12165 ,
**Dependencies:** none,
**Tag maintainer:** @eyurtsev

---------

Co-authored-by: HassanA01 <[email protected]>
Co-authored-by: Aneeq Hassan <[email protected]>
Co-authored-by: kristinspenc <[email protected]>
Co-authored-by: faisalt14 <[email protected]>
Co-authored-by: Harshil-Patel28 <[email protected]>
Co-authored-by: kristinspenc <[email protected]>
Co-authored-by: faisalt14 <[email protected]>
Co-authored-by: Chester Curme <[email protected]>