Fix: Change _call_eas to return bytes instead of str #4

Closed
wants to merge 970 commits into from

Conversation

seeker-jie

This pull request addresses issue langchain-ai#20453 by changing the return type of the `_call_eas` method in the `PaiEasChatEndpoint` class from `str` to `bytes`.
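
For context, a minimal sketch of the kind of change described, assuming `_call_eas` wraps a `requests.post` call to the EAS service (the helper below is illustrative, not the actual patch):

```python
import requests


# Illustrative sketch only; the real PaiEasChatEndpoint._call_eas in
# langchain_community may build its request differently.
def _call_eas(url: str, headers: dict, body: dict) -> bytes:
    """POST the request body to the EAS endpoint and return the raw bytes.

    Returning ``response.content`` (bytes) instead of ``response.text`` (str)
    leaves decoding to the caller, matching the fix described above.
    """
    response = requests.post(url, headers=headers, json=body)
    response.raise_for_status()
    return response.content
```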

shaunakgodbole and others added 30 commits June 19, 2024 21:14

Thank you for contributing to LangChain!

**Description**
The current code snippet for `Fireworks` had incorrect parameters. This
PR fixes those parameters.

---------

Co-authored-by: Chester Curme <[email protected]>
Co-authored-by: Bagatur <[email protected]>

- **Description:** Add optional max_messages to MessagePlaceholder
- **Issue:** [16096](langchain-ai#16096)
- **Dependencies:** None
- **Twitter handle:** @DaveDeCaprio

Sometimes it's better to limit the history in the prompt itself rather
than the memory. This is needed if you want different prompts in the
chain to have different history lengths.
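
A hedged usage sketch; the keyword name `max_messages` follows the commit title and may differ from what was eventually merged:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# `max_messages` is the name used in this commit's title; treat it as illustrative.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history", max_messages=10),  # keep only the last 10 messages
        ("human", "{question}"),
    ]
)
```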

---------

Co-authored-by: Harrison Chase <[email protected]>
Fix bug: the `ZhipuAIEmbeddings` class is not working.

Co-authored-by: xu yandong <[email protected]>
Added missing docstrings. Formatted docstrings to the consistent format
(used in the API Reference).

Co-authored-by: ccurme <[email protected]>
…with pip installs of required langchain_chroma and langchain_community. (langchain-ai#23204)

Description: Update Rag tutorial notebook so it includes an additional
notebook cell with pip installs of required langchain_chroma and
langchain_community.

This fixes the issue where the RAG tutorial gives you a 'missing modules'
error if you run the notebook code as is.
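
The added cell presumably looks something like this (package names come from the description; the flags are illustrative, and `%pip` is an IPython notebook magic rather than plain Python):

```python
# New notebook cell: install the packages the tutorial imports.
%pip install --upgrade --quiet langchain_chroma langchain_community
```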

---------

Co-authored-by: Chester Curme <[email protected]>
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
…angchain-ai#22877)

Currently, calling `with_structured_output()` with an invalid method
argument raises `Unrecognized method argument. Expected one of
'function_calling' or 'json_format'`, but the JSON mode option [is now
referred
to](https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method)
by `'json_mode'`. This fixes that.
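
A hedged example of the corrected spelling; `ChatOpenAI` stands in for any chat model that implements `with_structured_output()`, and actually running it requires an API key:

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatOpenAI(model="gpt-4o-mini")  # assumes OPENAI_API_KEY is set
# "json_mode" (not "json_format") is the accepted method name; with JSON mode
# the prompt itself must also instruct the model to reply in JSON.
structured_llm = llm.with_structured_output(Person, method="json_mode")
```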

Co-authored-by: Harrison Chase <[email protected]>
Added missing docstrings. Formatted docstrings to the consistent format
(used in the API Reference).
- Fix bug with TypedDicts rendering inherited methods if inheriting from
  typing_extensions.TypedDict rather than typing.TypedDict
- Do not surface inherited pydantic methods for subclasses of BaseModel
- Subclasses of RunnableSerializable will not show methods inherited from
  Runnable or from BaseModel
- Subclasses of Runnable that are not pydantic models will include a link to
  RunnableInterface (they still show inherited methods; we can fix this
  later)
These do not work anyway. Let's remove them for now for simplicity.
…hain-ai#13200)

**Description:**
langchain offers three technologies to save data:
- [vectorstore](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [docstore](https://js.langchain.com/docs/api/schema/classes/Docstore)
- [record manager](https://python.langchain.com/docs/modules/data_connection/indexing)

If you want to combine these technologies in a simple persistence
strategy, you need a common implementation for each. `DocStore` already
provides `InMemoryDocstore`.

We propose the class `MemoryRecordManager` to complete the system.

This is the prelude to another pull request, which needs a consistent
combination of persistence components.
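
As a rough sketch of the combination this enables, an in-memory record manager can be paired with an in-memory vector store through the indexing API (the sketch uses `InMemoryRecordManager` from `langchain_core.indexing`, which fills the same role as the proposed class, and assumes the `langchain` package is installed for `index()`):

```python
from langchain.indexes import index
from langchain_core.documents import Document
from langchain_core.embeddings import FakeEmbeddings
from langchain_core.indexing import InMemoryRecordManager
from langchain_core.vectorstores import InMemoryVectorStore

# The record manager remembers which documents were already written, so
# re-running index() over the same docs does not duplicate them in the store.
record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()

vectorstore = InMemoryVectorStore(embedding=FakeEmbeddings(size=8))
docs = [Document(page_content="hello", metadata={"source": "a.txt"})]

index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")
```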

**Tag maintainer:**
@baskaryan

**Twitter handle:**
@pprados

---------

Co-authored-by: Eugene Yurtsev <[email protected]>
The return type of `json.loads` is `Any`.

Since the return type of `dumpd` is ultimately based on `json.loads`, the
correction here follows naturally.

Co-authored-by: Harrison Chase <[email protected]>
…anfanChatEndpoint, MiniMaxChat, ChatSparkLLM, ChatZhipuAI (langchain-ai#22853)

This PR updates root validators for:

- ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat,
ChatSparkLLM, ChatZhipuAI

Issues langchain-ai#22819

---------

Co-authored-by: Eugene Yurtsev <[email protected]>
…gchain-ai#23227)

Add streaming clarification to runnable lambda docstring.
….2 (langchain-ai#22094)

This change updates the requirements in
`libs/partners/pinecone/pyproject.toml` to allow all versions of
`pinecone-client` greater than or equal to 3.2.2.

This change resolves issue
[21955](langchain-ai#21955).

---------

Co-authored-by: Erick Friis <[email protected]>
Co-authored-by: Bagatur <[email protected]>
… Job (langchain-ai#23224)

**Description**: When ``sql_database.from_databricks`` is executed
from a Workflow Job, the ``context`` object does not have a
"browserHostName" property, resulting in an error. This change handles
the error so the "DATABRICKS_HOST" env variable value is used instead of
stopping the flow.
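
The fallback described above amounts to something like the following (a hedged sketch; the attribute and variable names mirror the description, not the actual patch):

```python
import os


def _resolve_databricks_host(context) -> str:
    # Interactive notebooks expose browserHostName; Workflow Jobs do not,
    # so fall back to the DATABRICKS_HOST environment variable instead of failing.
    host = getattr(context, "browserHostName", None)
    return host or os.environ["DATABRICKS_HOST"]
```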

Co-authored-by: lmorosdb <lmorosdb>
Code blocks need extra space around them to be rendered properly by
Sphinx.
…import error quick fix. (langchain-ai#22725)

Minor changes to module import error handling and fixes for minor issues in
the tutorial documents.

---------

Co-authored-by: Bagatur <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
If we're moving to documenting inherited params, I think these kinds of
tables become more important.

![Screenshot 2024-06-20 at 3 59 12 PM](https://github.com/langchain-ai/langchain/assets/22008038/722266eb-2353-4e85-8fae-76b19bd333e0)
…n-ai#23214)

- **Description:** `AsyncRootListenersTracer` supports on_chat_model_start;
its schema_format should be "original+chat".
  - **Issue:** N/A
  - **Dependencies:**
baskaryan and others added 29 commits July 2, 2024 17:43
Added missing docstrings. Formatted docstrings to the consistent form.
Added missing docstrings. Formatted docstrings to the consistent form.
- **Description:** Add interface to `ChatBaichuan` to support
asynchronous requests
    - `_agenerate` method
    - `_astream` method
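
With those methods in place, the model can be driven through the standard async entry points; a hedged usage sketch (requires Baichuan credentials, and parameter names may vary by version):

```python
import asyncio

from langchain_community.chat_models import ChatBaichuan


async def main() -> None:
    chat = ChatBaichuan(baichuan_api_key="...")  # credentials are placeholders
    reply = await chat.ainvoke("Hello!")  # backed by _agenerate
    print(reply.content)
    async for chunk in chat.astream("Tell me a joke"):  # backed by _astream
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```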

---------

Co-authored-by: ccurme <[email protected]>
This PR includes a fix for DeepInfra tool call parsing.
- Use pydantic to infer nested schemas and all that fun
- Include bagatur's convenient docstring parser
- Include annotation support


Previously we didn't adequately support many typehints in the
bind_tools() method on raw functions (like optionals/unions, nested
types, etc.)
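
For example, a raw function with optional and nested pydantic annotations should now produce a complete tool schema. The hedged sketch below inspects the generated schema with `convert_to_openai_tool` (which underlies many `bind_tools()` implementations) rather than calling a model:

```python
from typing import Optional

from langchain_core.utils.function_calling import convert_to_openai_tool
from pydantic import BaseModel


class Address(BaseModel):
    """A mailing address."""

    city: str
    zip_code: Optional[str] = None


def register_user(name: str, age: int, address: Optional[Address] = None) -> str:
    """Register a user, optionally with a nested address."""
    return f"registered {name}"


# Nested models and Optional/Union annotations now show up in the generated schema.
print(convert_to_openai_tool(register_user))
```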
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        to_ignore = {}
        for l in logs:
            if re.match("^(.*)\:(\d+)\: error:.*\[(.*)\]", l):
                path, line_no, error_type = re.match(
                    "^(.*)\:(\d+)\: error:.*\[(.*)\]", l
                ).groups()
                if (path, line_no) in to_ignore:
                    to_ignore[(path, line_no)].append(error_type)
                else:
                    to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
…23819)

enviroment -> environment


- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"
…angchain-ai#23818)

The MongoDB integration has some errors:
- `add_texts() -> List` returns a list of `ObjectId`, not a list of
  strings
- `delete()` with `id` never removes chunks
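
To illustrate the first point (a hedged snippet; the fix itself lives in the MongoDB vector store code):

```python
from bson import ObjectId

# pymongo reports inserted ids as ObjectId instances, but the vector store
# interface documents add_texts() as returning a list of string ids.
inserted_ids = [ObjectId(), ObjectId()]
string_ids = [str(_id) for _id in inserted_ids]  # what add_texts() should return
assert all(isinstance(_id, str) for _id in string_ids)
```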

---------

Co-authored-by: Eugene Yurtsev <[email protected]>
Added missing docstrings. Formatted docstrings to the consistent form.
…order to format an `ImagePromptTemplate` (langchain-ai#22613)

- Description: Add support for `path` and `detail` keys in
`ImagePromptTemplate`. Previously, only variables associated with the
`url` key were considered. This PR allows for the inclusion of a local
image path and a detail parameter as input to the format method.
- Issues:
    - fixes langchain-ai#20820 
    - related to langchain-ai#22024 
- Dependencies: None
- Twitter handle: @DeschampsTho5
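
A hedged sketch of a prompt that exercises the new keys; the exact block structure accepted by `ImagePromptTemplate` may differ slightly from this:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "human",
            [
                {"type": "text", "text": "Describe this image."},
                # "path" and "detail" are the keys added by this change;
                # previously only variables under "url" were picked up.
                {"type": "image_url", "image_url": {"path": "{image_path}", "detail": "{detail}"}},
            ],
        )
    ]
)
messages = prompt.format_messages(image_path="./cat.png", detail="low")  # placeholder local file
```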

---------

Co-authored-by: tdeschamps <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
Co-authored-by: Eugene Yurtsev <[email protected]>
to correctly display the link

Co-authored-by: Mu Xianming <[email protected]>
**Description**: The ``declarative_base()`` function is now available as
sqlalchemy.orm.declarative_base() (the old location is deprecated since 2.0;
background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9).
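
In other words, the import moves to the `sqlalchemy.orm` namespace:

```python
# SQLAlchemy 2.0: declarative_base() now lives in sqlalchemy.orm; importing it
# from sqlalchemy.ext.declarative is deprecated.
from sqlalchemy.orm import declarative_base

Base = declarative_base()
```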

---------

Co-authored-by: Isaac Francisco <[email protected]>
Co-authored-by: isaac hershenson <[email protected]>
… that is used as a key for caching chat models responses (langchain-ai#23842)

This PR should fix the following issue:
langchain-ai#23824
Introduced as part of this PR:
langchain-ai#23416

I am unable to reproduce the issue locally though it's clear that we're
getting a `serialized` object which is not a dictionary somehow.

The test below passes for me prior to the PR as well

```python

from langchain_core.language_models import FakeListChatModel  # import added for completeness


def test_cache_with_sqllite() -> None:
    from langchain_community.cache import SQLiteCache
    from langchain_core.globals import set_llm_cache

    cache = SQLiteCache(database_path=".langchain.db")
    set_llm_cache(cache)
    chat_model = FakeListChatModel(responses=["hello", "goodbye"], cache=True)
    assert chat_model.invoke("How are you?").content == "hello"
    assert chat_model.invoke("How are you?").content == "hello"
```
Update former pull request:
langchain-ai#22595.

Modified
`libs/partners/huggingface/langchain_huggingface/embeddings/huggingface_endpoint.py`,
where the API call function does not match current [Text Embeddings
Inference
API](https://huggingface.github.io/text-embeddings-inference/#/Text%20Embeddings%20Inference/embed).
One example is:
```json
{
  "inputs": "string",
  "normalize": true,
  "truncate": false
}
```
Parameters in `_model_kwargs` are not passed properly in the latest
version. As a side effect, the issue *why cause 413?*
(huggingface/text-embeddings-inference#50) might also be resolved.
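
A hedged sketch of a request that matches the TEI `/embed` schema shown above while still forwarding `_model_kwargs`-style options (the actual client code in `huggingface_endpoint.py` may differ):

```python
import requests


def embed(endpoint_url: str, texts: list[str], model_kwargs: dict | None = None) -> list[list[float]]:
    # TEI's /embed route takes "inputs" plus optional flags such as
    # "normalize" and "truncate"; extra kwargs are merged into the payload.
    payload = {"inputs": texts, **(model_kwargs or {})}
    response = requests.post(f"{endpoint_url}/embed", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()
```
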
@seeker-jie seeker-jie closed this Jul 10, 2024