
ChatOllama stream method raises warn_deprecated NotImplementedError #14980

Closed

v-byte-cpu opened this issue Dec 20, 2023 · 5 comments
Labels
🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature)
Ɑ: models (Related to LLMs or chat model modules)

Comments

@v-byte-cpu

System Info

langchain version: v0.0.352
python version: 3.11

Hi there! After PR #14713 was merged, I started getting errors from the stream() method:

File .../lib/python3.11/site-packages/langchain_core/_api/deprecation.py:295, in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)
    293 if not removal:
    294     removal = f"in {removal}" if removal else "within ?? minor releases"
--> 295     raise NotImplementedError(
    296         f"Need to determine which default deprecation schedule to use. "
    297         f"{removal}"
    298     )
    299 else:
    300     removal = f"in {removal}"

NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases

I guess this decorator must have a pending=True argument.
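For context, here is a simplified sketch of the branch that raises, reconstructed from the traceback above (not the library's exact code; parameter handling is abbreviated). With pending=True, the block that raises is skipped entirely:

# Simplified sketch of the failing branch in
# langchain_core._api.deprecation.warn_deprecated, reconstructed from the
# traceback above. Not the library's exact code.
def warn_deprecated_sketch(since: str, pending: bool = False, removal: str = "") -> None:
    if not pending:
        if not removal:
            # No removal version was supplied and the deprecation is not marked
            # as pending, so no schedule can be chosen and the helper raises.
            raise NotImplementedError(
                "Need to determine which default deprecation schedule to use. "
                "within ?? minor releases"
            )
        removal = f"in {removal}"
    # With pending=True (or an explicit removal version) execution continues to
    # emit an ordinary deprecation warning instead of raising.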

Who can help?

@hwchase17 @agola11

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

from langchain.chat_models import ChatOllama
llm = ChatOllama(
    model="openchat:7b-v3.5-1210-q4_K_M",
)
for chunk in llm.stream("Where were the Olympics held?"):
    print(chunk, end="", flush=True)

Expected behavior

Successful streaming output from the LLM.
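For reference, a minimal sketch of what working streaming output would look like, assuming each streamed chunk is an AIMessageChunk whose content attribute holds the text delta (and that the Ollama model above is available locally):

from langchain.chat_models import ChatOllama

llm = ChatOllama(model="openchat:7b-v3.5-1210-q4_K_M")

# Print only the text of each streamed chunk; the chunk itself is an AIMessageChunk.
for chunk in llm.stream("Where were the Olympics held?"):
    print(chunk.content, end="", flush=True)
print()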


dosubot bot commented Dec 20, 2023

🤖

It's great to see that you've identified a potential issue and a possible solution related to the stream() method in LangChain. Your observation about adding a pending=True argument to the decorator seems like a step in the right direction.

If you're interested in contributing to LangChain, I would encourage you to go ahead and create a pull request with your proposed fix. The maintainers will review your changes and determine if it aligns with the project's goals.

Thank you for your interest in improving LangChain!


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@dosubot added the Ɑ: models (Related to LLMs or chat model modules) and 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) labels on Dec 20, 2023
@tylertitsworth
Contributor

tylertitsworth commented Dec 28, 2023

I'm getting the same issue in my app. Chainlit offers a generic LangChain provider that streams responses from the model backend for a provided llm input. When I add support for updating slider inputs in a playground request like so:

result = await make_async(self.llm.stream)(
    input=messages, # User Prompt
    **request.prompt.settings # User Inputs like temperature, top_K, etc.
)

I get the error above.

When I downgrade to v0.0.350 I also receive the same error, so I am unsure if the issue is in the specific PR you mentioned.

The out-of-the-box GenericLangchainProvider in chainlit for v0.0.352 doesn't work because of this issue.

Edit: On followup testing, it appears that v0.0.349 is the release that is working for me.

langchain_ollama_stream_error.mp4

@romellfudi

romellfudi commented Jan 3, 2024

The issue still persists in the latest version. Please provide guidance on how to handle streaming in ChatOllama. @v-byte-cpu do you have any updates?

@tylertitsworth
Contributor

The issue was fixed in #15104.

@kesavan22

I'm facing this issue with GooglePalm in the latest version.

@dosubot added the stale label (Issue has not had recent activity or appears to be solved) on Apr 19, 2024
@dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) on Apr 26, 2024
@dosubot removed the stale label on Apr 26, 2024