langchain.schema.OutputParserException: Could not parse LLM output: `Thought: Do I need to use a tool? No #7480
Comments
The same!
I think I found a pretty good fix:
Once the current step is completed the …
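For reference, here is a minimal sketch of the kind of `llm_prefix` change discussed in this thread (a later summary describes it as a change to the `llm_prefix` property of the `ConversationalAgent` class). The exact original code is not reproduced in this excerpt, so the override below is only an assumption about its intent:

```python
from langchain.agents.conversational.base import ConversationalAgent


class PrefixFixedConversationalAgent(ConversationalAgent):
    @property
    def llm_prefix(self) -> str:
        # The stock ConversationalAgent injects a bare "Thought:" prefix before
        # each LLM call. Repeating the full question here (an assumption about
        # the fix's intent) nudges the model to continue with
        # "Do I need to use a tool? ..." in the shape the output parser expects.
        return "Thought: Do I need to use a tool?"
```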
Hi, @pradeepdev-1995, I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you are experiencing an error when using the langchain library to run a conversational agent. There have been comments from kashirin-dm and yousenwang, as well as a proposed fix from Ecolow involving a change to the llm_prefix property in the ConversationalAgent class. Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself or the issue will be automatically closed in 7 days. Thank you!
@dosubot You should probably update issue #1358 with my solution as well. I've been looking at the thread for months (I know) and they are still stuck at retrying rather than fixing the prefix like I did.
🤖 @baskaryan, could you please assist with the issue mentioned in #1358? The user has provided an update and it seems they are still encountering the same problem with the conversational agent. This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
I was struggling with the same issue. It happens mostly with weaker LLMs or models with a small context window. The solution I found by debugging LangChain internals was to repeat the exact format we expect the model to respond with; together with a "$ tip", this works much more reliably than just telling the model to "remember what I told you about the answer format". Sadly, I failed to find a good way to recover without calling the LLM again. Here is my error-handling prompt:

````python
def _handle_error(error: Exception) -> str:
    return """I could not parse your answer.
It is really important and you will get a 1000$ tip if you use the following answer format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of the tools mentioned before.
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
AI: [your response here]
```
Please answer again while complying with the rules and format just mentioned!
"""
````

and of course pass it in as the parsing-error handler:

````python
agent = initialize_agent(
    handle_parsing_errors=_handle_error,
    ...
)
````
System Info
langchain==0.0.219
python 3.9
Who can help?
No response
Information
Related Components
Reproduction
Trying the above code, but when I ask queries, it shows the error: 'langchain.schema.OutputParserException: Could not parse LLM output: `Thought: Do I need to use a tool? No'
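The original reproduction code is not included in this excerpt. Below is a minimal sketch of the kind of conversational-agent setup that can trigger this error; the model, tools, and memory configuration are assumptions for illustration only:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Assumed components: any chat model and tool set serve to illustrate the setup.
llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)

# Depending on how the model phrases its reply, this can raise:
# langchain.schema.OutputParserException: Could not parse LLM output:
# `Thought: Do I need to use a tool? No
agent.run("Hi, how are you today?")
```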
Expected behavior
The error should not occur