Fixed ValueError: Could not parse LLM output
#1707
Conversation
Still getting the error in File "/home/venv/lib/python3.10/site-packages/langchain/agents/conversational_chat/base.py", line 119, in _extract_tool_and_input, using this parser.
I think maybe we can adjust the prompt to avoid this error.
Any thoughts on how to improve the prompt? It seems to happen periodically for all agents with gpt-4.
Add a bunch of in-context examples? But there's still no guarantee, so a robust parser on top is a must, IMHO.
For some context, is there a reason there are so many different parsers? A lot of the complexity seems to come from the fact that there are different parsers. I'm not trying to be critical :). There just appears to be a slightly different issue for each of the parsers, and it seems like whack-a-mole.
This solved the issue for me, but the PR is out of sync with the latest langchain.
@alexprice12 I agree with you; it's basically a fast and ugly hack. In my project, I ended up replacing this agent with several custom chains, where each chain does its own little thing (but does it very well, each with few-shot examples).
I would suggest replacing Langchain's original one-step JSON with a two-step, JSON-free approach: first generate just the tool name, then generate what the tool needs, in different chains.
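A minimal sketch of that two-step idea, using a hypothetical `call_llm` stand-in (not the commenter's actual code), where each step's output is trivially easy to validate:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would call a real model.
    if "Which tool" in prompt:
        return "Telegram"
    return "Send a message to Alex"

TOOLS = ["Telegram", "Search", "Calculator"]

def choose_tool(question: str) -> str:
    # Step 1: a chain whose only job is to emit a tool name.
    prompt = (
        f"Which tool best answers the question below? "
        f"Answer with exactly one of: {', '.join(TOOLS)}\n\n{question}"
    )
    answer = call_llm(prompt).strip()
    # A single constrained token is easy to validate, unlike free-form JSON.
    if answer not in TOOLS:
        raise ValueError(f"Unknown tool: {answer}")
    return answer

def make_tool_input(question: str, tool: str) -> str:
    # Step 2: a separate chain generates the input for the chosen tool.
    prompt = f"Write the input for the {tool} tool to handle: {question}"
    return call_llm(prompt).strip()

tool = choose_tool("Tell Alex I'm running late")
tool_input = make_tool_input("Tell Alex I'm running late", tool)
```

The extra API call costs more, but neither step can produce malformed JSON, which is the whole point of the suggestion.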
@klimentij that makes perfect sense. Are you willing to share your code, or is it already on GitHub in your fork? The price for the API calls is the same, so I don't understand the design decision of doing more than one step in the same prompt, especially since langchain has "chain" in the name and isn't using chains properly in agents. It would be interesting to understand what led to that decision; there may be a good reason. @hwchase17 can you chime in? Thanks!
included changes from langchain-ai#1707
I am facing the same issue. I am using only one tool; do you think forcing the model to always use the tool would be a good idea? If yes, how can I modify base.py to do so? The issue is raised from class ConversationalAgent(Agent) [dist-packages/langchain/agents/conversational/base.py]
This error is happening for small, non-instruction-fine-tuned models (which means most non-commercial LLMs). I suggest that when the output cannot be parsed, the chain just return the latest model output.
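One way to sketch that suggestion is a wrapper that catches the parse failure and surfaces the raw model text as the final answer (`safe_parse` and `strict_parse` are hypothetical names for illustration):

```python
import json

def strict_parse(text: str) -> dict:
    # Hypothetical strict parser: expects well-formed JSON.
    data = json.loads(text)
    return {"action": data["action"], "action_input": data["action_input"]}

def safe_parse(parse, llm_output: str) -> dict:
    # Wrap any parser; on failure, return the latest model output
    # as a final answer instead of raising, as suggested above.
    try:
        return parse(llm_output)
    except ValueError:
        # json.JSONDecodeError subclasses ValueError, so it is caught here too.
        return {"action": "Final Answer", "action_input": llm_output.strip()}
```

For example, `safe_parse(strict_parse, "I cannot decide")` yields the raw text as a `Final Answer` rather than crashing the chain.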
I modified the prompts:

```python
FORMAT_INSTRUCTIONS = """RESPONSE FORMAT INSTRUCTIONS
----------------------------
When responding, please output a response in this format:

thought: Reason about what action to take next, and whether to use a tool.
action: The tool to use. Must be one of: {tool_names}
action_input: The input to the tool

For example:

thought: I need to send a message to xxx
action: Telegram
action_input: Send a message to xxx: I don't agree with that idea
"""
```

The agent then responds in that format.
The modified parser:

```python
import re
from typing import Any

def parse(self, text: str) -> Any:
    """
    More resilient version of the parser.

    Searches for lines beginning with 'action:' and 'action_input:',
    and ignores preceding lines.
    """
    cleaned_output = text.strip()
    # Find the line starting with 'action:'
    action_match = re.search("^action: (.*)$", cleaned_output, re.MULTILINE)
    if action_match is None:
        raise ValueError(f"Could not find valid action in {cleaned_output}")
    action = action_match.group(1)
    # 'action_input:' may span multiple lines, hence DOTALL
    action_input_match = re.search(
        "^action_input: (.*)", text, flags=re.MULTILINE | re.DOTALL
    )
    if action_input_match is None:
        raise ValueError(f"Could not find valid action_input in {cleaned_output}")
    action_input = action_input_match.group(1)
    return {"action": action, "action_input": action_input}
```
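To illustrate, here is the same regex logic applied standalone to a response following the format above (a self-contained sketch, outside the class):

```python
import re

text = (
    "thought: I need to send a message to xxx\n"
    "action: Telegram\n"
    "action_input: Send a message to xxx: I don't agree with that idea"
)

# MULTILINE anchors ^ at each line start; the 'thought:' line is ignored.
action = re.search(r"^action: (.*)$", text, re.MULTILINE).group(1)
# DOTALL lets the action input run to the end of the text, newlines included.
action_input = re.search(
    r"^action_input: (.*)", text, flags=re.MULTILINE | re.DOTALL
).group(1)
```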
I tried to solve this using semantic parsing as a failover. Would love to hear feedback: #2958
How would I be able to call this function from JsonAgent Toolkit? |
I see 'action input' in the response, not 'action_input'. Is there a way to make this fix more general?
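One way to generalize it (an illustrative variant, not the PR's code) is a pattern that tolerates a space or underscore between the words, and mixed case:

```python
import re

# Accepts "action_input", "action input", "Action Input", etc.
ACTION_INPUT_RE = re.compile(
    r"^action[ _]?input:\s*(.*)", re.MULTILINE | re.IGNORECASE | re.DOTALL
)

values = [
    ACTION_INPUT_RE.search(s).group(1)
    for s in ("action_input: hi", "action input: hi", "Action Input: hi")
]
```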
I am using the langchain agent. I passed handle_parsing_errors=True and this solved my issue.
This literally worked, thanks!
Check this, it worked for me:
I only avoid the error when using the 'gpt-4' model!
@klimentij Hi Klimentij, could you please resolve the merge conflicts? After that, ping me and I'll push this PR for review. Thanks!
Closing because the library's folder structure has changed since this PR was last worked on, and you're welcome to reopen if you get it in line with the new format! |
Sometimes a conversational agent produces noisy JSON with `action` and `action_input` keys that is hard to parse even after cleaning. This causes a `ValueError: Could not parse LLM output`. Related issues: #1657 #1477 #1358

I just added a simple fallback parsing that, instead of using `json.loads()`, relies on detecting `"action": "` and `"action_input": "` and parsing the value with `split()`. Also, in case the LLM failed to produce any JSON at all, I made the agent return `llm_output` as a `Final Answer` instead of raising an error.
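A rough sketch of what such a `split()`-based fallback could look like (an illustration of the approach described, not the PR's exact code):

```python
def fallback_parse(llm_output: str) -> dict:
    # Detect the key markers directly instead of using json.loads(),
    # so surrounding chatter or trailing garbage doesn't break parsing.
    if '"action": "' in llm_output and '"action_input": "' in llm_output:
        action = llm_output.split('"action": "')[1].split('"')[0]
        action_input = llm_output.split('"action_input": "')[1].split('"')[0]
        return {"action": action, "action_input": action_input}
    # No JSON at all: surface the raw output as the final answer.
    return {"action": "Final Answer", "action_input": llm_output.strip()}
```

Note this simple split cannot handle escaped quotes inside the value; it trades robustness on edge cases for never raising on noisy output.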