ValueError: Could not parse LLM output: #1358
Same here, can't figure it out.
In your case, the LLM should output:

For your example, I think this happens with these models because they are not trained to follow instructions; they are LLMs used for plain language modeling. OpenAI's GPT-3.5, on the other hand, is specifically trained to follow user instructions (like asking it to output the format I mentioned before: Thought, Action, Action Input, Observation, or Thought, {ai_prefix}). I tested it in my case. I hope this is clear!
I'm having the same issue. My code is here:

```python
@st.cache_resource
def create_tool(_index, chosen_pdf):
    tools = [
        Tool(
            name=f"{chosen_pdf} index",
            func=lambda q: str(_index.query(q)),
            description="Useful for answering questions about the given file",
            return_direct=True,
        ),
    ]
    return tools


@st.cache_resource
def create_agent(chosen_class, chosen_pdf):
    memory = ConversationBufferMemory(memory_key="chat_history")
    llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
    index = get_index(chosen_class, chosen_pdf)
    tools = create_tool(index, chosen_pdf)
    agent = initialize_agent(
        tools, llm, agent="conversational-react-description", memory=memory
    )
    return agent


def query_gpt_memory(chosen_class, chosen_pdf, query):
    agent = create_agent(chosen_class, chosen_pdf)
    res = agent.run(input=query)
    st.session_state.memory = agent.memory.buffer
    return res
```

However, I still get a response even with this value error.
+1

+1

I'm having the same problem with the OpenAIChat llm.

Yes, same issue here.
Just to understand clearly, is there a correct way of going about this? I am also getting the same error even when I use the approach above. Or should we wait for an update?
Just boosting this thread: this is a common issue in my builds with langchain, but I also utilize PromptLayer, so I can see that the outputs are indeed parsable by the API that routes my convo logs. Clearly something is wrong, and I have yet to find a foolproof solution for avoiding this.
+1 This seems to be a common issue with chat agents, which are the need of the hour!

+1, same issue here.

Same here...
I was able to fix this locally by simply calling the LLM again when there is a parse error. I am sure my code is not exactly in the spirit of langchain, but if anyone wants to take the time to review my branch https://github.com/tomsib2001/langchain/tree/fix_parse_LLL_error (it's a POC, not a finished MR) and tell me what they think, I'd appreciate it.
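A minimal standalone sketch of this retry-on-parse-error idea, with no langchain dependency; all names here (`parse_action`, `run_with_retry`, `fake_llm`) are illustrative, not langchain APIs:

```python
import re

class OutputParserError(ValueError):
    pass

def parse_action(text):
    """Extract 'Action: ...' and 'Action Input: ...' from an LLM reply."""
    match = re.search(r"Action\s*:\s*(.*?)\nAction Input\s*:\s*(.*)", text, re.DOTALL)
    if match is None:
        raise OutputParserError(f"Could not parse LLM output: {text!r}")
    return match.group(1).strip(), match.group(2).strip()

def run_with_retry(llm, prompt, max_retries=3):
    """Re-invoke the LLM until its output parses, up to max_retries times."""
    for attempt in range(max_retries):
        try:
            return parse_action(llm(prompt))
        except OutputParserError:
            if attempt == max_retries - 1:
                raise
            # Remind the model of the expected format and try again.
            prompt += "\nAnswer as:\nAction: <tool>\nAction Input: <input>"

# Stand-in LLM: ignores the format once, then follows it.
replies = iter(["Sure! The answer is 2.", "Action: Calculator\nAction Input: 1+1"])
def fake_llm(prompt):
    return next(replies)

print(run_with_retry(fake_llm, "what is 1+1"))  # ('Calculator', '1+1')
```

The drawback, as with all retry approaches in this thread, is an extra LLM call whenever the first reply is malformed.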
I have the same problem.

Same problem.

I have the same problem. I don't know what the reason can be, when the output that causes the error contains a well-formulated answer.
This is super hacky, but while we don't have a solution for this issue, you can use this:
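One common hack in this spirit is to catch the parse error and salvage the raw text from the exception message, since the unparsable output is often a perfectly good answer. A standalone sketch with hypothetical helper names (not the poster's actual snippet):

```python
def run_agent_safely(run, query):
    """Call the agent and, if output parsing fails, salvage the raw LLM
    text from the exception message instead of crashing."""
    try:
        return run(query)
    except ValueError as e:
        msg = str(e)
        prefix = "Could not parse LLM output: "
        if prefix in msg:
            # The unparsable text is often a perfectly good final answer.
            return msg.split(prefix, 1)[1].strip("`")
        raise

# Stand-in for agent.run that raises the familiar error.
def fake_run(query):
    raise ValueError("Could not parse LLM output: Assistant, how can I help you today?")

print(run_agent_safely(fake_run, "hi"))  # Assistant, how can I help you today?
```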
Might be anecdotal, but I think GPT does better with JSON-type formatting; maybe that will help with the formatting issues? I made a couple of changes on my local copy of langchain and it seems to work much more reliably.
@noobmaster19 can you share what changes you made to make things more reliable?

Are there any HuggingFace models that work as an agent, or are we forced to use OpenAI?
It is possible to pass an output parser to the agent executor. Here is how I did it:

````python
import json
import re
from typing import Any

class NewAgentOutputParser(BaseOutputParser):
    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

    def parse(self, text: str) -> Any:
        print("-" * 20)
        cleaned_output = text.strip()
        # Regex patterns to match action and action_input
        action_pattern = r'"action":\s*"([^"]*)"'
        action_input_pattern = r'"action_input":\s*"([^"]*)"'
        # Extract the first action and action_input values
        action = re.search(action_pattern, cleaned_output)
        action_input = re.search(action_input_pattern, cleaned_output)
        # Default to None so the check below never hits an unbound variable
        action_value = action.group(1) if action else None
        action_input_value = action_input.group(1) if action_input else None
        if action_value:
            print(f"First Action: {action_value}")
        else:
            print("Action not found")
        if action_input_value:
            print(f"First Action Input: {action_input_value}")
        else:
            print("Action Input not found")
        print("-" * 20)
        if action_value and action_input_value:
            return {"action": action_value, "action_input": action_input_value}
        # Problematic code left just in case
        if "```json" in cleaned_output:
            _, cleaned_output = cleaned_output.split("```json")
        if "```" in cleaned_output:
            cleaned_output, _ = cleaned_output.split("```")
        if cleaned_output.startswith("```json"):
            cleaned_output = cleaned_output[len("```json"):]
        if cleaned_output.startswith("```"):
            cleaned_output = cleaned_output[len("```"):]
        if cleaned_output.endswith("```"):
            cleaned_output = cleaned_output[: -len("```")]
        cleaned_output = cleaned_output.strip()
        response = json.loads(cleaned_output)
        return {"action": response["action"], "action_input": response["action_input"]}
        # end of problematic code


def make_chain():
    memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True
    )
    agent = ConversationalChatAgent.from_llm_and_tools(
        llm=ChatOpenAI(),
        tools=[],
        system_message=SYSTEM_MESSAGE,
        memory=memory,
        verbose=True,
        output_parser=NewAgentOutputParser(),
    )
    agent_chain = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=True,
    )
    return agent_chain
````
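For reference, the regex extraction at the heart of a parser like this can be exercised standalone; `extract_action` is an illustrative helper, not langchain code:

```python
import re

def extract_action(text):
    """Pull "action"/"action_input" out of an LLM reply, tolerating any
    extra prose around the JSON blob."""
    action = re.search(r'"action":\s*"([^"]*)"', text)
    action_input = re.search(r'"action_input":\s*"([^"]*)"', text)
    if action and action_input:
        return {"action": action.group(1), "action_input": action_input.group(1)}
    return None  # caller can fall back to stricter JSON parsing

reply = 'Sure, here you go: {"action": "Final Answer", "action_input": "2"}'
print(extract_action(reply))  # {'action': 'Final Answer', 'action_input': '2'}
```

Because the regexes only need the two key/value pairs to appear somewhere in the text, this survives the chatty preambles that break a strict `json.loads`.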
@Saik0s
@gambastyle I made some modifications to the system message, but that is not related to the problem. You can find the default system message here. I just wanted to show my approach to fixing the problem by introducing an upgraded output parser.
Everyone seems to have missed klimentij's fix in this thread? Having a prompt do more than one thing and then parsing the output is a bad idea with LLMs; that is why chains exist, and they are not being properly used in this agent. See code and comments here. @Saik0s did you check klimentij's parser changes, and how do your changes compare?
@tiagoefreitas I checked it; his approach is similar to what I did, and he did it directly in the library code. I would even say his approach is better, because there are no regular expressions in it 😅
When I run this simple example, where llm is a custom wrapper of Azure OpenAI with some authorization logic, implemented by overriding the LLM _call method to return only the message.content str, it always throws the following OutputParserException. How can I fix it?
Same as the error encountered by @choupijiang. Here is my error log, and below is my script:

```python
tools = load_tools(
    ["llm-math"],
    llm=self.llm,
)
agent_chain = initialize_agent(
    tools,
    self.llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
answer = agent_chain.run("what is 1+1")
print(answer)
```

Does anyone know how to solve this?
I had the same problem. All I had to do to resolve it was tell the LLM to return the "Final Answer" in JSON format. Here is my code:
Hi, I found a temporary fix. I added this code in ~/anaconda3/envs/llm/lib/python3.9/site-packages/langchain/agents/mrkl/output_parser.py at line 42:

```python
if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
    # custom change 1
    text = "Action: " + text
```
As others have pointed out, the root cause is that the LLM is ignoring the formatting instructions, so the real solution is to use a better LLM.
Agree with you. But until then this will work, as it lets us access the final thought from the agent without breaking the pipeline. Please let me know if this fix causes any other issues.
Has anyone else tried adding this? When I run the code with those lines, I don't get the error anymore. Very strange...
+1 `raise OutputParserException(f"Could not parse LLM output: {text}")`
This is a lazy way, but what I did was modify my initial prompt. I added the sentence: "You must return a final answer, not an action. If you can't return a final answer, return 'none'."

My best.
You can solve this issue by setting agent = 'chat-conversational-react-description'. Your code after the fix will be:

```python
agent_chain = initialize_agent(
    tools=tools,
    llm=HuggingFaceHub(repo_id="google/flan-t5-xl"),
    agent="chat-conversational-react-description",
    memory=memory,
    verbose=False,
)
```
For me, I am getting a parsing error if I initialize the agent as:

But no error if I use ConversationalChatAgent instead:
Any updates?
A simple fix to this problem is to make sure you specify the JSON down to every last key/value combination. I was struggling with this for over a week, as my use case was quite rare: a ReAct agent with input, no vector store, and no tools at all. I fixed it with a simple prompt addition.
Basically, the error occurs because of the randomness of the LLM's output: when it does not strictly follow the instructions, parsing fails because we won't be able to find the expected fields.

Solutions:

I. Use a tool when the output doesn't parse. One way I can think of is to use a specific tool, e.g. search, to handle the error (parse the output with the search tool; because it's search, it will almost always return something). Or use a "user input tool" to ask the user to specify more details. The drawback is that the output might not be that meaningful.

II. Retry parser. Another way is to use an LLM to handle the LLM's error; I found this page, and it looks like one solution would be to use a retry output parser.

III. Parse the exception and retry: https://github.com/langchain-ai/langchain/blob/97a91d9d0d2d064cef16bf34ea7ca8188752cddf/libs/langchain/langchain/agents/output_parsers/react_single_input.py

IV. Change to JSON format output and parser: https://github.com/langchain-ai/langchain/blob/97a91d9d0d2d064cef16bf34ea7ca8188752cddf/libs/langchain/langchain/agents/output_parsers/react_json_single_input.py

V. Fine-tune. My final thought is fine-tuning; I did not try this method yet, but ideally we could fine-tune with example inputs and answers (in the correct format you want) besides the prompt, to make the LLM behave :)
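The retry-parser idea (option II above) can be sketched without langchain; `fix_llm` below stands in for a second LLM call that rewrites the bad output, and all names are illustrative:

```python
import json

def parse_json_action(text):
    """Try to read {"action": ..., "action_input": ...} from the reply."""
    try:
        blob = json.loads(text)
        return blob["action"], blob["action_input"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

def parse_with_retry(text, fix_llm):
    """If parsing fails, ask a second model to rewrite the output into the
    expected JSON, then parse that (the 'retry parser' idea)."""
    parsed = parse_json_action(text)
    if parsed is None:
        parsed = parse_json_action(fix_llm(
            "Rewrite this as JSON with keys action and action_input:\n" + text
        ))
    return parsed

# Stand-in fixer that always returns well-formed JSON.
fixer = lambda prompt: '{"action": "Final Answer", "action_input": "2"}'
print(parse_with_retry("the answer is 2", fixer))  # ('Final Answer', '2')
```

The trade-off is the same as with any retry scheme: one extra LLM call per malformed reply, in exchange for not crashing the chain.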
I was using the ReAct-chat prompt instructions, and before that the normal ReAct prompt instructions. The problem was that this part always appends a "Thought:" after an observation happens. You can also see this in the LangSmith debugging. BUT the big problem here is that the ReAct output parsers (the old MRKL one and the new one) don't work with this behaviour natively: they search for "Thought: ..." in the answer generated by the LLM. Since the AgentExecutor (or some other part) has already appended it to the message before, the output parser fails to recognize the format and throws an error. In my case I solved this (hard-coded) by changing the suffix to:
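One way to sketch a parser that tolerates this behaviour (illustrative only, not langchain's actual ReAct parser) is to strip any leading "Thought:" before matching the sections:

```python
import re

def parse_react(text):
    """Strip a leading 'Thought:' (which the executor may have already
    appended to the transcript) before matching the ReAct sections."""
    text = re.sub(r"^\s*Thought\s*:", "", text).strip()
    match = re.search(r"Action\s*:\s*(.*?)\nAction Input\s*:\s*(.*)", text, re.DOTALL)
    if match:
        return {"action": match.group(1).strip(), "input": match.group(2).strip()}
    final = re.search(r"Final Answer\s*:\s*(.*)", text, re.DOTALL)
    if final:
        return {"final": final.group(1).strip()}
    raise ValueError(f"Could not parse LLM output: {text!r}")

print(parse_react("Thought: I need a calculator\nAction: Calculator\nAction Input: 1+1"))
```

This way the same parser handles replies whether or not the "Thought:" prefix survived into the model's answer.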
Where do we edit this?
Try adding to the prompt that the final answer must be given in Markdown format.
Ok, so, I've been watching this thread since its inception and I thought you would have found my solution at #7480, but you guys keep creating more and more ways of retrying the same request rather than fixing the actual problem, so I'm obliged to re-post this:
It did solve this issue for me in most cases using Llama 2. Good luck and keep going.
Update to:

It works for me. Thanks.
@Mohamedhabi Thanks for your suggestion. Also, for anyone who might need it: I'm trying to work on a similar setup. After enabling it and digging through the source code, I found what was going on. BTW, you should never give your local file the same name as a module you import.
```python
agent_chain = initialize_agent(
    tools=tools,
    llm=HuggingFaceHub(repo_id="google/flan-t5-xl"),
    agent="conversational-react-description",
    memory=memory,
    verbose=False,
)
agent_chain.run("Hi")
```

throws an error. This happens with Bloom as well. Only the agent with OpenAI is working well.

```
_(self, inputs, return_only_outputs)
    140 except (KeyboardInterrupt, Exception) as e:
    141     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 142     raise e
    143 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
...
---> 83 raise ValueError(f"Could not parse LLM output: `{llm_output}`")
     84 action = match.group(1)
     85 action_input = match.group(2)

ValueError: Could not parse LLM output: Assistant, how can I help you today?
```