Fix issue #2237: Properly handle LLM output with both Action and Final Answer #2238
base: main
Conversation
…l Answer Co-Authored-By: Joe Moura <[email protected]>
Disclaimer: This review was made by a crew of AI Agents.

Code Review for PR #2238

Overview
This PR introduces significant enhancements to the agent's LLM response parsing.

Key Code Changes

General Recommendations

Conclusion
The enhancement made in PR #2238 provides a robust solution for processing language-model outputs more effectively. Prioritizing 'Action' over 'Final Answer' will lead to more appropriate responses from the agent, and the extensive unit tests reflect a commitment to quality and reliability. I recommend merging this PR once the outlined suggestions have been implemented, particularly regarding type annotations and enhanced error handling. This will bolster the overall quality and robustness of the codebase while ensuring the improvements deliver their intended benefits.

Related PR Insights
For more context on previous discussions and iteration strategies, consider reviewing past PRs that focused on similar parsing logic and error-handling enhancements; those discussions may offer valuable insight into ongoing efforts to optimize agent behavior.

Thank you for your diligent work on this PR; looking forward to the implementation of these suggestions!
I think I've found the cause. In version 0.95, non-compliant responses are truncated as a fallback. However, in my LLM implementation the supports_stop_words function returns true, so no truncation is applied, and the error occurs.
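A minimal sketch of the behavior described above, under the assumption that truncation of non-compliant responses is gated on stop-word support. Function and marker names here are illustrative, not the actual crewAI API:

```python
# Hypothetical sketch: in v0.95, a response containing both an Action and a
# Final Answer is only truncated when the LLM does NOT support stop words.
# If supports_stop_words is true, the raw text passes through unchanged,
# which is the bug path described in the comment above.
FINAL_ANSWER_MARKER = "Final Answer:"

def handle_noncompliant_response(text: str, supports_stop_words: bool) -> str:
    """Truncate at the Final Answer marker when the model lacks native
    stop-word support; otherwise return the text unchanged."""
    if supports_stop_words:
        # No post-processing: a mixed Action/Final Answer response slips through.
        return text
    marker = text.find(FINAL_ANSWER_MARKER)
    if marker != -1 and "Action:" in text[:marker]:
        # Drop everything from the Final Answer marker onward so only the
        # Action remains for the parser.
        return text[:marker].rstrip()
    return text
```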
This PR fixes issue #2237, where an agent that tries to both perform an Action and give a Final Answer in the same response causes an error. The fix updates the _process_llm_response method to handle this case gracefully.
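The idea can be sketched as follows. This is an illustrative reconstruction, not the verbatim crewAI code: when both markers appear, the Action is prioritized and the premature Final Answer is discarded, so the agent executes the tool before answering.

```python
import re

# Illustrative sketch (assumed names) of prioritizing Action over Final
# Answer when the model emits both in a single response.
def process_llm_response(text: str) -> tuple[str, str]:
    has_action = re.search(r"^Action\s*:", text, re.MULTILINE)
    final = re.search(r"^Final Answer\s*:", text, re.MULTILINE)
    if has_action and final and has_action.start() < final.start():
        # Keep only the Action portion; the agent must run the tool first.
        return ("action", text[: final.start()].rstrip())
    if final:
        return ("final_answer", text[final.end():].strip())
    if has_action:
        return ("action", text.rstrip())
    raise ValueError("Response contains neither an Action nor a Final Answer")
```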
Link to Devin run: https://app.devin.ai/sessions/4179745b470946018f18c9e2928de962