Add LLMCheckerChain #281
Conversation
So I'm down to check this in as is because I think it's great, but one thing I'd like to think about before checking this in: how general could we make this? E.g., what is the most general prompt/chain that I could have that would work with this?
E.g., right now the use case here is that you ask a generic question in the first prompt. Are there other good use cases for this?
In the most general case, I think this verification becomes an agent, not a chain? You probably want to have an agent list its assumptions and then verify them with its tools, e.g. go to Wikipedia to look up a list of mammals. This actually looks a lot like a parallel version of self-ask?
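The "parallel self-ask" framing above could be sketched roughly as below. Everything here is hypothetical: `wikipedia_lookup` stands in for a real agent tool, and its canned responses are for illustration only.

```python
# Sketch of parallel assumption verification, with a stubbed lookup tool.
from concurrent.futures import ThreadPoolExecutor


def wikipedia_lookup(claim: str) -> str:
    # Stub tool: pretend we consulted Wikipedia (e.g. its list of mammals).
    facts = {"sharks are mammals": "False: sharks are cartilaginous fish."}
    return facts.get(claim.lower(), "No contradicting evidence found.")


def verify_assumptions(assumptions: list[str]) -> dict[str, str]:
    # Each assumption is independent, so the lookups can run in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(wikipedia_lookup, assumptions)
    return dict(zip(assumptions, results))


checked = verify_assumptions(["Sharks are mammals", "Birds lay eggs"])
print(checked)
```

In an agent setting, the model would first list its assumptions and then fan each one out to a tool call like this, rather than checking them one at a time as self-ask does.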
Oh, I like that idea. In that framing the checkers are the tools then, right? So what part of this chain is that checker/tool? Not the whole thing, because it also contains the question. Maybe the last two parts at the end?
Right, so I guess there are two options:
Maybe we do both? #1 used when compute/latency isn't as much of an issue, #2 more widely used to make things a bit more correct?
What would (1) look like exactly? Would we change the prompt of the agent to tell it to answer the question and then use a tool to fact-check? What would that tool look like? Would that tool basically be the prompts here? Is there a common prerequisite for both (1) and (2), namely the tool/chain that does the verification? And in (2) it's just a simple sequence of original LLM -> verification chain, and in (1) it's a tool the agent has access to?
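The shared prerequisite described above might look something like the sketch below: a stand-alone verification step (draft answer -> list assumptions -> check each -> revise) that option (2) could call as a chained step and option (1) could wrap as an agent tool. The `llm` stub and `check_assertions` helper are invented for illustration, with canned responses in place of a real model.

```python
# Minimal sketch of the verification chain, assuming a stubbed `llm` callable.
def llm(prompt: str) -> str:
    # Stub LLM: canned responses keyed on the prompt, for illustration only.
    if "List the assumptions" in prompt:
        return "- A shark is a mammal"
    if "is this true" in prompt:
        return "False. Sharks are fish, not mammals."
    return ("No mammal lays large eggs; among mammals, monotremes "
            "such as the echidna lay (small) eggs.")


def check_assertions(question: str, draft_answer: str) -> str:
    """Verification step: extract assumptions, check each, then revise."""
    assumptions = llm(f"List the assumptions behind: {draft_answer}")
    checks = "\n".join(
        llm(f"{line.lstrip('- ')} — is this true?")
        for line in assumptions.splitlines()
    )
    return llm(
        f"Question: {question}\nDraft: {draft_answer}\n"
        f"Checked assumptions:\n{checks}\nGive a revised answer."
    )


revised = check_assertions(
    "What mammal lays the biggest eggs?",
    "The shark lays the biggest eggs.",
)
print(revised)
```

Under this framing, (2) is just `llm(question)` piped into `check_assertions`, while (1) exposes `check_assertions` as one tool among several that the agent can choose to invoke.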
Scoping this PR to a chain for now. Ready for review!
Implementation of https://github.com/jagilley/fact-checker. Works pretty well.
Verifying this manually:
cc @hwchase17