Is there no chain for question answer with sources and memory? #1246
Comments
I have accomplished what you are trying to do by using a conversational agent and providing a QA-with-sources chain as a tool to that agent. It's very similar to this example in the help docs: https://langchain.readthedocs.io/en/latest/modules/memory/examples/conversational_agent.html Tools are pretty easy to define, so if you already have a working QA chain you should be able to adapt the examples.
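A rough sketch of that setup, assuming the LangChain 0.0.x-era APIs (`initialize_agent`, `RetrievalQAWithSourcesChain`) and a placeholder FAISS index; the tool wraps the QA-with-sources chain while the agent owns the conversation memory:

```python
from langchain.agents import Tool, initialize_agent
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import FAISS

llm = OpenAI(temperature=0)

# Placeholder index; in practice this is whatever vector store backs your QA chain.
docsearch = FAISS.from_texts(
    ["LangChain agents can call tools.", "Chains can return cited sources."],
    OpenAIEmbeddings(),
    metadatas=[{"source": "doc-1"}, {"source": "doc-2"}],
)

qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, chain_type="stuff", retriever=docsearch.as_retriever()
)

def answer_with_sources(query: str) -> str:
    # Return the answer plus its sources so the agent can surface both.
    result = qa_chain({"question": query}, return_only_outputs=True)
    return f"{result['answer']}\nSOURCES: {result['sources']}"

tools = [
    Tool(
        name="Docs QA",
        func=answer_with_sources,
        description="Answers questions about the indexed documents and cites their sources.",
    )
]

# The agent, not the QA chain, carries the chat history.
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
    tools, llm, agent="conversational-react-description", memory=memory, verbose=True
)

agent.run("What do the indexed documents say, and which sources back that up?")
```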
When I try to do this, my agent never refers to the sources I provide in my QA-with-sources tool. How did you pass your input documents? I think I am doing it wrong. Here's what I am trying to do:
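For reference, a minimal sketch of the usual way documents are passed into `load_qa_with_sources_chain` (placeholder documents and question, not the snippet from the comment above); the sources only show up if each `Document` carries a `source` entry in its metadata:

```python
from langchain.chains import load_qa_with_sources_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

# Each document needs a "source" in its metadata for the SOURCES line to work.
docs = [
    Document(page_content="The project ships a QA-with-sources chain.",
             metadata={"source": "readme.md"}),
    Document(page_content="Memory stores prior turns of the conversation.",
             metadata={"source": "docs/memory.md"}),
]

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
result = chain(
    {"input_documents": docs, "question": "What does the project ship?"},
    return_only_outputs=True,
)
print(result["output_text"])  # answer text ending in a "SOURCES: ..." line
```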
Can confirm: not getting a source output with this setup either; the sources never appear in the outputs.
I had the same problem. It worked when I used a custom prompt; this is possibly because of the chain's default prompt.
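As an illustration of that workaround (the exact prompt the commenter used isn't shown), here is a custom prompt for the stuff-type chain that explicitly demands a SOURCES line; `summaries` and `question` are the variable names the QA-with-sources stuff chain expects:

```python
from langchain.chains import load_qa_with_sources_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

template = """Answer the question using only the extracts below.
Always end your reply with a line of the form "SOURCES: <comma-separated sources>".

{summaries}

Question: {question}
Answer:"""

prompt = PromptTemplate(template=template, input_variables=["summaries", "question"])

# Override the default prompt of the stuff-type QA-with-sources chain.
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=prompt)
```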
I was able to accomplish what I wanted using the ConversationalRetrievalChain. According to the documentation itself, "The only difference between this chain and the RetrievalQAChain is that this allows for passing in of a chat history which can be used to allow for follow up questions."
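A sketch of that approach on the 0.0.x API, with a placeholder FAISS index; note that once `return_source_documents=True` the chain returns more than one output, so the memory needs `output_key="answer"` to know what to store:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import FAISS

docsearch = FAISS.from_texts(
    ["The chain accepts chat history.", "Sources are attached as document metadata."],
    OpenAIEmbeddings(),
    metadatas=[{"source": "guide.md"}, {"source": "faq.md"}],
)

# output_key tells the memory which of the chain's outputs to save.
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="answer"
)

qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    retriever=docsearch.as_retriever(),
    memory=memory,
    return_source_documents=True,
)

result = qa({"question": "Does the chain accept chat history?"})
print(result["answer"])
for doc in result["source_documents"]:
    print(doc.metadata["source"])
```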
I have tried using memory inside load_qa_with_sources_chain, but it throws an error; it works fine with load_qa_chain. Is there no other way to do this than creating a custom chain?
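The error is probably the memory complaining about the chain having more than one input key (`input_documents` and `question`). One workaround sketch, assuming the loader forwards extra kwargs such as `memory=` to the underlying chain on the 0.0.x line, is to point the memory at the `question` input and fold a `chat_history` variable into a custom prompt:

```python
from langchain.chains import load_qa_with_sources_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

docs = [
    Document(page_content="Memory stores prior turns.", metadata={"source": "docs/memory.md"}),
    Document(page_content="Sources come from document metadata.", metadata={"source": "readme.md"}),
]

template = """Given the conversation so far and the extracts below, answer the question.
Always end with a line of the form "SOURCES: <comma-separated sources>".

Chat history:
{chat_history}

{summaries}

Question: {question}
Answer:"""

prompt = PromptTemplate(
    template=template, input_variables=["summaries", "question", "chat_history"]
)

# input_key stops the memory from tripping over the extra "input_documents" input.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="question")

chain = load_qa_with_sources_chain(
    OpenAI(temperature=0), chain_type="stuff", prompt=prompt, memory=memory
)

result = chain(
    {"input_documents": docs, "question": "Where do sources come from?"},
    return_only_outputs=True,
)
print(result["output_text"])
```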