
Is there no chain for question answer with sources and memory? #1246

Closed
shreyabhadwal opened this issue Feb 23, 2023 · 5 comments

Comments

@shreyabhadwal

I have tried passing memory to load_qa_with_sources_chain, but it throws an error. The same setup works fine with load_qa_chain. Is there no way to do this other than creating a custom chain?

@brandco

brandco commented Feb 23, 2023

I have accomplished what you are trying to do using a conversational agent and providing a QA-with-sources chain as a tool to that agent. It's very similar to this example in the help docs: https://langchain.readthedocs.io/en/latest/modules/memory/examples/conversational_agent.html

Tools are pretty easy to define, so if you already have a working qa chain you should be able to adapt examples here:
https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_tools.html

@shreyabhadwal
Author

shreyabhadwal commented Feb 24, 2023

When I try this, my agent never refers to the sources I provide through my QA-with-sources tool. How did you pass your input documents? I think I am doing it wrong.

Here's what I am trying to do:

chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce")
chain(
    {"input_documents": search_index.similarity_search(question1, k=4), "question": question1},
    return_only_outputs=True,
)

tools = [Tool(name="QASourcesChain", func=chain.run, description="use to answer every question")]

memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools, llm, agent="conversational-react-description", verbose=True, memory=memory
)
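One likely issue with the snippet above: `func=chain.run` means the agent passes only the question string to the QA chain, so the chain never receives `input_documents` — which would explain why the sources are never consulted. The tool function probably needs to do the retrieval itself before calling the chain. A minimal pure-Python sketch of that wiring (the `fake_*` functions below are hypothetical stand-ins for `search_index.similarity_search` and the QA-with-sources chain, not LangChain APIs):

```python
def fake_similarity_search(query, k=4):
    # Stand-in for search_index.similarity_search: returns k documents.
    return [f"doc-{i} about {query}" for i in range(k)]

def fake_qa_with_sources(inputs):
    # Stand-in for chain({...}): echoes what it received so the wiring is visible.
    return {"output_text": f"answered {inputs['question']!r} "
                           f"from {len(inputs['input_documents'])} docs"}

def qa_tool_func(query):
    # The tool function retrieves documents itself, then calls the chain
    # with both "input_documents" and "question" -- the part chain.run skips.
    docs = fake_similarity_search(query, k=4)
    return fake_qa_with_sources(
        {"input_documents": docs, "question": query}
    )["output_text"]

print(qa_tool_func("what is foo?"))
```

With the real objects, the same shape would be `func=lambda q: chain({"input_documents": search_index.similarity_search(q, k=4), "question": q}, return_only_outputs=True)["output_text"]`.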

@chris-aeviator

chris-aeviator commented Mar 8, 2023

I can confirm I'm not getting a source output with load_qa_with_sources_chain, even though I can see the source metadata in my documents.

loader = DirectoryLoader('knowledge/', glob="*.md")
knowledge = loader.load()
embeddings = HuggingFaceEmbeddings()
docsearch = Chroma.from_documents(
    knowledge, embeddings, persist_directory="chroma-db",
    metadatas=[{"source": str(i)} for i in range(len(knowledge))],
)
llm = HuggingFacePipeline(pipeline=pipe)
docs = docsearch.similarity_search(query)
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
result = chain({"input_documents": docs, "question": query}, return_only_outputs=False)

which outputs:

{ 'output_text': ' Re-use existing images'}

@faz-cxr

faz-cxr commented Mar 29, 2023

I had the same problem. It worked when I used a custom prompt. This is possibly because the default prompt of load_qa_chain differs from that of load_qa_with_sources_chain. Here's an example you could try:

template = """You are an AI chatbot having a conversation with a human. Given the following extracted parts of a long document and a question, create a final answer.  
ALWAYS return a "SOURCES" part in your answer.
The "SOURCES" part should be a reference to the sources in the documents from which you got your answer.
Example of your response should be:

---
The answer is foo

SOURCES: 
- xyz
---

=====BEGIN DOCUMENT=====
{summaries}
=====END DOCUMENT=====

=====BEGIN CONVERSATION=====
{chat_history}
Human: {human_input}
AI:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input", "summaries"],
    template=template,
)

memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
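One pitfall with this setup: the template's placeholders have to line up exactly with the keys the chain will receive — `summaries` (the stuff chain's document variable), plus the `memory_key` and `input_key` configured on the memory. A quick stdlib check (no LangChain required; the shortened `template` here is illustrative) that a format-style template declares exactly the variables you expect:

```python
from string import Formatter

def template_variables(tmpl):
    # Extract the {placeholder} names a format-style template declares.
    return {field for _, field, _, _ in Formatter().parse(tmpl) if field}

template = (
    "Given the following extracted parts of a long document and a question, "
    "create a final answer with a SOURCES section.\n\n"
    "{summaries}\n\n{chat_history}\nHuman: {human_input}\nAI:"
)

# These must match memory_key="chat_history", input_key="human_input",
# and the stuff chain's document variable "summaries".
print(template_variables(template))
```

If the sets disagree (say, the template uses `{question}` but the memory's `input_key` is `human_input`), the chain raises a missing-variable error or silently drops the input.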

@shreyabhadwal
Author

shreyabhadwal commented Mar 30, 2023

I was able to accomplish what I wanted using the ConversationalRetrievalChain.

According to the documentation itself, "The only difference between this chain and the RetrievalQAChain is that this allows for passing in of a chat history which can be used to allow for follow up questions."
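For anyone landing here: conceptually, that chain condenses the chat history and the new question into a standalone question, retrieves documents for it, and then answers over those documents. A stub sketch of that loop in plain Python (every function here is a hypothetical stand-in, not a LangChain API):

```python
def condense(chat_history, question):
    # Stand-in for the LLM call that rewrites a follow-up into a standalone question.
    if not chat_history:
        return question
    return f"{question} (in context of: {chat_history[-1][0]})"

def retrieve(question, k=2):
    # Stand-in for retriever.get_relevant_documents.
    return [f"doc-{i} for {question}" for i in range(k)]

def answer(question, docs):
    # Stand-in for the QA chain over the retrieved documents.
    return f"answer to {question!r} using {len(docs)} docs"

def conversational_retrieval(chat_history, question):
    # Condense -> retrieve -> answer, then record the turn for follow-ups.
    standalone = condense(chat_history, question)
    docs = retrieve(standalone)
    result = answer(standalone, docs)
    chat_history.append((question, result))
    return result

history = []
conversational_retrieval(history, "what is foo?")
conversational_retrieval(history, "and bar?")  # follow-up sees the prior turn
```

This is why it handles follow-up questions: the retrieval step always runs on the condensed standalone question, not the raw follow-up.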
