
Issue: <Please write a comprehensive title after the 'Issue: ' prefix> #11173

Closed
Locke-bot opened this issue Sep 28, 2023 · 3 comments
Labels
Ɑ: models (Related to LLMs or chat model modules) · 🤖:question (A specific question about the codebase, product, project, or how to use a feature)

Comments

@Locke-bot

Issue you'd like to raise.

full_chain = {
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x["question"]) | vectorstore.as_retriever(),
    "question": lambda x: x["question"],
} | qa_prompt | ChatOpenAI(streaming=True, callbacks=[QueueCallback(q)], temperature=0, model_name="gpt-4")

how can I add chat history to this?

Suggestion:

No response

@dosubot added the Ɑ: models and 🤖:question labels on Sep 28, 2023

dosubot bot commented Sep 28, 2023

🤖

Hello,

To incorporate chat history into your full_chain snippet, you can use the ConversationalRetrievalChain, as suggested in the similar issue #1246. This chain accepts a chat history, which enables follow-up questions.
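For illustration, here is a minimal sketch of that approach, assuming the same vectorstore and OpenAI setup as in your snippet (the question text is a placeholder):

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# Build a retrieval chain that takes chat history into account.
qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4"),
    retriever=vectorstore.as_retriever(),
)

chat_history = []  # list of (human_message, ai_message) tuples
result = qa({"question": "What were last week's totals?", "chat_history": chat_history})
chat_history.append(("What were last week's totals?", result["answer"]))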

However, if you want to modify the _generate method in the ChatLiteLLM class, you can do so by passing the chat history as a list of BaseMessage instances to this method. Here's an example of how you can modify this method:

def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = None,
    **kwargs: Any,
) -> ChatResult:
    # Stream if explicitly requested, otherwise fall back to the instance setting.
    should_stream = stream if stream is not None else self.streaming
    if should_stream:
        stream_iter = self._stream(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        return _generate_from_stream(stream_iter)

    # Convert the BaseMessage list (including any chat history) into the
    # dict format the underlying model expects.
    message_dicts, params = self._create_message_dicts(messages, stop)
    params = {**params, **kwargs}
    response = self.completion_with_retry(
        messages=message_dicts, run_manager=run_manager, **params
    )
    return self._create_chat_result(response)

In this code:

  • The messages argument is a list of BaseMessage instances, which should include the chat history.
  • The _create_message_dicts method is used to convert the chat history into a format that can be passed to the model.
  • The completion_with_retry method is used to generate a response from the model based on the chat history.

You need to create BaseMessage instances for each message in the chat history and pass them to the _generate method. You can use the HumanMessage, AIMessage, and SystemMessage classes to create these instances.
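For example, a minimal sketch of building such a message list; chat here stands in for any chat model instance, such as the ChatOpenAI object from your snippet:

from langchain.schema import AIMessage, HumanMessage, SystemMessage

# Hypothetical history; in practice you would accumulate these per conversation.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What sources do we have?"),
    AIMessage(content="A SQL database and a vector store."),
    HumanMessage(content="Query the database for last week's totals."),
]

response = chat(messages)  # invoke the chat model on the full history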

I hope this helps! If you have any other questions, feel free to ask.

Best,
Dosu


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@Locke-bot (Author)

How can I use ConversationalRetrievalChain with multiple sources, with some as databases and some as vectorstores?
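This question went unanswered before the issue was closed. One possible direction, sketched from the original snippet rather than from a confirmed answer, is to keep the parallel source dict and add a chat_history input alongside it; the prompt below is a hypothetical stand-in for qa_prompt:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

# Hypothetical replacement for qa_prompt, with a slot for prior messages.
qa_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using these sources:\n{source1}\n{source2}"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{question}"),
])

full_chain = {
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x["question"]) | vectorstore.as_retriever(),
    "question": lambda x: x["question"],
    # Pass the caller-supplied history straight through to the prompt.
    "chat_history": lambda x: x["chat_history"],
} | qa_prompt | ChatOpenAI(temperature=0, model_name="gpt-4")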


dosubot bot commented Dec 28, 2023

Hi, @Locke-bot,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue was opened to add chat history to a code snippet involving a chain of functions for processing questions and generating responses. Dosubot provided a detailed response, suggesting the use of the ConversationalRetrievalChain and providing a code snippet for modifying the _generate method in the ChatLiteLLM class to incorporate chat history. You followed up with a question about using ConversationalRetrievalChain with multiple sources.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!

@dosubot added the stale label on Dec 28, 2023
@dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 30, 2024
@dosubot removed the stale label on Jan 30, 2024