Having issues while retrieving from GPT-4 #9398
🤖 Hello,

Based on the error message you're encountering, the GPT-4 model is not compatible with the completion operation you're trying to perform. GPT-4 is a chat model, so it may not be supported by that operation, or by the tiktoken package used by the LangChain framework. From similar issues in the LangChain repository, there are a few potential solutions you could try:
Use the `AzureChatOpenAI` class with an explicit API version:

```python
AzureChatOpenAI(
    deployment_name="35-turbo-dev",
    openai_api_version="2023-03-15-preview",
)
```

Please note that the API version may change, so always ensure you're using the correct version. You can find more details about this solution in this issue.
Alternatively, use a chat model throughout the chain, with a cheaper model for condensing the question:

```python
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model="gpt-4", temperature=0.2, max_tokens=5000),
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
    output_key='answer',
    combine_docs_chain_kwargs={'prompt': prompt},
    condense_question_llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
)
```

You can find more details about this solution in this issue.
If you're still encountering issues after trying these solutions, please provide more specific details about how you're using the GPT-4 model in the LangChain framework. This will help pinpoint the issue and lead to a more accurate solution.

Best,
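The incompatibility described above can be illustrated with a minimal sketch: chat-only model families such as gpt-4 must be routed to the chat completions endpoint, while legacy completion models use the completions endpoint. The prefix list and paths below are simplified assumptions for illustration, not the exact API surface:

```python
# Illustrative sketch: chat-only models (gpt-4, gpt-3.5-turbo) must be
# called through the chat completions endpoint; older completion models
# use the legacy completions endpoint.
CHAT_ONLY_PREFIXES = ("gpt-4", "gpt-3.5-turbo")

def endpoint_for(model: str) -> str:
    """Return the (illustrative) API path a given model name is called on."""
    if model.startswith(CHAT_ONLY_PREFIXES):
        return "/chat/completions"
    return "/completions"

# Calling gpt-4 on "/completions" is exactly the mismatch the error reports.
print(endpoint_for("gpt-4"))  # → /chat/completions
```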
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
Until now, we have been querying our PDF with GPT-3.5 Turbo. The PDF is stored in a Chroma vector store using OpenAI embeddings, and answers are retrieved from it through the RetrievalQA chain. However, we have been encountering an error when attempting the same process with GPT-4. The error message states: "InvalidRequestError: The completion operation is not compatible with the selected model, gpt-4. Please select a different model and try again." More information about compatible models for each operation can be found here: https://go.microsoft.com/fwlink/?linkid=2197993.
The following code retrieves the LLM response:

```python
retriever = chatDb.as_retriever(search_type="similarity", search_kwargs={"k": 3})
qa = RetrievalQA.from_chain_type(
    llm=get_openai_model(),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
    chain_type_kwargs={
        "verbose": True,
        "prompt": promptHist,
    },
)
result = qa({"query": prompt})
```
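The retriever above returns the `k=3` chunks most similar to the query. Conceptually, similarity search over the vector store reduces to ranking stored embeddings by cosine similarity against the query embedding. A self-contained sketch with toy two-dimensional vectors (not Chroma's actual implementation):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=3):
    """Rank (text, embedding) pairs by similarity to query_vec; keep the top k."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus: each document is (text, embedding).
docs = [
    ("about cats", [1.0, 0.0]),
    ("about dogs", [0.0, 1.0]),
    ("about pets", [0.7, 0.7]),
]
print(top_k([1.0, 0.1], docs, k=2))  # → ['about cats', 'about pets']
```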