feat: Context Template from settings for QueryDocs Allowing the text … #1398
base: main
Commits: 47b3713 · bbb9d2c · cfbba71 · 4c51eeb · f06a8b4 · 7463efc
```diff
@@ -19,6 +19,9 @@
 )
 from private_gpt.open_ai.extensions.context_filter import ContextFilter
 from private_gpt.server.chunks.chunks_service import Chunk
+from private_gpt.settings.settings import settings
+
+DEFAULT_CONTEXT_TEMPLATE = settings().rag.default_context_template


 class Completion(BaseModel):
@@ -97,6 +100,7 @@ def _chat_engine(
         system_prompt: str | None = None,
         use_context: bool = False,
         context_filter: ContextFilter | None = None,
+        context_template: str | None = DEFAULT_CONTEXT_TEMPLATE,
```
> **Review comment:** The default value should be injected in the body of the function instead of here (given that it is coming from `settings`).
>
> **Author:** Ok, something like the below? […]
>
> **Review comment:** It should be […] where […]
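For reference, a minimal sketch of what "injecting the default in the body" could look like (hypothetical; the exact snippets exchanged above were not preserved in this thread):

```python
# Hypothetical sketch: resolve the configured default inside the function
# body instead of evaluating settings() once at import time.
def _chat_engine(
    self,
    system_prompt: str | None = None,
    use_context: bool = False,
    context_filter: ContextFilter | None = None,
    context_template: str | None = None,
) -> BaseChatEngine:
    if context_template is None:
        # Assumption: this is the settings key introduced by this PR.
        context_template = settings().rag.default_context_template
    ...
```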
```diff
     ) -> BaseChatEngine:
         if use_context:
             vector_index_retriever = self.vector_store_component.get_retriever(
@@ -109,6 +113,7 @@ def _chat_engine(
                 node_postprocessors=[
                     MetadataReplacementPostProcessor(target_metadata_key="window"),
                 ],
+                context_template=context_template,
             )
         else:
             return SimpleChatEngine.from_defaults(
@@ -121,6 +126,7 @@ def stream_chat(
         messages: list[ChatMessage],
         use_context: bool = False,
         context_filter: ContextFilter | None = None,
+        context_template: str | None = None,
```
> **Review comment:** Given we are not making use of this `context_template` argument anywhere, I'd suggest to remove it for the moment.
```diff
     ) -> CompletionGen:
         chat_engine_input = ChatEngineInput.from_messages(messages)
         last_message = (
@@ -141,6 +147,7 @@ def stream_chat(
             system_prompt=system_prompt,
             use_context=use_context,
             context_filter=context_filter,
+            context_template=context_template,
```
> **Review comment:** Given we are not making use of this `context_template` argument anywhere, I'd suggest to remove it for the moment.
```diff
         )
         streaming_response = chat_engine.stream_chat(
             message=last_message if last_message is not None else "",
@@ -157,6 +164,7 @@ def chat(
         messages: list[ChatMessage],
         use_context: bool = False,
         context_filter: ContextFilter | None = None,
+        context_template: str | None = None,
     ) -> Completion:
         chat_engine_input = ChatEngineInput.from_messages(messages)
         last_message = (
@@ -177,6 +185,7 @@ def chat(
             system_prompt=system_prompt,
             use_context=use_context,
             context_filter=context_filter,
+            context_template=context_template,
         )
         wrapped_response = chat_engine.chat(
             message=last_message if last_message is not None else "",
```
```diff
@@ -56,4 +56,7 @@ sagemaker:
 
 openai:
   api_key: ${OPENAI_API_KEY:}
-  model: gpt-3.5-turbo
+  model: gpt-3.5-turbo
+
+rag:
+  default_context_template: "Context information is below.\n--------------------\n{context_str}\n--------------------\n"
```
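The new `rag` section implies a corresponding settings model, since the code reads `settings().rag.default_context_template`. A minimal sketch of what that mapping could look like (hypothetical class and module layout; privateGPT's actual settings module may differ):

```python
# Hypothetical sketch of the settings models behind
# settings().rag.default_context_template.
from pydantic import BaseModel

class RagSettings(BaseModel):
    default_context_template: str

class Settings(BaseModel):
    rag: RagSettings
```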
> **Review comment:** Please use a multi-line YAML string here […]
>
> **Author:** Thanks for the comment. Happy to do so, just one thing: in this case I was trying to keep the value as close as possible to the original one, which I understand includes `\n` (see https://github.com/run-llama/llama_index/blob/main/llama_index/chat_engine/context.py):
>
> ```python
> DEFAULT_CONTEXT_TEMPLATE = (
>     "Context information is below."
>     "\n--------------------\n"
>     "{context_str}"
>     "\n--------------------\n"
> )
> ```
>
> In case you want me to make the change anyway, can you confirm that what you'd expect would be […]? Thanks.
>
> **Review comment:** @lopagela means use the multiline option in YAML; check the PR he linked, it's well explained.
>
> **Author:** Ok, I'll wait until the changes to that PR are done to replicate, as it's not 100% clear to me (the instruction here is to use the `|` style, but in the PR there is a comment suggesting `>` instead; not sure which one is more current). A caveat that may be relevant: `DEFAULT_SYSTEM_PROMPT` in llama_index is written in Python as a multiline expression but is actually a single line, while `DEFAULT_CONTEXT_TEMPLATE` is genuinely multi-line, so I just want to make sure we are on the same page that the treatment may be slightly different (see below).
> This is what ChatGPT (GPT-4) suggested (link to convo):
>
> A: […]
>
> or two alternatives with the `>` notation, depending on readability:
>
> B1: […]
>
> B2: […]
> **Review comment:** The PR has been merged. In that case we wanted single line (as `default_system_prompt` is supposed to be single line). In the case of `default_context_template` you do want to have line breaks, so you should use `|` instead.
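To illustrate the suggestion (a sketch of my own, not the snippet from the linked conversation): with a literal block scalar, each line break in the block becomes a `\n` in the resulting string, so the flow-style value above could be written as:

```yaml
# Hypothetical equivalent using a literal block scalar: "|" preserves the
# internal line breaks and, with default clip chomping, keeps exactly one
# trailing newline, matching the quoted "\n"-style string above.
rag:
  default_context_template: |
    Context information is below.
    --------------------
    {context_str}
    --------------------
```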
> **Review comment:** Where is this new `context_template` set (at usage)?
> **Author:** I am not sure I understand. If "query docs" is being used, it creates a `ContextChatEngine` (from llama_index) that accepts this parameter. Currently it's never passed, so it defaults to a hardcoded value. With this change, if there is a value defined in the settings file, the value in settings will be used instead. This applies both when using the Gradio UI and when using the API. As per the Discord discussion, I didn't expose this configuration as an API parameter.
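As a rough illustration of the call path being described (a sketch under the assumption that llama_index's `ContextChatEngine.from_defaults` accepts a `context_template` keyword, which is what this PR passes through):

```python
# Sketch: when use_context is True, the service builds a ContextChatEngine
# and forwards the template resolved from settings.
from llama_index.chat_engine import ContextChatEngine

chat_engine = ContextChatEngine.from_defaults(
    system_prompt=system_prompt,
    retriever=vector_index_retriever,
    # Without this argument, llama_index falls back to its hardcoded
    # DEFAULT_CONTEXT_TEMPLATE; with it, the settings value wins.
    context_template=context_template,
)
```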