Commit

feat: let the user choose the display mode in stream mode #21; style: colorize the enabled/disabled hints
xiaoxx-mac committed May 5, 2023
1 parent e014424 commit e5c97d5
Showing 3 changed files with 70 additions and 13 deletions.
12 changes: 11 additions & 1 deletion README.md
@@ -25,6 +25,8 @@ Uses the [gpt-3.5-turbo](https://platform.openai.com/docs/guides/chat/chat-compl

- Add `/rand` command to set temperature parameter

- Add an overflow mode switch to the `/stream` command; you can now run `/stream visible` to switch to always-visible mode. In this mode, content that exceeds the screen scrolls up and new content keeps printing until the response completes

<details>
<summary>More Change log</summary>

@@ -188,6 +190,14 @@ LOG_LEVEL=INFO

> In stream mode, the answer starts printing as soon as the first part of the response arrives, which reduces waiting time. Stream mode is on by default.
- `/stream ellipsis` (default)

> Switch the streaming output mode to auto-ellipsis: when the output exceeds the screen, three dots are displayed at the bottom of the screen until the output completes
- `/stream visible`

> Switch the streaming output mode to always visible: content that exceeds the screen scrolls up and new content keeps printing until the response completes. Note that in this mode the terminal will not properly clean up off-screen content. (A short sketch of both modes follows this list.)
- `/tokens`: Display the total tokens spent and the tokens for the current conversation

> GPT-3.5 has a token limit of 4096; use this command to check if you're approaching the limit
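
The two overflow modes above map directly onto the `vertical_overflow` option of Rich's `Live` display, which this project uses for streaming output. Below is a minimal standalone sketch of the difference, assuming only that `rich` is installed (the chunks and delay are invented for illustration):

import time
from rich.live import Live
from rich.markdown import Markdown

def stream_demo(chunks, overflow: str = "ellipsis"):
    # overflow="ellipsis": content past the bottom of the screen is replaced
    # with "..." until the stream finishes (Rich's default).
    # overflow="visible": content scrolls past the bottom and keeps printing.
    text = ""
    with Live(auto_refresh=False, vertical_overflow=overflow) as live:
        for chunk in chunks:
            text += chunk
            live.update(Markdown(text), refresh=True)
            time.sleep(0.05)  # stand-in for network latency between chunks

stream_demo(["Streaming ", "output ", "demo."], overflow="visible")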
@@ -281,9 +291,9 @@ This project exists thanks to all the people who contribute.
├── LICENSE # License
├── README.md # Documentation
├── chat.py # Script entry point
├── config.ini # API key storage and other settings
├── gpt_term # Project package folder
│ ├── __init__.py
│ ├── config.ini # API key storage and other settings
│ └── main.py # Main program
├── requirements.txt # List of dependencies
└── setup.py
12 changes: 11 additions & 1 deletion README.zh-CN.md
@@ -27,6 +27,8 @@

- Add `/rand` command to set the temperature parameter

- Add an overflow mode switch to the `/stream` command; you can now run `/stream visible` to switch to always-visible mode. In this mode, content that exceeds the screen scrolls up and new content keeps printing until the response completes

<details>
<summary>More Change log</summary>

@@ -191,6 +193,14 @@ LOG_LEVEL=INFO

> In stream mode, the reply starts printing as soon as the client receives the first part of the response, which reduces waiting time. Stream mode is on by default.
- `/stream ellipsis` (default)

> Switch the streaming output mode to auto-ellipsis: when the output exceeds the screen, three dots are displayed at the bottom of the screen until the output completes
- `/stream visible`

> Switch the streaming output mode to always visible: content that exceeds the screen scrolls up and new content keeps printing until the response completes. Note that in this mode the terminal will not properly clean up off-screen content.
- `/tokens`: Show the total API tokens spent and the token count of the current conversation

> GPT-3.5 has a conversation token limit of 4096; use this command to check in real time whether you are approaching it
@@ -284,9 +294,9 @@ LOG_LEVEL=INFO
├── LICENSE # License
├── README.md # Documentation
├── chat.py # Script entry point
├── config.ini # API key storage and other settings
├── gpt_term # Project package folder
│ ├── __init__.py
│ ├── config.ini # API key storage and other settings
│ └── main.py # Main program
├── requirements.txt # List of dependencies
└── setup.py
59 changes: 48 additions & 11 deletions gpt_term/main.py
@@ -70,26 +70,26 @@ class ChatMode:
def toggle_raw_mode(cls):
cls.raw_mode = not cls.raw_mode
console.print(
f"[dim]Raw mode {'enabled' if cls.raw_mode else 'disabled'}, use `/last` to display the last answer.")
f"[dim]Raw mode {'[green]enabled[/]' if cls.raw_mode else '[bright_red]disabled[/]'}, use `[bright_magenta]/last[/]` to display the last answer.")

@classmethod
def toggle_stream_mode(cls):
cls.stream_mode = not cls.stream_mode
if cls.stream_mode:
console.print(
f"[dim]Stream mode enabled, the answer will start outputting as soon as the first response arrives.")
f"[dim]Stream mode [green]enabled[/], the answer will start outputting as soon as the first response arrives.")
else:
console.print(
f"[dim]Stream mode disabled, the answer is being displayed after the server finishes responding.")
f"[dim]Stream mode [bright_red]disabled[/], the answer is being displayed after the server finishes responding.")

@classmethod
def toggle_multi_line_mode(cls):
cls.multi_line_mode = not cls.multi_line_mode
if cls.multi_line_mode:
console.print(
f"[dim]Multi-line mode enabled, press [[bright_magenta]Esc[/]] + [[bright_magenta]ENTER[/]] to submit.")
f"[dim]Multi-line mode [green]enabled[/], press [[bright_magenta]Esc[/]] + [[bright_magenta]ENTER[/]] to submit.")
else:
console.print(f"[dim]Multi-line mode disabled.")
console.print(f"[dim]Multi-line mode [bright_red]disabled[/].")


class ChatGPT:
@@ -114,6 +114,7 @@ def __init__(self, api_key: str, timeout: float):
self.gen_title_messages = Queue()
self.auto_gen_title_background_enable = True
self.threadlock_total_tokens_spent = threading.Lock()
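# Default overflow behavior for the streaming Live display; 'ellipsis' is
# also Rich's own default for Live(vertical_overflow=...).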
self.stream_overflow = 'ellipsis'

self.credit_total_granted = 0
self.credit_total_used = 0
@@ -176,7 +177,7 @@ def send_request_silent(self, data):
def process_stream_response(self, response: requests.Response):
reply: str = ""
client = sseclient.SSEClient(response)
with Live(console=console, auto_refresh=False) as live:
with Live(console=console, auto_refresh=False, vertical_overflow=self.stream_overflow) as live:
try:
rprint("[bold cyan]ChatGPT: ")
for event in client.events():
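# The loop body is truncated by the diff view above. A hedged sketch of how
# such a body typically consumes OpenAI's SSE events (field names follow the
# chat completions API and are an assumption, not a quote from this file):
#   if event.data == "[DONE]":  # OpenAI ends the stream with a sentinel event
#       break
#   delta = json.loads(event.data)["choices"][0]["delta"]
#   reply += delta.get("content", "")
#   live.update(Markdown(reply), refresh=True)  # re-render with the new text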
@@ -477,6 +478,26 @@ def modify_system_prompt(self, new_content: str):
console.print(
f"[dim]No system prompt found in messages.")

def set_stream_overflow(self, new_overflow: str):
# setting an overflow mode implies stream mode, so turn it on if it is off
if not ChatMode.stream_mode:
ChatMode.toggle_stream_mode()

if new_overflow == self.stream_overflow:
console.print("[dim]No change.")
return

old_overflow = self.stream_overflow
if new_overflow in ('ellipsis', 'visible'):
self.stream_overflow = new_overflow
console.print(
f"[dim]Stream overflow option has been modified from '{old_overflow}' to '{new_overflow}'.")
if new_overflow == 'visible':
console.print("[dim]Note that in this mode the terminal will not properly clean up off-screen content.")
else:
console.print(f"[dim]No such Stream overflow option, remain '{old_overflow}' unchanged.")


def set_model(self, new_model: str):
old_model = self.model
if not new_model:
@@ -531,6 +552,11 @@ class CustomCompleter(Completer):
"all"
]

stream_actions = [
"visible",
"ellipsis"
]

available_models = [
"gpt-3.5-turbo",
"gpt-3.5-turbo-0301",
@@ -561,6 +587,12 @@ def get_completions(self, document, complete_event):
for delete in self.delete_actions:
if delete.startswith(delete_prefix):
yield Completion(delete, start_position=-len(delete_prefix))
# Check if it's a /stream command
elif text.startswith('/stream '):
stream_prefix = text[8:]
for stream in self.stream_actions:
if stream.startswith(stream_prefix):
yield Completion(stream, start_position=-len(stream_prefix))
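# start_position=-len(stream_prefix) makes prompt_toolkit replace the text
# the user already typed after '/stream ', so e.g. '/stream vi<Tab>'
# completes to '/stream visible'.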
else:
for command in self.commands:
if command.startswith(text):
@@ -679,8 +711,13 @@ def handle_command(command: str, chat_gpt: ChatGPT, key_bindings: KeyBindings, c
ChatMode.toggle_raw_mode()
elif command == '/multi':
ChatMode.toggle_multi_line_mode()
elif command == '/stream':
ChatMode.toggle_stream_mode()

elif command.startswith('/stream'):
args = command.split()
if len(args) > 1:
chat_gpt.set_stream_overflow(args[1])
else:
ChatMode.toggle_stream_mode()
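# Dispatch behavior of the branch above, for illustration:
#   /stream          -> toggle stream mode on/off
#   /stream visible  -> turn stream mode on (if off) and set overflow to 'visible'
#   /stream bogus    -> stream mode is still turned on, but the unknown option is rejected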

elif command == '/tokens':
chat_gpt.threadlock_total_tokens_spent.acquire()
@@ -696,7 +733,7 @@
console.print(Panel(f"[bold green]Total Granted:[/]\t\t${format(chat_gpt.credit_total_granted, '.2f')}\n"
f"[bold cyan]Used This Month:[/]\t${format(chat_gpt.credit_used_this_month, '.2f')}\n"
f"[bold blue]Used Total:[/]\t\t${format(chat_gpt.credit_total_used, '.2f')}",
title="Credit Summary", title_align='left', subtitle=f"[blue]Plan: {chat_gpt.credit_plan}", width=35))
title="Credit Summary", title_align='left', subtitle=f"[bright_blue]Plan: {chat_gpt.credit_plan}", width=35))

elif command.startswith('/model'):
args = command.split()
@@ -843,7 +880,7 @@ def handle_command(command: str, chat_gpt: ChatGPT, key_bindings: KeyBindings, c
console.print('''[bold]Available commands:[/]
/raw - Toggle raw mode (showing raw text of ChatGPT's reply)
/multi - Toggle multi-line mode (allow multi-line input)
/stream - Toggle stream output mode (flow print the answer)
/stream \[overflow_mode] - Toggle stream output mode (print the answer as it arrives), or set the overflow mode (ellipsis / visible)
/tokens - Show the total tokens spent and the tokens for the current conversation
/usage - Show total credits and current credits used
/last - Display last ChatGPT's reply
@@ -1012,7 +1049,7 @@ def main():

if not config.getboolean("AUTO_GENERATE_TITLE", True):
chat_gpt.auto_gen_title_background_enable = False
log.debug("Auto title generation disabled")
log.debug("Auto title generation [bright_red]disabled[/]")

gen_title_daemon_thread = threading.Thread(
target=chat_gpt.auto_gen_title_background, daemon=True)
