Filechat is a file-based AI chat tool that integrates with Visual Studio Code (VS Code), letting you chat with a Large Language Model (LLM) directly in a Markdown file.
Requirements:

- Python 3.12 (https://www.python.org/downloads/release/python-3122/)
- The Python packages listed in `requirements.txt`
- Visual Studio Code (https://code.visualstudio.com/)
To install Filechat:

1. Clone the repository:

   ```
   git clone https://github.com/noveky/filechat.git
   ```

2. Ensure that Python and `pip` are installed on your system.

3. Create a Python virtual environment in the repository (optional; see the sketch after this list).

4. Install project dependencies:

   ```
   pip install -r requirements.txt
   ```

5. Open the repository with VS Code.

6. Install the Code Runner extension (`formulahendry.code-runner`) for VS Code.

7. Specify the run code command for Markdown files in `.vscode/settings.json`:

   - For Windows:

     ```jsonc
     // .vscode/settings.json
     {
       "code-runner.runInTerminal": true,
       "code-runner.executorMap": {
         "markdown": "cd $dir && ..\\run.ps1 $fullFileName"
       }
     }
     ```

   - For macOS/Linux:

     ```jsonc
     // .vscode/settings.json
     {
       "code-runner.runInTerminal": true,
       "code-runner.executorMap": {
         "markdown": "cd $dir && bash ../run.sh $fullFileName"
       }
     }
     ```

8. Install the Markdown All in One extension (`yzhang.markdown-all-in-one`). This extension enables you to open a preview panel for a Markdown file (default keyboard shortcut: `Ctrl+K V`), where you can read the rendered Markdown content more easily.
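If you choose to create a virtual environment in step 3, the standard `venv` module is one way to do it. A minimal sketch (the `.venv` directory name is just a common convention):

```
# Create a virtual environment in the repository root
python -m venv .venv

# Activate it (macOS/Linux)
source .venv/bin/activate

# Activate it (Windows PowerShell)
.venv\Scripts\Activate.ps1
```

The extensions from steps 6 and 8 can also be installed from a terminal, assuming the VS Code `code` command is on your PATH:

```
code --install-extension formulahendry.code-runner
code --install-extension yzhang.markdown-all-in-one
```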
To access a chat model, you need to specify the OpenAI API key and base URL.
There are three ways to do that:
- Set the system environment variables `OPENAI_API_KEY` and `OPENAI_BASE_URL` (see the shell example after this list).

- Create a `.env` file in the root directory of the repository and set the two environment variables in it:

  ```
  OPENAI_API_KEY=<your_api_key>
  OPENAI_BASE_URL=<your_base_url>
  ```

- Specify the API key and the base URL in the app configuration file introduced below.
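For the first option, the variables are set like any other environment variables. A minimal sketch, with placeholder values to fill in:

```
# macOS/Linux: applies to the current shell session
export OPENAI_API_KEY=<your_api_key>
export OPENAI_BASE_URL=<your_base_url>

# Windows: persists for the current user (takes effect in new terminals)
setx OPENAI_API_KEY <your_api_key>
setx OPENAI_BASE_URL <your_base_url>
```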
You can customize the app configurations in `config.yaml`. Here is a template; specify only the options you want:

```yaml
# ---- Chat Completion ----

model: # Name of your preferred completion model
temperature: # Your preferred `temperature` parameter for the model (optional)
max_tokens: # Your preferred `max_tokens` parameter for the model (optional)

# ---- App Behaviors ----

max_retries: # How many times to retry a failed request (default is 3)
print_response: # Whether to print the response to standard output (default is true)
stream_for_file: # Whether to append the response to the file token by token or as a whole (default is true)

# ---- OpenAI API ----

api_key: # OpenAI API key (overrides the environment variable `OPENAI_API_KEY` if specified)
base_url: # OpenAI base URL (overrides the environment variable `OPENAI_BASE_URL` if specified)
```
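For instance, a filled-in `config.yaml` might look like the sketch below. The values are illustrative only: the model name and `temperature` echo the front matter example later in this README, and `max_tokens: 1024` is an arbitrary pick, not a recommendation.

```yaml
# Example config.yaml -- illustrative values only
model: gpt-4o
temperature: 0.7
max_tokens: 1024
max_retries: 3
print_response: true
stream_for_file: true
# api_key and base_url can also be set here to override the environment variables
```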
Here is a quick guide to get you started:

1. Make a `chats` directory in the repository and create a Markdown file in it (e.g. `New Chat.md`).

2. Type a message in the Markdown file:

   ```markdown
   Tell me a joke.
   ```

3. Save the file, and run Filechat on the current file by hitting "Run Code" (default keyboard shortcut: `Ctrl+Alt+N`); a terminal alternative is sketched after this list.

   Unstructured file content is regarded as a user message, so the file gets formatted into a `# User` heading followed by the original content. After a few seconds, hopefully you will see the LLM response get appended to the end of the file. The new `# User` heading prompts you to continue the conversation:

   ```markdown
   # User

   Tell me a joke.

   # Assistant

   Why don't skeletons fight each other?

   They don't have the guts!

   # User

   |
   ```

   (The `|` marks where the cursor lands for your next message.)

4. Type your reply message and follow step 3 to continue chatting.
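"Run Code" simply executes the executor-map command from the installation section, so you can also invoke the run script from a terminal. A sketch under two assumptions: your shell is inside the `chats` directory (mirroring the `cd $dir` in the executor map), and the script accepts a relative path (Code Runner itself passes the absolute `$fullFileName`):

```
# macOS/Linux
bash ../run.sh "New Chat.md"

# Windows PowerShell
..\run.ps1 "New Chat.md"
```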
Besides user messages, you can optionally include a system prompt, labeled with a `# System` heading. This is a special section that sets the context or provides initial instructions for the model before it processes user inputs.

```markdown
# System

This is a system prompt.

# User

This is a user message.
```
At the very beginning of your chat file, you can include a front matter section. This section is enclosed within triple dashes (`---`) and is written in YAML format. It allows you to specify file-wise configurations that override the app configurations.

```yaml
---
model: gpt-4o
temperature: 0.7
---
```
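Putting the pieces together, a chat file that combines front matter, a system prompt, and a user message would presumably look like this (the values are taken from the examples above):

```markdown
---
model: gpt-4o
temperature: 0.7
---

# System

This is a system prompt.

# User

This is a user message.
```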