- Neovim 0.10.0+ - Older versions are not officially supported
- curl - 8.0.0+ is recommended for best compatibility. It is installed by default on most systems and is also shipped with Neovim
- Copilot Chat in the IDE enabled in your GitHub settings
- Optional: `tiktoken_core` - Used for more accurate token counting
  - For Arch Linux users, install `luajit-tiktoken-bin` or `lua51-tiktoken-bin` from the AUR
  - Alternatively, install via luarocks: `sudo luarocks install --lua-version 5.1 tiktoken_core`
  - Alternatively, download a pre-built binary from the lua-tiktoken releases. You can check your Lua `cpath` in Neovim with `:lua print(package.cpath)`. Save the binary as `tiktoken_core.so` in any of the listed paths.
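To verify the library is found, you can try loading it from inside Neovim (the module name `tiktoken_core` is assumed from the filename above):

```lua
-- Prints true followed by the module table if tiktoken_core loads,
-- or false and an error message otherwise
print(pcall(require, 'tiktoken_core'))
```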
- Optional: git - Used for fetching git diffs for the `git` context
  - For Arch Linux users, install `git` from the official repositories
  - For other systems, use your package manager to install `git`; on Windows, use the installer provided on the git website
- Optional: lynx - Used for improved fetching of URLs for the `url` context
  - For Arch Linux users, install `lynx` from the official repositories
  - For other systems, use your package manager to install `lynx`; on Windows, use the installer provided on the lynx website
> [!WARNING]
> If you are on Neovim < 0.11.0, you might also want to add `noinsert` and `popup` to your `completeopt` to make chat completion behave well.
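On those versions, a minimal sketch for your `init.lua` (assuming the rest of your `completeopt` stays as-is):

```lua
-- Append the completion options recommended for chat completion on Neovim < 0.11.0
vim.opt.completeopt:append({ 'noinsert', 'popup' })
```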
```lua
return {
  {
    "CopilotC-Nvim/CopilotChat.nvim",
    dependencies = {
      { "github/copilot.vim" }, -- or zbirenbaum/copilot.lua
      { "nvim-lua/plenary.nvim", branch = "master" }, -- for curl, log and async functions
    },
    build = "make tiktoken", -- Only on MacOS or Linux
    opts = {
      -- See Configuration section for options
    },
    -- See Commands section for default commands if you want to lazy load on them
  },
}
```
See @jellydn for configuration
Similar to the lazy setup, you can use the following configuration:
```vim
call plug#begin()
Plug 'github/copilot.vim'
Plug 'nvim-lua/plenary.nvim'
Plug 'CopilotC-Nvim/CopilotChat.nvim'
call plug#end()

lua << EOF
require("CopilotChat").setup {
  -- See Configuration section for options
}
EOF
```
- Put the files in the right place:

```sh
mkdir -p ~/.config/nvim/pack/copilotchat/start
cd ~/.config/nvim/pack/copilotchat/start

git clone https://github.com/github/copilot.vim
git clone https://github.com/nvim-lua/plenary.nvim
git clone https://github.com/CopilotC-Nvim/CopilotChat.nvim
```

- Add to your configuration (e.g. `~/.config/nvim/init.lua`):

```lua
require("CopilotChat").setup {
  -- See Configuration section for options
}
```
See @deathbeam for configuration
Commands are used to control the chat interface:
| Command | Description |
|---|---|
| `:CopilotChat <input>?` | Open chat with optional input |
| `:CopilotChatOpen` | Open chat window |
| `:CopilotChatClose` | Close chat window |
| `:CopilotChatToggle` | Toggle chat window |
| `:CopilotChatStop` | Stop current output |
| `:CopilotChatReset` | Reset chat window |
| `:CopilotChatSave <name>?` | Save chat history |
| `:CopilotChatLoad <name>?` | Load chat history |
| `:CopilotChatModels` | View/select available models |
| `:CopilotChatAgents` | View/select available agents |
| `:CopilotChat<PromptName>` | Use specific prompt template |
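The commands above can also be bound to keys in your own configuration; a small sketch (the `<leader>` choices here are arbitrary):

```lua
-- Toggle the chat window from normal mode
vim.keymap.set('n', '<leader>cc', '<Cmd>CopilotChatToggle<CR>', { desc = 'CopilotChat - Toggle window' })
-- Run the Explain prompt on a visual selection
vim.keymap.set('v', '<leader>ce', '<Cmd>CopilotChatExplain<CR>', { desc = 'CopilotChat - Explain selection' })
```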
Default mappings in the chat interface:
| Insert | Normal | Action |
|---|---|---|
| `<Tab>` | `<Tab>` | Trigger/accept completion menu for tokens |
| `<C-c>` | `q` | Close the chat window |
| `<C-l>` | `<C-l>` | Reset and clear the chat window |
| `<C-s>` | `<CR>` | Submit the current prompt |
| - | `gr` | Toggle sticky prompt for line under cursor |
| `<C-y>` | `<C-y>` | Accept nearest diff (best with `COPILOT_GENERATE`) |
| - | `gj` | Jump to section of nearest diff |
| - | `gqa` | Add all answers from chat to quickfix list |
| - | `gqd` | Add all diffs from chat to quickfix list |
| - | `gy` | Yank nearest diff to register |
| - | `gd` | Show diff between source and nearest diff |
| - | `gi` | Show info about current chat |
| - | `gc` | Show current chat context |
| - | `gh` | Show help message |
The mappings can be customized by setting the `mappings` table in your configuration. Each mapping can have:

- `normal`: Key for normal mode
- `insert`: Key for insert mode
- `detail`: Description of what the mapping does
For example, to change the submit prompt mapping or the `show_diff` full diff option:

```lua
{
  mappings = {
    submit_prompt = {
      normal = '<Leader>s',
      insert = '<C-s>'
    },
    show_diff = {
      full_diff = true
    }
  }
}
```
Predefined prompt templates for common tasks. Reference them with `/PromptName` in chat or use `:CopilotChat<PromptName>`:
| Prompt | Description |
|---|---|
| `Explain` | Write an explanation for the selected code |
| `Review` | Review the selected code |
| `Fix` | Rewrite the code with bug fixes |
| `Optimize` | Optimize code for performance and readability |
| `Docs` | Add documentation comments to the code |
| `Tests` | Generate tests for the code |
| `Commit` | Write commit message using commitizen convention |
Define your own prompts in the configuration:
```lua
{
  prompts = {
    MyCustomPrompt = {
      prompt = 'Explain how it works.',
      system_prompt = 'You are very good at explaining stuff',
      mapping = '<leader>ccmc',
      description = 'My custom prompt description',
    }
  }
}
```
System prompts define the AI model's behavior. Reference them with `/PROMPT_NAME` in chat:
| Prompt | Description |
|---|---|
| `COPILOT_INSTRUCTIONS` | Base GitHub Copilot instructions |
| `COPILOT_EXPLAIN` | Adds coding tutor behavior |
| `COPILOT_REVIEW` | Adds code review behavior with diagnostics |
| `COPILOT_GENERATE` | Adds code generation behavior and rules |
Define your own system prompts in the configuration (similar to `prompts`):
```lua
{
  prompts = {
    Yarrr = {
      system_prompt = 'You are fascinated by pirates, so please respond in pirate speak.',
    }
  }
}
```
Sticky prompts persist across chat sessions. They are useful for maintaining context or agent selection. They work as follows:

- Prefix text with `>` using markdown blockquote syntax
- The prompt is copied to the start of every new chat prompt
- Edit sticky prompts freely while keeping the `>` prefix
Examples:
```markdown
> #files
> List all files in the workspace

> @models Using Mistral-small
> What is 1 + 11
```
You can also set default sticky prompts in the configuration:
```lua
{
  sticky = {
    '@models Using Mistral-small',
    '#files:full',
  }
}
```
You can control which AI model to use in three ways:

- List available models with `:CopilotChatModels`
- Set the model in a prompt with `$model_name`
- Configure the default model via the `model` config key
For supported models, see:
- Copilot Chat Models
- GitHub Marketplace Models (experimental, limited usage)
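As a minimal sketch, the default model can be set once in `setup` (the model name below is only the documented default, not a recommendation):

```lua
-- Set the default model used for every chat
require("CopilotChat").setup({
  model = 'gpt-4o',
})
```

In chat, prefixing a prompt with `$model_name` overrides the default for that question only.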
Agents determine the AI assistant's capabilities. Control agents in three ways:

- List available agents with `:CopilotChatAgents`
- Set the agent in a prompt with `@agent_name`
- Configure the default agent via the `agent` config key

The default "noop" agent is `none`. For more information:
Contexts provide additional information to the chat. Add context using the `#context_name[:input]` syntax:
| Context | Input Support | Description |
|---|---|---|
| `buffer` | ✓ (number) | Current or specified buffer content |
| `buffers` | ✓ (type) | All buffers content (listed/all) |
| `file` | ✓ (path) | Content of specified file |
| `files` | ✓ (mode) | Workspace files (list/full content) |
| `git` | ✓ (ref) | Git diff (unstaged/staged/commit) |
| `url` | ✓ (url) | Content from URL |
| `register` | ✓ (name) | Content of vim register |
| `quickfix` | - | Quickfix list file contents |
Examples:
```markdown
> #buffer
> #buffer:2
> #files:list
> #git:staged
> #url:https://example.com
```
Define your own contexts in the configuration with input handling and resolution:
```lua
{
  contexts = {
    birthday = {
      input = function(callback)
        vim.ui.select({ 'user', 'napoleon' }, {
          prompt = 'Select birthday> ',
        }, callback)
      end,
      resolve = function(input)
        return {
          {
            content = input .. ' birthday info',
            filename = input .. '_birthday',
            filetype = 'text',
          }
        }
      end
    }
  }
}
```
Selections determine the source content for chat interactions. Configure them globally or per-prompt. Available selections are located in `require("CopilotChat.select")`:
| Selection | Description |
|---|---|
| `visual` | Current visual selection |
| `buffer` | Current buffer content |
| `line` | Current line content |
| `unnamed` | Unnamed register (last deleted/changed/yanked content) |
You can set a default selection in the configuration:
```lua
{
  -- Default uses visual selection or falls back to buffer
  selection = function(source)
    return select.visual(source) or select.buffer(source)
  end
}
```
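Selections can also be set per prompt rather than globally; a sketch, assuming a `selection` key on a prompt entry (mirroring the global option):

```lua
local select = require("CopilotChat.select")

require("CopilotChat").setup({
  -- Global default: visual selection, falling back to the whole buffer
  selection = function(source)
    return select.visual(source) or select.buffer(source)
  end,
  prompts = {
    Explain = {
      prompt = 'Explain how this code works.',
      selection = select.visual, -- this prompt only ever uses the visual selection
    },
  },
})
```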
Providers are modules that implement integration with different AI providers.

- `copilot` - Default GitHub Copilot provider used for chat and embeddings
- `github_models` - Provider for GitHub Marketplace models
Custom providers can implement these methods:
```lua
{
  -- Optional: Disable provider
  disabled?: boolean,
  -- Optional: Provider to use for embeddings
  embeddings?: string,
  -- Optional: Get authentication token
  get_token(): string, number?,
  -- Required: Get request headers
  get_headers(token: string): table,
  -- Required: Get API endpoint URL
  get_url(opts: table): string,
  -- Required: Prepare request body
  prepare_input(inputs: table, opts: table, model: table): table,
  -- Optional: Get available models
  get_models?(headers: table): table,
  -- Optional: Get available agents
  get_agents?(headers: table): table,
}
```
Here's how to implement an ollama provider:
```lua
{
  providers = {
    ollama = {
      embeddings = 'copilot_embeddings', -- Use Copilot as embedding provider

      get_headers = function()
        return {
          ['Content-Type'] = 'application/json',
        }
      end,

      get_models = function(headers)
        local utils = require('CopilotChat.utils')
        local response = utils.curl_get('http://localhost:11434/api/tags', { headers = headers })
        if not response or response.status ~= 200 then
          error('Failed to fetch models: ' .. tostring(response and response.status))
        end

        local models = {}
        for _, model in ipairs(vim.json.decode(response.body)['models']) do
          table.insert(models, {
            id = model.name,
            name = model.name,
            version = "latest",
            tokenizer = "o200k_base",
          })
        end
        return models
      end,

      prepare_input = function(inputs, opts)
        return {
          model = opts.model,
          messages = inputs,
          stream = true,
        }
      end,

      get_url = function()
        return 'http://localhost:11434/api/chat'
      end,
    }
  }
}
```
Below are all available configuration options with their default values:
```lua
{
  -- Shared config starts here (can be passed to functions at runtime and configured via setup function)
  system_prompt = prompts.COPILOT_INSTRUCTIONS.system_prompt, -- System prompt to use (can be specified manually in prompt via /).
  model = 'gpt-4o', -- Default model to use, see ':CopilotChatModels' for available models (can be specified manually in prompt via $).
  agent = 'copilot', -- Default agent to use, see ':CopilotChatAgents' for available agents (can be specified manually in prompt via @).
  context = nil, -- Default context or array of contexts to use (can be specified manually in prompt via #).
  sticky = nil, -- Default sticky prompt or array of sticky prompts to use at start of every new chat.

  temperature = 0.1, -- GPT result temperature
  headless = false, -- Do not write to chat buffer and use history (useful for using callback for custom processing)
  callback = nil, -- Callback to use when ask response is received

  -- default selection
  selection = function(source)
    return select.visual(source) or select.buffer(source)
  end,

  -- default window options
  window = {
    layout = 'vertical', -- 'vertical', 'horizontal', 'float', 'replace'
    width = 0.5, -- fractional width of parent, or absolute width in columns when > 1
    height = 0.5, -- fractional height of parent, or absolute height in rows when > 1
    -- Options below only apply to floating windows
    relative = 'editor', -- 'editor', 'win', 'cursor', 'mouse'
    border = 'single', -- 'none', 'single', 'double', 'rounded', 'solid', 'shadow'
    row = nil, -- row position of the window, default is centered
    col = nil, -- column position of the window, default is centered
    title = 'Copilot Chat', -- title of chat window
    footer = nil, -- footer of chat window
    zindex = 1, -- determines if window is on top or below other floating windows
  },

  show_help = true, -- Shows help message as virtual lines when waiting for user input
  show_folds = true, -- Shows folds for sections in chat
  highlight_selection = true, -- Highlight selection
  highlight_headers = true, -- Highlight headers in chat, disable if using markdown renderers (like render-markdown.nvim)
  auto_follow_cursor = true, -- Auto-follow cursor in chat
  auto_insert_mode = false, -- Automatically enter insert mode when opening window and on new prompt
  insert_at_end = false, -- Move cursor to end of buffer when inserting text
  clear_chat_on_new_prompt = false, -- Clears chat on every new prompt

  -- Static config starts here (can be configured only via setup function)
  debug = false, -- Enable debug logging (same as setting log_level = 'debug')
  log_level = 'info', -- Log level to use, 'trace', 'debug', 'info', 'warn', 'error', 'fatal'
  proxy = nil, -- [protocol://]host[:port] Use this proxy
  allow_insecure = false, -- Allow insecure server connections
  chat_autocomplete = true, -- Enable chat autocompletion (when disabled, requires manual `mappings.complete` trigger)
  log_path = vim.fn.stdpath('state') .. '/CopilotChat.log', -- Default path to log file
  history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
  question_header = '# User ', -- Header to use for user questions
  answer_header = '# Copilot ', -- Header to use for AI answers
  error_header = '# Error ', -- Header to use for errors
  separator = '───', -- Separator to use in chat

  -- default providers
  -- see config/providers.lua for implementation
  providers = {
    copilot = {
    },
    github_models = {
    },
    copilot_embeddings = {
    },
  },

  -- default contexts
  -- see config/contexts.lua for implementation
  contexts = {
    buffer = {
    },
    buffers = {
    },
    file = {
    },
    files = {
    },
    git = {
    },
    url = {
    },
    register = {
    },
    quickfix = {
    },
  },

  -- default prompts
  -- see config/prompts.lua for implementation
  prompts = {
    Explain = {
      prompt = '> /COPILOT_EXPLAIN\n\nWrite an explanation for the selected code as paragraphs of text.',
    },
    Review = {
      prompt = '> /COPILOT_REVIEW\n\nReview the selected code.',
    },
    Fix = {
      prompt = '> /COPILOT_GENERATE\n\nThere is a problem in this code. Rewrite the code to show it with the bug fixed.',
    },
    Optimize = {
      prompt = '> /COPILOT_GENERATE\n\nOptimize the selected code to improve performance and readability.',
    },
    Docs = {
      prompt = '> /COPILOT_GENERATE\n\nPlease add documentation comments to the selected code.',
    },
    Tests = {
      prompt = '> /COPILOT_GENERATE\n\nPlease generate tests for my code.',
    },
    Commit = {
      prompt = '> #git:staged\n\nWrite commit message for the change with commitizen convention. Make sure the title has maximum 50 characters and message is wrapped at 72 characters. Wrap the whole message in code block with language gitcommit.',
    },
  },

  -- default mappings
  -- see config/mappings.lua for implementation
  mappings = {
    complete = {
      insert = '<Tab>',
    },
    close = {
      normal = 'q',
      insert = '<C-c>',
    },
    reset = {
      normal = '<C-l>',
      insert = '<C-l>',
    },
    submit_prompt = {
      normal = '<CR>',
      insert = '<C-s>',
    },
    toggle_sticky = {
      detail = 'Makes line under cursor sticky or deletes sticky line.',
      normal = 'gr',
    },
    accept_diff = {
      normal = '<C-y>',
      insert = '<C-y>',
    },
    jump_to_diff = {
      normal = 'gj',
    },
    quickfix_answers = {
      normal = 'gqa',
    },
    quickfix_diffs = {
      normal = 'gqd',
    },
    yank_diff = {
      normal = 'gy',
      register = '"', -- Default register to use for yanking
    },
    show_diff = {
      normal = 'gd',
      full_diff = false, -- Show full diff instead of unified diff when showing diff window
    },
    show_info = {
      normal = 'gi',
    },
    show_context = {
      normal = 'gc',
    },
    show_help = {
      normal = 'gh',
    },
  },
}
```
You can set local options for the plugin buffers (`copilot-chat`, `copilot-diff`, `copilot-overlay`):
```lua
vim.api.nvim_create_autocmd('BufEnter', {
  pattern = 'copilot-*',
  callback = function()
    -- Set buffer-local options
    vim.opt_local.relativenumber = true

    -- Add buffer-local mappings
    vim.keymap.set('n', '<C-p>', function()
      print(require("CopilotChat").response())
    end, { buffer = true })
  end
})
```
```lua
local chat = require("CopilotChat")

-- Window Management
chat.open() -- Open chat window
chat.open({ -- Open with custom options
  window = {
    layout = 'float',
    title = 'Custom Chat',
  },
})
chat.close() -- Close chat window
chat.toggle() -- Toggle chat window
chat.reset() -- Reset chat window

-- Chat Interaction
chat.ask("Explain this code.") -- Basic question
chat.ask("Explain this code.", {
  selection = require("CopilotChat.select").buffer,
  context = { 'buffers', 'files' },
  callback = function(response)
    print("Response:", response)
  end,
})

-- Utilities
chat.prompts() -- Get all available prompts
chat.response() -- Get last response
chat.log_level("debug") -- Set log level

-- Actions
local actions = require("CopilotChat.actions")
actions.pick(actions.prompt_actions({
  selection = require("CopilotChat.select").visual,
}))

-- Update config
chat.setup({
  model = 'gpt-4',
  window = {
    layout = 'float'
  }
})
```
Set up a quick chat command that uses the entire buffer content:
```lua
-- Quick chat keybinding
vim.keymap.set('n', '<leader>ccq', function()
  local input = vim.fn.input("Quick Chat: ")
  if input ~= "" then
    require("CopilotChat").ask(input, {
      selection = require("CopilotChat.select").buffer
    })
  end
end, { desc = "CopilotChat - Quick chat" })
```
Configure the chat window to appear inline near the cursor:
```lua
require("CopilotChat").setup({
  window = {
    layout = 'float',
    relative = 'cursor',
    width = 1,
    height = 0.4,
    row = 1
  }
})
```
Requires telescope.nvim:
```lua
vim.keymap.set('n', '<leader>ccp', function()
  local actions = require("CopilotChat.actions")
  require("CopilotChat.integrations.telescope").pick(actions.prompt_actions())
end, { desc = "CopilotChat - Prompt actions" })
```
Requires PerplexityAI Agent:
```lua
vim.keymap.set({ 'n', 'v' }, '<leader>ccs', function()
  local input = vim.fn.input("Perplexity: ")
  if input ~= "" then
    require("CopilotChat").ask(input, {
      agent = "perplexityai",
      selection = false,
    })
  end
end, { desc = "CopilotChat - Perplexity Search" })
```
Use render-markdown.nvim for better chat display:
```lua
-- Register copilot-chat filetype
require('render-markdown').setup({
  file_types = { 'markdown', 'copilot-chat' },
})

-- Adjust chat display settings
require('CopilotChat').setup({
  highlight_headers = false,
  separator = '---',
  error_header = '> [!ERROR] Error',
})
```
To set up the environment:
- Clone the repository:

```sh
git clone https://github.com/CopilotC-Nvim/CopilotChat.nvim
cd CopilotChat.nvim
```

- Install development dependencies:

```sh
# Install pre-commit hooks
make install-pre-commit
```

To run tests:

```sh
make test
```
- Fork the repository
- Create your feature branch
- Make your changes
- Run tests and lint checks
- Submit a pull request
See CONTRIBUTING.md for detailed guidelines.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!