- Run ollama on your machine
- Call `:Ollama`
Using packer:

```lua
use {
  "totu/nvim-ollama",
  requires = { "nvim-lua/plenary.nvim" },
}
```
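If you use lazy.nvim rather than packer, an equivalent plugin spec would look like the following (a sketch assuming the plugin needs no build step or extra options):

```lua
-- lazy.nvim spec (assumed equivalent of the packer `use` block above)
{
  "totu/nvim-ollama",
  dependencies = { "nvim-lua/plenary.nvim" },
}
```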
You can change the model being queried, as well as the address and port of the ollama server. The defaults are `model = "codellama"`, `address = "localhost"`, and `port = 11434`.
Here is an example configuration:

```lua
local ollama = require("nvim-ollama")
ollama.setup({
  model = "codellama",
  address = "127.0.0.1",
  port = 11434,
})
```
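For instance, to query a model served on another machine, pass the server's address in `setup()`. The model name and address below are placeholders, not values from the plugin:

```lua
local ollama = require("nvim-ollama")
ollama.setup({
  model = "llama2",           -- placeholder: any model pulled on that server
  address = "192.168.1.50",   -- placeholder: LAN address of the ollama host
  port = 11434,               -- ollama's default port
})
```

Make sure the remote ollama server is configured to accept connections from other hosts, since by default it listens only on localhost.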
You can bind ollama functions like this:

```lua
vim.keymap.set("n", "<leader>t", ":OllamaToggle<cr>")
vim.keymap.set("n", "<leader>o", ":Ollama<cr>")
```
- `:Ollama` : starts a chat with the ollama server
- `:OllamaHide` : hides the ollama window
- `:OllamaShow` : shows the ollama window
- `:OllamaToggle` : toggles between showing and hiding the ollama window