# Ollama.ai client for NeoVIM

## Dependencies

1. An [Ollama](https://ollama.ai) server
2. curl
3. [plenary.nvim](https://github.com/nvim-lua/plenary.nvim)

## Usage

1. Start the Ollama server on your machine (typically `ollama serve`) and make sure the model you want is available (e.g. `ollama pull codellama`)
2. Call `:Ollama` inside NeoVIM

## Installation

Using [packer](https://github.com/wbthomason/packer.nvim):

```lua
use {
    "totu/nvim-ollama",
    requires = { { "nvim-lua/plenary.nvim" } }
}
```
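
If you use lazy.nvim instead, an equivalent plugin spec should look like the sketch below; only the packer form above is documented upstream, so treat this as an assumption:

```lua
-- Hypothetical lazy.nvim spec (only packer is documented upstream);
-- this table goes inside your require("lazy").setup({ ... }) plugin list.
{
    "totu/nvim-ollama",
    dependencies = { "nvim-lua/plenary.nvim" },
},
```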

## Configuration / Setup

You can change the model being queried as well as the address and port of the Ollama server. The defaults are `model = "codellama"`, `address = "localhost"`, and `port = 11434`.

Here is an example configuration:

```lua
local ollama = require("nvim-ollama")
ollama.setup({
    model = "codellama",
    address = "127.0.0.1",
    port = 11434,
})
```
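
For instance, to point the plugin at an Ollama server running on another machine with a different model (a sketch with example values only; the model must already be pulled on that server):

```lua
local ollama = require("nvim-ollama")

-- Example values: substitute the host and model you actually run.
ollama.setup({
    model = "mistral",        -- any model available on the server
    address = "192.168.1.50", -- remote Ollama host (hypothetical address)
    port = 11434,             -- Ollama's default port
})
```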

You can bind the Ollama commands to keys like this:

vim.keymap.set("n", "<leader>t", ":OllamaToggle<cr>")
vim.keymap.set("n", "<leader>o", ":Ollama<cr>")

## Functions

- `:Ollama` : starts a chat with the Ollama server
- `:OllamaHide` : hides the Ollama window
- `:OllamaShow` : shows the Ollama window
- `:OllamaToggle` : toggles between showing and hiding the Ollama window

## Example of use

"Screenshot of ollama in action"