docs: updated docs for running with ollama (litellm no longer needed)
ErikBjare committed Oct 9, 2024
1 parent fa59310 commit f258602
20 changes: 5 additions & 15 deletions docs/providers.md
@@ -38,26 +38,16 @@ To use OpenRouter, set your API key:
export OPENROUTER_API_KEY="your-api-key"
```

-## Local
+## Local/Ollama

There are several ways to run local LLM models in a way that exposes an OpenAI API-compatible server.

-Here's we will cover how to achieve that with `ollama` together with the `litellm` proxy.
+Here we will cover how to achieve that with `ollama`.

-You first need to install `ollama`, and then `litellm` with the `proxy` extra:
+You first need to install `ollama`, then you can run it with:

-```sh
-pipx install litellm[proxy]
-```
-
-Then you can finally run it with:
-
```sh
-MODEL=llama3.2:1b
-ollama pull $MODEL
+ollama pull llama3.2:1b
ollama serve
-litellm --model ollama/$MODEL
-
-export OPENAI_API_BASE="http://127.0.0.1:4000"
-gptme 'hello' -m local/ollama/$MODEL
+OPENAI_API_BASE="http://127.0.0.1:11434" gptme 'hello' -m local/llama3.2:1b
```
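If the `gptme` call fails, it can help to first confirm that Ollama's OpenAI-compatible endpoint is reachable. A minimal sanity check, assuming the default Ollama port (11434) and that `llama3.2:1b` has already been pulled:

```sh
# Quick check that the Ollama server answers on its OpenAI-compatible endpoint
# before pointing gptme at it (assumes the default port 11434 and that
# llama3.2:1b has already been pulled with `ollama pull`).
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2:1b", "messages": [{"role": "user", "content": "hello"}]}'
```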
