From 5e44cfbc45fd09caa414f9b676e2458758df4e5d Mon Sep 17 00:00:00 2001
From: MohamedBassem
Date: Sat, 12 Oct 2024 17:29:35 +0000
Subject: [PATCH] docs: Remove the warning about ollama being new

---
 docs/docs/02-Installation/01-docker.md | 7 ++++---
 docs/docs/03-configuration.md          | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/docs/02-Installation/01-docker.md b/docs/docs/02-Installation/01-docker.md
index 4745529a..fc46eb6c 100644
--- a/docs/docs/02-Installation/01-docker.md
+++ b/docs/docs/02-Installation/01-docker.md
@@ -52,15 +52,16 @@ OPENAI_API_KEY=

   Learn more about the costs of using openai [here](/openai).
- [EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) instead for local inference.
+ If you want to use Ollama (https://ollama.com/) for local inference instead.

- **Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle tested as using openai, so proceed with care (and potentially expect a bunch of inference failures).
+ **Note:** The quality of the tags you'll get will depend on the quality of the model you choose.
  - Make sure ollama is running.
  - Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
- - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `mistral`)
+ - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
  - Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
  - Make sure that you `ollama pull`-ed the models that you want to use.
+ - You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.
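+ - Putting it together, a minimal sketch of the relevant `.env` entries (the base URL and the context length value below are illustrative assumptions; adjust them to your setup):
+
+   ```
+   # Ollama listens on port 11434 by default; the host depends on where ollama runs relative to the container
+   OLLAMA_BASE_URL=http://host.docker.internal:11434
+   # Models you already pulled with `ollama pull`
+   INFERENCE_TEXT_MODEL=llama3.1
+   INFERENCE_IMAGE_MODEL=llava
+   # Illustrative value; larger values yield better tags at a higher resource cost
+   INFERENCE_CONTEXT_LENGTH=4096
+   ```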
diff --git a/docs/docs/03-configuration.md b/docs/docs/03-configuration.md
index 98fa7a1a..9abd6fb2 100644
--- a/docs/docs/03-configuration.md
+++ b/docs/docs/03-configuration.md
@@ -45,7 +45,7 @@ Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic tagging

 :::warning
 - The quality of the tags you'll get will depend on the quality of the model you choose.
-- Running local models is a recent addition and not as battle tested as using OpenAI, so proceed with care (and potentially expect a bunch of inference failures).
+- You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be (money-wise on OpenAI and resource-wise on ollama).
 :::

 | Name | Required | Default | Description |