docs: Remove the warning about ollama being new
MohamedBassem committed Oct 12, 2024
1 parent 6035dff commit 5e44cfb
Showing 2 changed files with 5 additions and 4 deletions.
7 changes: 4 additions & 3 deletions docs/docs/02-Installation/01-docker.md
@@ -52,15 +52,16 @@ OPENAI_API_KEY=<key>
Learn more about the costs of using openai [here](/openai).

<details>
-<summary>[EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
+<summary>If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>

-**Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle tested as using openai, so proceed with care (and potentially expect a bunch of inference failures).
+**Note:** The quality of the tags you'll get will depend on the quality of the model you choose.

- Make sure ollama is running.
- Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
-- Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `mistral`)
+- Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
- Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
- Make sure that you `ollama pull`-ed the models that you want to use.
- You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.


</details>
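Taken together, the steps in this hunk amount to an env-file fragment like the following sketch (the model names and the context-length value are illustrative examples, not requirements; 11434 is Ollama's default port):

```shell
# Point the app at a running Ollama instance (Ollama listens on port 11434 by default)
OLLAMA_BASE_URL=http://localhost:11434

# Models must be pulled first, e.g.: ollama pull llama3.1 && ollama pull llava
INFERENCE_TEXT_MODEL=llama3.1
INFERENCE_IMAGE_MODEL=llava

# Optional: raise the context length for better tags, at the cost of more
# inference resources (4096 is an example value, not a recommended default)
INFERENCE_CONTEXT_LENGTH=4096
```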
2 changes: 1 addition & 1 deletion docs/docs/03-configuration.md
@@ -45,7 +45,7 @@ Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic taggin
:::warning

- The quality of the tags you'll get will depend on the quality of the model you choose.
-- Running local models is a recent addition and not as battle tested as using OpenAI, so proceed with care (and potentially expect a bunch of inference failures).
+- You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be (money-wise on OpenAI and resource-wise on ollama).
:::

| Name | Required | Default | Description |
