From dde02245bcd51a7ede7b6789c82ae217cac53d92 Mon Sep 17 00:00:00 2001
From: Marco Braga
Date: Mon, 8 Jul 2024 11:19:50 -0300
Subject: [PATCH] fix(docs): Fix concepts.mdx referencing to installation page
 (#1779)

* Fix/update concepts.mdx referencing to installation page

The link for `/installation` is broken in the "Main Concepts" page. The
correct path would be `./installation` or maybe
`/installation/getting-started/installation`

* fix: docs

---------

Co-authored-by: Javier Martinez
---
 fern/docs/pages/installation/concepts.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fern/docs/pages/installation/concepts.mdx b/fern/docs/pages/installation/concepts.mdx
index 1ccb44682..43e727aa2 100644
--- a/fern/docs/pages/installation/concepts.mdx
+++ b/fern/docs/pages/installation/concepts.mdx
@@ -15,13 +15,13 @@ You get to decide the setup for these 3 main components:
 There is an extra component that can be enabled or disabled: the UI. It is a Gradio UI that allows to interact with the API in a more user-friendly way.
 
 ### Setups and Dependencies
-Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](/installation) section.
+Your setup will be the combination of the different options available. You'll find recommended setups in the [installation](./installation) section.
 PrivateGPT uses poetry to manage its dependencies. You can install the dependencies for the different setups by running `poetry install --extras " ..."`.
 Extras are the different options available for each component. For example, to install the dependencies for a a local setup with UI and qdrant as vector database, Ollama as LLM and HuggingFace as local embeddings, you would run
 `poetry install --extras "ui vector-stores-qdrant llms-ollama embeddings-huggingface"`.
-Refer to the [installation](/installation) section for more details.
+Refer to the [installation](./installation) section for more details.
 
 ### Setups and Configuration
 PrivateGPT uses yaml to define its configuration in files named `settings-.yaml`.
@@ -57,4 +57,4 @@ For local LLM there are two options:
 In order for LlamaCPP powered LLM to work (the second option), you need to download the LLM model to the `models` folder. You can do so by running the `setup` script:
 ```bash
 poetry run python scripts/setup
-```
\ No newline at end of file
+```
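For reference, the install flow described in the patched docs can be sketched as a shell snippet. This is a hedged sketch, not part of the patch: it assumes a PrivateGPT checkout with poetry on the PATH, and the extras names are taken verbatim from the diff context above.

```shell
# Hypothetical sketch of the setup the docs describe; the extras list below
# comes from the patch's example (UI + qdrant + Ollama LLM + HuggingFace embeddings).
EXTRAS="ui vector-stores-qdrant llms-ollama embeddings-huggingface"

# Install dependencies for the chosen setup (assumes poetry is installed):
echo poetry install --extras "$EXTRAS"

# For the LlamaCPP-powered LLM option, download the model to the models folder:
echo poetry run python scripts/setup
```

The `echo` lines only print the commands; drop the `echo` to actually run them in a real checkout.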