From 10b16f00d7f98286ac9c8ccc52b579802fbcd7ce Mon Sep 17 00:00:00 2001
From: Casey Caruso
Date: Wed, 21 Feb 2024 09:27:51 -0500
Subject: [PATCH] docs: poetry install instructions (#8)

---
 README.md | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 7d568ac..1d6c9ed 100644
--- a/README.md
+++ b/README.md
@@ -47,10 +47,11 @@ On the LLM side, Selfie uses tools like LiteLLM and txtai to support forwarding
 
 To launch Selfie, ensure that [python](https://www.python.org), [poetry](https://python-poetry.org), and [yarn](https://yarnpkg.com) are installed. Then run the following commands in the project directory:
 
-1. `cp selfie/.env.example selfie/.env`.
-2. `./scripts/build-ui.sh` (requires `yarn`)
-3. `poetry install`, enable GPU or Metal acceleration via llama.cpp by subsequently installing GPU-enabled llama-cpp-python, see Scripts.
-4. `poetry run python -m selfie`, or `poetry run python -m selfie --gpu` if your device is GPU-enabled. The first time you run this, it will download ~4GB of model weights. While you wait, you can download your WhatsApp or Google Takeout data for the next step.
+1. `git clone git@github.com:vana-com/selfie.git`
+2. `cp selfie/.env.example selfie/.env`
+3. `./scripts/build-ui.sh` (requires `yarn`)
+4. `poetry install`. To enable GPU or Metal acceleration via llama.cpp, subsequently install GPU-enabled llama-cpp-python (see Scripts). _Note: if you are on macOS and do not have poetry installed, you can run `brew install poetry`._
+5. `poetry run python -m selfie`, or `poetry run python -m selfie --gpu` if your device is GPU-enabled. The first time you run this, it will download ~4GB of model weights. While you wait, you can download your WhatsApp or Google Takeout data for the next step.
 
 [//]: # (Disable this note about installing with GPU support until supported via transformers, etc.)
 [//]: # (3. `poetry install` or `poetry install -E gpu` (to enable GPU devices via transformers). Enable GPU or Metal acceleration via llama.cpp by installing GPU-enabled llama-cpp-python, see Scripts.)
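
Taken together, the numbered steps added by this patch correspond roughly to the shell session below. This is a sketch only: the `cd selfie` step is an assumption (the README says to run the remaining commands in the project directory), and the `--gpu` flag applies only to GPU-enabled devices.

```bash
# Minimal sketch of the install flow described in the patch above.
git clone git@github.com:vana-com/selfie.git
cd selfie                      # assumed: enter the project directory before the next steps
cp selfie/.env.example selfie/.env
./scripts/build-ui.sh          # requires yarn
poetry install                 # on macOS, install poetry first with: brew install poetry
poetry run python -m selfie    # add --gpu if your device is GPU-enabled; first run downloads ~4GB of weights
```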