Commit
Merge pull request #59 from ferric-sol/main
Clarifying instructions to run locally
sirkitree authored Oct 28, 2024
2 parents bb3f397 + 8f45ec7 commit 28716e7
5 changes: 5 additions & 0 deletions README.md
@@ -80,6 +80,11 @@ npx --no node-llama-cpp source download --gpu cuda

Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.

## Running locally
Add `XAI_MODEL` and set it to one of the options listed in [Run with
Llama](#run-with-llama). You can leave `X_SERVER_URL` and `XAI_API_KEY` blank;
the model is downloaded from Hugging Face and queried locally.
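As a sketch, the corresponding environment entries might look like the following. The variable names are taken from the section above; the model value is purely illustrative, so substitute one of the actual options from [Run with Llama](#run-with-llama):

```shell
# .env (sketch) — local Llama setup
# XAI_MODEL: illustrative value; pick a model name from "Run with Llama"
XAI_MODEL=llama_local
# Left blank on purpose: the model is fetched from Hugging Face
# and queried locally, so no remote server or API key is needed.
X_SERVER_URL=
XAI_API_KEY=
```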

# Cloud Setup (with OpenAI)

In addition to the environment variables above, you will need to add the following:
