From 05b4bcf57c36cfc6a71be715f5160086908e5ae7 Mon Sep 17 00:00:00 2001
From: Simon Willison
Date: Tue, 11 Jul 2023 20:09:07 -0700
Subject: [PATCH] Docs for llm logs on/off/status, closes #98

---
 docs/logging.md | 39 +++++++++++++++++++++++++++++----------
 docs/setup.md   | 34 +++++++++++++++++++++++++++++++++-
 docs/usage.md   | 17 +----------------
 3 files changed, 63 insertions(+), 27 deletions(-)

diff --git a/docs/logging.md b/docs/logging.md
index 22eff8ff..afc7c9f6 100644
--- a/docs/logging.md
+++ b/docs/logging.md
@@ -1,13 +1,9 @@
+(logging)=
 # Logging to SQLite
 
-`llm` can log all prompts and responses to a SQLite database.
+`llm` defaults to logging all prompts and responses to a SQLite database.
 
-First, create a database in the correct location. You can do that using the `llm init-db` command:
-
-```bash
-llm init-db
-```
-This creates a database in a directory on your computer. You can find the location of that database using the `llm logs path` command:
+You can find the location of that database using the `llm logs path` command:
 
 ```bash
 llm logs path
@@ -18,12 +14,35 @@ On my Mac that outputs:
 ```
 This will differ for other operating systems.
-Once that SQLite database has been created any prompts you run will be logged to that database.
-
-To avoid logging a prompt, pass `--no-log` or `-n` to the command:
+To avoid logging an individual prompt, pass `--no-log` or `-n` to the command:
 
 ```bash
 llm 'Ten names for cheesecakes' -n
 ```
+
+To turn logging off by default:
+
+```bash
+llm logs off
+```
+To turn it back on again:
+
+```bash
+llm logs on
+```
+
+To see the status of that database, run this:
+```bash
+llm logs status
+```
+Example output:
+```
+Logging is ON for all prompts
+Found log database at /Users/simon/Library/Application Support/io.datasette.llm/logs.db
+Number of conversations logged: 32
+Number of responses logged: 47
+Database file size: 19.96MB
+```
+
 ## Viewing the logs
 
 You can view the logs using the `llm logs` command:

diff --git a/docs/setup.md b/docs/setup.md
index 8a27eab4..40dc300c 100644
--- a/docs/setup.md
+++ b/docs/setup.md
@@ -98,7 +98,26 @@ If no environment variable is found, the tool will fall back to checking `keys.j
 You can force the tool to use the key from `keys.json` even if an environment variable has also been set using `llm "prompt" --key openai`.
 
-## Custom directory location
+## Configuration
+
+You can configure LLM in a number of different ways.
+
+### Setting a custom default model
+
+The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.
+
+You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:
+
+```bash
+llm models default gpt-4
+```
+You can view the current default model by running this:
+```bash
+llm models default
+```
+Any of the supported aliases for a model can be passed to this command.
+
+### Setting a custom directory location
 
 This tool stores various files - prompt templates, stored keys, preferences, a database of logs - in a directory on your computer.
@@ -111,3 +130,16 @@ You can set a custom location for this directory by setting the `LLM_USER_PATH`
 ```bash
 export LLM_USER_PATH=/path/to/my/custom/directory
 ```
+### Turning SQLite logging on and off
+
+By default, LLM will log every prompt and response you make to a SQLite database - see {ref}`logging` for more details.
+
+You can turn this behavior off by running:
+```bash
+llm logs off
+```
+Or turn it back on again with:
+```bash
+llm logs on
+```
+Run `llm logs status` to see the current state of the setting.
\ No newline at end of file
diff --git a/docs/usage.md b/docs/usage.md
index 9abc258d..d547560f 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -155,19 +155,4 @@ When running a prompt you can pass the full model name or any of the aliases to
 ```bash
 llm -m chatgpt-16k 'As many names for cheesecakes as you can think of, with detailed descriptions'
 ```
-Models that have been installed using plugins will be shown here as well.
-
-## Setting a custom default model
-
-The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.
-
-You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:
-
-```bash
-llm models default gpt-4
-```
-You can view the current model by running this:
-```
-llm models default
-```
-Any of the supported aliases for a model can be passed to this command.
\ No newline at end of file
+Models that have been installed using plugins will be shown here as well.
\ No newline at end of file