Commit
Docs for llm logs on/off/status, closes #98
simonw committed Jul 12, 2023
1 parent 833c4b4 commit 05b4bcf
Showing 3 changed files with 63 additions and 27 deletions.
39 changes: 29 additions & 10 deletions docs/logging.md
@@ -1,13 +1,9 @@
(logging)=
# Logging to SQLite

`llm` can log all prompts and responses to a SQLite database.
`llm` defaults to logging all prompts and responses to a SQLite database.

First, create a database in the correct location. You can do that using the `llm init-db` command:

```bash
llm init-db
```
This creates a database in a directory on your computer. You can find the location of that database using the `llm logs path` command:
You can find the location of that database using the `llm logs path` command:

```bash
llm logs path
@@ -18,12 +14,35 @@ On my Mac that outputs:
```
This will differ for other operating systems.

Once that SQLite database has been created any prompts you run will be logged to that database.

To avoid logging a prompt, pass `--no-log` or `-n` to the command:
To avoid logging an individual prompt, pass `--no-log` or `-n` to the command:
```bash
llm 'Ten names for cheesecakes' -n
```

To turn default logging off:

```bash
llm logs off
```
To turn it back on again:

```bash
llm logs on
```

To see the status of that database, run this:
```bash
llm logs status
```
Example output:
```
Logging is ON for all prompts
Found log database at /Users/simon/Library/Application Support/io.datasette.llm/logs.db
Number of conversations logged: 32
Number of responses logged: 47
Database file size: 19.96MB
```
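Since the logs live in an ordinary SQLite file, you can also inspect them directly with the `sqlite3` command-line shell. The sketch below is only illustrative: the `responses` table name and its columns are assumptions for demonstration, not a documented schema, so it builds a throwaway database rather than touching the real one.

```shell
# Build a throwaway database with a guessed-at schema, then count rows
# the same way "llm logs status" reports "Number of responses logged".
db=$(mktemp)
sqlite3 "$db" 'CREATE TABLE responses (id INTEGER PRIMARY KEY, prompt TEXT, response TEXT);'
sqlite3 "$db" "INSERT INTO responses (prompt, response) VALUES ('Ten names for cheesecakes', '...');"
sqlite3 "$db" 'SELECT COUNT(*) FROM responses;'
rm "$db"
```

Against the real database you would open `$(llm logs path)` instead of the temporary file, after checking the actual table names with `.tables`.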

## Viewing the logs

You can view the logs using the `llm logs` command:
34 changes: 33 additions & 1 deletion docs/setup.md
@@ -98,7 +98,26 @@ If no environment variable is found, the tool will fall back to checking `keys.j

You can force the tool to use the key from `keys.json` even if an environment variable has also been set using `llm "prompt" --key openai`.

## Custom directory location
## Configuration

You can configure LLM in a number of different ways.

### Setting a custom default model

The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.

You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:

```bash
llm models default gpt-4
```
You can view the current model by running this:
```bash
llm models default
```
Any of the supported aliases for a model can be passed to this command.

### Setting a custom directory location

This tool stores various files - prompt templates, stored keys, preferences, a database of logs - in a directory on your computer.

@@ -111,3 +130,16 @@ You can set a custom location for this directory by setting the `LLM_USER_PATH`
```bash
export LLM_USER_PATH=/path/to/my/custom/directory
```
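As a sketch of how that variable can be used, the snippet below points LLM at a freshly created directory for the current shell session; the use of a temporary directory here is purely illustrative, not a recommended layout.

```shell
# Use a fresh temporary directory as an isolated LLM home for this shell.
# Prompt templates, keys.json and logs.db would then be created under it.
export LLM_USER_PATH="$(mktemp -d)"
echo "$LLM_USER_PATH"
```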
### Turning SQLite logging on and off

By default, LLM will log every prompt and response you make to a SQLite database - see {ref}`logging` for more details.

You can turn this default behavior off by running:
```bash
llm logs off
```
Or turn it back on again with:
```bash
llm logs on
```
Run `llm logs status` to see the current state of the setting.
17 changes: 1 addition & 16 deletions docs/usage.md
@@ -155,19 +155,4 @@ When running a prompt you can pass the full model name or any of the aliases to
```bash
llm -m chatgpt-16k 'As many names for cheesecakes as you can think of, with detailed descriptions'
```
Models that have been installed using plugins will be shown here as well.

## Setting a custom default model

The model used when calling `llm` without the `-m/--model` option defaults to `gpt-3.5-turbo` - the fastest and least expensive OpenAI model, and the same model family that powers ChatGPT.

You can use the `llm models default` command to set a different default model. For GPT-4 (slower and more expensive, but more capable) run this:

```bash
llm models default gpt-4
```
You can view the current model by running this:
```
llm models default
```
Any of the supported aliases for a model can be passed to this command.
Models that have been installed using plugins will be shown here as well.

1 comment on commit 05b4bcf

@simonw (Owner, Author) commented on 05b4bcf Jul 12, 2023


Also refs #68
