Initial docs for llm.get_async_model() and await model.prompt()
Refs #507
simonw committed Nov 13, 2024
1 parent 1c83a4e commit ceb60d2
Showing 1 changed file with 26 additions and 1 deletion: docs/python-api.md
@@ -99,7 +99,7 @@

```python
print(response.text())
```
Some models do not use API keys at all.

### Streaming responses

For models that support it you can stream responses as they are generated, like this:
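
A minimal sketch of that pattern, assuming the standard `llm` streaming interface where iterating over the response yields chunks as they arrive:

```python
import llm

model = llm.get_model("gpt-4o")
response = model.prompt("Five surprising names for a pet pelican")

# Each chunk is printed as soon as the model produces it
for chunk in response:
    print(chunk, end="")
```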

@@ -112,6 +112,31 @@

The `response.text()` method described earlier does this for you - it runs through the iterator and joins the results into a single string.

If a response has been evaluated, `response.text()` will continue to return the same string.
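
Continuing the sketch above: once the stream has been consumed, the response counts as evaluated, so `text()` returns the gathered string without another model call:

```python
# The response was fully consumed by the loop above, so this
# returns the same complete string without re-running the model
print(response.text())
```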

## Async models

Some plugins provide async versions of their supported models, suitable for use with Python [asyncio](https://docs.python.org/3/library/asyncio.html).

To use an async model, use the `llm.get_async_model()` function instead of `llm.get_model()`:

```python
import llm
model = llm.get_async_model("gpt-4o")
```
You can then run a prompt using `await model.prompt(...)`:

```python
result = await model.prompt(
    "Five surprising names for a pet pelican"
)
```
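To get the generated text out of `result`, a hedged sketch assuming the async response mirrors the synchronous API with an awaitable `text()` method:

```python
# Assumption: text() on an async response is a coroutine,
# mirroring the synchronous response.text() shown earlier
print(await result.text())
```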
Or use `async for chunk in ...` to stream the response as it is generated:
```python
async for chunk in model.prompt(
    "Five surprising names for a pet pelican"
):
    print(chunk, end="")
```
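
Note that `await` and `async for` only work inside a coroutine (or an async-aware REPL such as `python -m asyncio`). In an ordinary script, wrap the calls in an `async def` function and run it with `asyncio.run()` - a minimal sketch:

```python
import asyncio
import llm

async def main():
    model = llm.get_async_model("gpt-4o")
    # Stream the response chunk by chunk as it is generated
    async for chunk in model.prompt(
        "Five surprising names for a pet pelican"
    ):
        print(chunk, end="")
    print()

asyncio.run(main())
```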

## Conversations

LLM supports *conversations*, where you ask follow-up questions of a model as part of an ongoing conversation.
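
A sketch of what that looks like with the synchronous API, assuming `model.conversation()` returns an object whose `prompt()` calls share the accumulated context:

```python
import llm

model = llm.get_model("gpt-4o")
conversation = model.conversation()

response = conversation.prompt("Five surprising names for a pet pelican")
print(response.text())

# The follow-up is answered with the first exchange as context
response2 = conversation.prompt("Which of those is your favorite?")
print(response2.text())
```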