Multi-output, DeepSeek, new models
What's Changed
- Add Gemini 2.0 Pro model, and use 2.0 Flash as the default by @ahyatt in #152
- Rename llm-make-tool-function to llm-make-tool by @danielkrizian in #153
- Fix README example of tool use by @ahyatt in #156
- Add o3-mini model by @ahyatt in #157
- llm.el (llm-chat-streaming-to-point): Add text processor callback by @ultronozm in #151
- Remove function-calls, add tool-uses to Claude capabilities by @ahyatt in #159
- Add the ability to return multiple outputs via a plist in llm calls by @ahyatt in #160
- Add Claude 3.7 Sonnet model, set as the default by @ahyatt in #161
- Fix Claude streaming tool use by @ultronozm in #162
- Add DeepSeek, with support for a separate reasoning stream by @ahyatt in #163
- Correctly standardize the capability name for tool use, and add a streaming tool-use capability by @ahyatt in #164
- Add the llm-models function to list available models for a service by @ahyatt in #165
- Set keep_alive correctly for Ollama when a non-standard parameter is set by @ahyatt in #166
- Fix instances of plist-put whose return value was not captured with setq by @ahyatt in #167
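The plist-put fix above reflects a general Emacs Lisp rule rather than anything specific to this package: `plist-put` may return a new list instead of modifying its argument in place (notably when the plist is nil), so its result must always be captured. A minimal sketch of the pattern, with a hypothetical `:stream` key for illustration:

```elisp
;; Wrong: if OPTS is nil, plist-put cannot modify it in place,
;; and the new entry is silently lost.
(let ((opts nil))
  (plist-put opts :stream t)
  opts)  ; still nil

;; Right: always reassign the return value.
(let ((opts nil))
  (setq opts (plist-put opts :stream t))
  opts)  ; (:stream t)
```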
New Contributors
- @danielkrizian made their first contribution in #153
Full Changelog: 0.23.0...0.24.0