0.7
The new `llm aliases` commands can be used to configure additional aliases for models, for example:

    llm aliases set turbo gpt-3.5-turbo-16k

Now you can run the 16,000 token `gpt-3.5-turbo-16k` model like this:

    llm -m turbo 'An epic Greek-style saga about a cheesecake that builds a SQL database from scratch'
Use `llm aliases list` to see a list of aliases and `llm aliases remove turbo` to remove one again. #151
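Putting those commands together, the full alias lifecycle looks roughly like this (a minimal shell sketch; the placeholder prompt and comments are illustrative, not output from the tool):

    llm aliases set turbo gpt-3.5-turbo-16k   # create the alias
    llm aliases list                          # show all configured aliases
    llm -m turbo 'Summarize this release'     # use the alias anywhere a model name is accepted
    llm aliases remove turbo                  # delete the alias again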
Notable new plugins
- `llm-mlc` can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- `llm-llama-cpp` uses llama.cpp to run models published in the GGML format. See Run Llama 2 on your own Mac using LLM and Homebrew for more details. Installing either plugin is sketched after this list.
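A minimal installation sketch, assuming both plugins are published on PyPI under the names shown above:

    llm install llm-mlc        # MLC-backed local models
    llm install llm-llama-cpp  # GGML models via llama.cpp
    llm models list            # the newly installed models should now appear here

Each plugin may require additional setup or model downloads of its own; see the individual plugin READMEs for details.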
Also in this release
- OpenAI models now have min and max validation on their floating point options. Thanks, Pavel Král. #115
- Fix for bug where `llm templates list` raised an error if a template had an empty prompt. Thanks, Sherwin Daganato. #132
- Fixed bug in `llm install --editable` option which prevented installation of `.[test]`. #136
- New `llm install --no-cache-dir` and `--force-reinstall` options (see the sketch after this list). #146
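For example, the two new flags can be combined to force a clean reinstall of a plugin (a sketch; `llm-mlc` is used here only as an illustrative plugin name):

    # Reinstall a plugin, bypassing pip's cache and replacing any existing install
    llm install --no-cache-dir --force-reinstall llm-mlc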