A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models or local Ollama models.
## Features

- Configurable model and token limit (`gpt-4o-mini`, `gpt-4o`, or Ollama)
- Generates and executes terminal commands based on user prompts
- Works in both PowerShell and Unix-like shells (automatically detected)
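Shell detection along these lines can be sketched in a few lines of Rust (an illustrative approximation, not the tool's actual implementation):

```rust
// Minimal sketch of shell detection (illustrative only; llm-term's real
// logic may differ). On Windows we assume PowerShell; elsewhere we fall
// back to the SHELL environment variable, defaulting to /bin/sh.
fn detect_shell() -> String {
    if cfg!(target_os = "windows") {
        "powershell".to_string()
    } else {
        std::env::var("SHELL").unwrap_or_else(|_| "/bin/sh".to_string())
    }
}

fn main() {
    println!("Detected shell: {}", detect_shell());
}
```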
## Installation

- Download the binary from the Releases page
- Add the binary's location to your `PATH`:
  - macOS/Linux:
    ```shell
    export PATH="$PATH:/path/to/llm-term"
    ```
    To set it permanently, add the line above to your shell configuration file (e.g., `.bashrc`, `.zshrc`)
  - Windows (PowerShell):
    ```powershell
    $env:PATH = "$env:PATH;C:\path\to\llm-term"
    ```
    To set it permanently, add the line above to your PowerShell profile (`$PROFILE`)
- Alternatively, build from source:
  - Clone the repository
  - Build the project with Cargo:
    ```shell
    cargo build --release
    ```
  - The executable will be available in the `target/release` directory
## Usage

- Set your OpenAI API key (if using OpenAI models):
  - macOS/Linux:
    ```shell
    export OPENAI_API_KEY="sk-..."
    ```
  - Windows (PowerShell):
    ```powershell
    $env:OPENAI_API_KEY = "sk-..."
    ```
- If using Ollama, make sure it's running locally on the default port (11434)
- Run the application with a prompt:
  ```shell
  ./llm-term "your prompt here"
  ```
- The app will generate a command based on your prompt and ask for confirmation before executing it.
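The confirm-then-execute flow can be sketched as follows (a minimal illustration; the real tool's prompt wording, shell selection, and execution details may differ, and the generated command here is a hard-coded stand-in for the model's output):

```rust
use std::io::{self, Write};
use std::process::Command;

// Returns true only for an explicit "y"/"yes" answer (case-insensitive);
// anything else, including an empty reply, aborts.
fn confirmed(answer: &str) -> bool {
    matches!(answer.trim().to_lowercase().as_str(), "y" | "yes")
}

fn main() {
    // Hypothetical generated command; in llm-term this comes from the model.
    let generated = "ls -la";
    print!("Run `{}`? [y/N] ", generated);
    io::stdout().flush().unwrap();

    let mut answer = String::new();
    io::stdin().read_line(&mut answer).unwrap();

    if confirmed(&answer) {
        // Execute through a Unix shell; on Windows this would go through
        // `powershell -Command` instead.
        let status = Command::new("sh").arg("-c").arg(generated).status().unwrap();
        std::process::exit(status.code().unwrap_or(1));
    } else {
        println!("Aborted.");
    }
}
```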
## Configuration

A `config.json` file will be created in the same directory as the binary on first run. You can modify this file to change the default model and token limit.

- `-c, --config <FILE>`: specify a custom config file path
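For reference, a first-run config might look roughly like this (the field names below are assumptions for illustration; check the generated file for the exact keys):

```json
{
  "model": "gpt-4o-mini",
  "max_tokens": 200
}
```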
## Supported Models

- OpenAI GPT-4o (`gpt-4o`)
- OpenAI GPT-4o mini (`gpt-4o-mini`)
- Ollama (local models, default: `llama3.1`)