diff --git a/README.md b/README.md index 786c339f1..a4d27d09c 100644 --- a/README.md +++ b/README.md @@ -29,45 +29,168 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als - [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm) ## Installation +The Local Installer packages all required dependencies, so you don’t need an internet connection during the installation process. + +Alternatively, Cortex is available with a [Network Installer](#network-installer), which downloads the necessary dependencies from the internet during installation. +### Beta Preview (Stable version coming soon!) +### Windows: + + + cortex-local-installer.exe + + +### MacOS: + + + cortex-local-installer.pkg + + +### Linux: + + + cortex-local-installer.deb + + +Download the installer and run the following command in the terminal: +```bash + sudo apt install ./cortex-local-installer.deb + # or + sudo apt install ./cortex-network-installer.deb +``` +The binary will be installed in the `/usr/bin/` directory. -### Download +## Usage +After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For the Beta preview, run `cortex-beta --help`. + +## Built-in Model Library + +Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files will be stored in `~\cortexcpp\models`. + +Example models: + +| Model | llama.cpp
`:gguf` | TensorRT
`:tensorrt` | ONNXRuntime
`:onnx` | Command | +|------------------|-----------------------|------------------------------------------|----------------------------|-------------------------------| +| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf | +| llama3 | ✅ | ✅ | ✅ | cortex run llama3 | +| mistral | ✅ | ✅ | ✅ | cortex run mistral | +| qwen2 | ✅ | | | cortex run qwen2:7b-gguf | +| codestral | ✅ | | | cortex run codestral:22b-gguf | +| command-r | ✅ | | | cortex run command-r:35b-gguf | +| gemma | ✅ | | ✅ | cortex run gemma | +| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf | +| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 | +| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium | +| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini | +| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf | + +> **Note**: +> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models. + +## Cortex.cpp CLI Commands +For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli). + +## REST API +Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference). + +## Advanced Installation + +### Local Installer: Beta & Nightly Versions +Beta is an early preview of upcoming versions of Cortex. It is for users who want to try new features early; we appreciate your feedback. + +Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features. - - - + + + - + + + + + + + + + + + + +
Version TypeWindowsMacOSLinuxWindowsMacOSLinux
Stable (Recommended)Stable - + - Coming soon + cortex-local-installer.exe - + + + cortex-local-installer.pkg + + + + + cortex-local-installer.deb + +
Beta (Preview) + - Coming soon + cortex-local-installer.exe - + - Coming soon + cortex-local-installer.pkg - + + + cortex-local-installer.deb + +
Nightly Build (Experimental) + + + cortex-local-installer.exe + + + - Coming soon + cortex-local-installer.pkg - + + cortex-local-installer.deb + +
+ +### Network Installer +Cortex.cpp is available with a Network Installer, which is a smaller installer but requires internet connection during installation to download the necessary dependencies. + + + + + + + + + + + @@ -80,36 +203,18 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als - - - - - -
Version TypeWindowsMacOSLinux
Stable (Recommended) + + + Coming soon + + + + Coming soon
Beta (Preview) - - - cortex-local-installer.exe - - cortex-network-installer.exe - - - cortex-local-installer.pkg - - cortex-network-installer.pkg - - - cortex-local-installer.deb - - @@ -120,35 +225,17 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als
Nightly Build (Experimental) - - - cortex-local-installer.exe - - - + cortex-network-installer.exe - - - cortex-local-installer.pkg - - cortex-network-installer.pkg - - - cortex-local-installer.deb - - @@ -158,89 +245,6 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als
-### Instructions - -The local installer packages all required dependencies inside the installer itself, so you don’t need an internet connection during the installation process. -The network installer downloads the necessary dependencies from the internet during the installation. This option provides a smaller installer, but requires an internet connection. - -After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For beta and nightly builds, you can run `cortex-beta --help` and `cortex-nightly --help` respectively. - -#### Windows and MacOS - -Download the installer and double-click to the exe file to start the installation process. Follow the on-screen instructions to complete the installation. - -For MacOS, there is a uninstaller script comes with the binary and added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds and `cortex-nightly-uninstall.sh` for nightly builds. - -#### Linux - -Download the installer and run the following command in the terminal: - -```bash - sudo apt install ./cortex-local-installer.deb - # or - sudo apt install ./cortex-network-installer.deb -``` - -The binary will be installed in the `/usr/bin/` directory. - -## Built-in Model Library - -Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files will be stored in `~\cortexcpp\models`. - -Example models: - -| Model | llama.cpp
`:gguf` | TensorRT
`:tensorrt` | ONNXRuntime
`:onnx` | Command | -|------------------|-----------------------|------------------------------------------|----------------------------|-------------------------------| -| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf | -| llama3 | ✅ | ✅ | ✅ | cortex run llama3 | -| mistral | ✅ | ✅ | ✅ | cortex run mistral | -| qwen2 | ✅ | | | cortex run qwen2:7b-gguf | -| codestral | ✅ | | | cortex run codestral:22b-gguf | -| command-r | ✅ | | | cortex run command-r:35b-gguf | -| gemma | ✅ | | ✅ | cortex run gemma | -| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf | -| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 | -| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium | -| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini | -| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf | - -> **Note**: -> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models. - -## Cortex.cpp CLI Commands -For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli). - -## REST API -Cortex.cpp includes a REST API accessible at `localhost:3928`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference). - -## Uninstallation - -### Windows -1. Open the Windows Control Panel. -2. Navigate to `Add or Remove Programs`. -3. Search for `cortexcpp` and double click to uninstall. 
(for beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly` respectively) - -### MacOs -Run the uninstaller script: -```bash -# For stable builds -sudo sh cortex-uninstall.sh -# For beta builds -sudo sh cortex-beta-uninstall.sh -# For nightly builds -sudo sh cortex-nightly-uninstall.sh -``` - -### Linux -```bash -# For stable builds -sudo apt remove cortexcpp -# For beta builds -sudo apt remove cortexcpp-beta -# For nightly builds -sudo apt remove cortexcpp-nightly -``` - ### Build from Source #### Windows @@ -318,9 +322,26 @@ make -j4 cortex ``` +## Uninstallation +### Windows +1. Open the Windows Control Panel. +2. Navigate to `Add or Remove Programs`. +3. Search for `cortexcpp` and double-click to uninstall (for beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly`, respectively). + +### MacOS +Run the uninstaller script: +```bash +sudo sh cortex-uninstall.sh +``` +An uninstaller script ships with the binary and is added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds. + +### Linux +```bash +# For stable builds +sudo apt remove cortexcpp +``` + ## Contact Support -- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose). +- For support, please file a [GitHub ticket](https://github.com/janhq/cortex.cpp/issues/new/choose). - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH). - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai). - -
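The diff above adds a REST API section pointing at `localhost:39281`. As a minimal client sketch for readers who want to try that server from code: note that the `/v1/chat/completions` route and the payload fields below are assumptions based on typical OpenAI-style servers, not something this diff states; verify them against the linked API documentation at cortex.so before relying on them.

```python
import json
import urllib.request

# Server address taken from the REST API section of this README.
BASE_URL = "http://localhost:39281"


def build_chat_request(model: str, prompt: str):
    """Build the URL and JSON body for a chat completion call.

    The /v1/chat/completions route is an assumption (OpenAI-style API);
    check https://cortex.so/api-reference for the actual endpoint list.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{BASE_URL}/v1/chat/completions", json.dumps(body).encode("utf-8")


def send_chat(model: str, prompt: str):
    """Send the request; requires a running cortex server on localhost:39281."""
    url, payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build (but do not send) a request, so this runs without a live server.
url, payload = build_chat_request("llama3.1:gguf", "Hello!")
print(url)
```

Calling `send_chat(...)` instead of just `build_chat_request(...)` requires that a model has been started first, e.g. with `cortex run llama3.1:gguf`.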