diff --git a/README.md b/README.md
index 786c339f1..a4d27d09c 100644
--- a/README.md
+++ b/README.md
@@ -29,45 +29,168 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
## Installation
+The Local Installer packages all required dependencies, so you don't need an internet connection during installation.
+
+Alternatively, Cortex is available with a [Network Installer](#network-installer), which downloads the necessary dependencies from the internet during installation.
+### Beta Preview (Stable version coming soon!)
+### Windows:
+
+    cortex-local-installer.exe
+
+### MacOS:
+
+    cortex-local-installer.pkg
+
+### Linux:
+
+    cortex-local-installer.deb
+
+Download the installer and run the following command in a terminal:
+```bash
+# Local Installer (no internet connection required)
+sudo apt install ./cortex-local-installer.deb
+# or Network Installer
+sudo apt install ./cortex-network-installer.deb
+```
+The binary will be installed in the `/usr/bin/` directory.
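+
+To verify the installation (assuming `/usr/bin` is on your `PATH`):
+
+```bash
+# Confirm the binary was installed and is reachable
+which cortex
+```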
-### Download
+## Usage
+After installation, you can run Cortex.cpp from the command line by typing `cortex --help`. For the Beta preview, run `cortex-beta --help`.
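+
+For example, to list the available commands:
+
+```bash
+# Stable build
+cortex --help
+# The Beta preview installs a separately named binary
+cortex-beta --help
+```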
+
+## Built-in Model Library
+
+Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files are stored in `~/cortexcpp/models`.
+
+Example models:
+
+| Model | llama.cpp<br>`:gguf` | TensorRT<br>`:tensorrt` | ONNXRuntime<br>`:onnx` | Command |
+|------------------|-----------------------|------------------------------------------|----------------------------|-------------------------------|
+| llama3.1 | ✅ | | ✅ | cortex run llama3.1:gguf |
+| llama3 | ✅ | ✅ | ✅ | cortex run llama3 |
+| mistral | ✅ | ✅ | ✅ | cortex run mistral |
+| qwen2 | ✅ | | | cortex run qwen2:7b-gguf |
+| codestral | ✅ | | | cortex run codestral:22b-gguf |
+| command-r | ✅ | | | cortex run command-r:35b-gguf |
+| gemma | ✅ | | ✅ | cortex run gemma |
+| mixtral | ✅ | | | cortex run mixtral:7x8b-gguf |
+| openhermes-2.5 | ✅ | ✅ | ✅ | cortex run openhermes-2.5 |
+| phi3 (medium) | ✅ | | ✅ | cortex run phi3:medium |
+| phi3 (mini) | ✅ | | ✅ | cortex run phi3:mini |
+| tinyllama | ✅ | | | cortex run tinyllama:1b-gguf |
+
+> **Note**:
+> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
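+
+As a quick sketch, you can start any model in the table straight from the terminal; `cortex run` is expected to download the model from the Cortex Hub on first use if it is not already present:
+
+```bash
+# Download (if needed) and chat with Llama 3.1 on the llama.cpp engine
+cortex run llama3.1:gguf
+```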
+
+## Cortex.cpp CLI Commands
+For complete details on CLI commands, please refer to our [CLI documentation](https://cortex.so/docs/cli).
+
+## REST API
+Cortex.cpp includes a REST API accessible at `localhost:39281`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference).
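+
+For example, a minimal chat request with `curl`, assuming the server is running and exposes an OpenAI-compatible `/v1/chat/completions` endpoint (see the API reference for the exact routes and payloads):
+
+```bash
+# Assumed endpoint; verify against the API documentation
+curl http://localhost:39281/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "llama3.1:gguf",
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```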
+
+## Advanced Installation
+
+### Local Installer: Beta & Nightly Versions
+Beta is an early preview of new versions of Cortex. It is for users who want to try new features early, and we appreciate your feedback.
+
+Nightly is our development version of Cortex. It is released every night and may contain bugs and experimental features.
+| Version Type | Windows | MacOS | Linux |
+|--------------|---------|-------|-------|
+| Stable (Recommended) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
+| Beta (Preview) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |
+| Nightly Build (Experimental) | cortex-local-installer.exe | cortex-local-installer.pkg | cortex-local-installer.deb |