From 43cf5a4747712860bea5eba3482db523940bb0b4 Mon Sep 17 00:00:00 2001
From: 0xSage
Date: Tue, 24 Sep 2024 10:15:21 +0800
Subject: [PATCH 1/2] docs: remove outdated links
---
README.md | 221 +++---------------------------------------------------
1 file changed, 9 insertions(+), 212 deletions(-)
diff --git a/README.md b/README.md
index 6adc5a224..2e9c5b2f5 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@
- Changelog - Bug reports - Discord
-> ⚠️ **Cortex.cpp is currently in Development. This documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.**
+> ⚠️ **Cortex.cpp is currently in active development. This documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.**
## Overview
Cortex.cpp is a local AI engine for running and customizing LLMs. Cortex can be deployed as a standalone server or integrated into apps like [Jan.ai](https://jan.ai/).
@@ -28,142 +28,15 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als
- [`onnx`](https://github.com/janhq/cortex.onnx)
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
-To install Cortex.cpp, download the installer for your operating system from the following options:
-
-
-
-> **Note**:
-> You can also build Cortex.cpp from source by following the steps [here](#build-from-source).
-
-
-### Libraries
-- [cortex.js](https://github.com/janhq/cortex.js)
-- [cortex.py](https://github.com/janhq/cortex-python)
-
-## Quickstart
-### CLI
-```bash
-# 1. Start the Cortex.cpp server (The server is running at localhost:3928)
-cortex
-
-# 2. Start a model
-cortex run :[engine_name]
-
-# 3. Stop a model
-cortex stop :[engine_name]
-
-# 4. Stop the Cortex.cpp server
-cortex stop
-```
-### API
-1. Start the API server using `cortex` command.
-2. **Pull a Model**
-```bash
-curl --request POST \
- --url http://localhost:3928/v1/models/{model_id}/pull
-```
-
-3. **Start a Model**
-```bash
-curl --request POST \
- --url http://localhost:3928/v1/models/{model_id}/start \
- --header 'Content-Type: application/json' \
- --data '{
- "prompt_template": "system\n{system_message}\nuser\n{prompt}\nassistant",
- "stop": [],
- "ngl": 4096,
- "ctx_len": 4096,
- "cpu_threads": 10,
- "n_batch": 2048,
- "caching_enabled": true,
- "grp_attn_n": 1,
- "grp_attn_w": 512,
- "mlock": false,
- "flash_attn": true,
- "cache_type": "f16",
- "use_mmap": true,
- "engine": "llamacpp"
-}'
-```
-
-4. **Chat with a Model**
-```bash
-curl http://localhost:3928/v1/chat/completions \
--H "Content-Type: application/json" \
--d '{
-    "model": "mistral",
-    "messages": [
-      {
-        "role": "user",
-        "content": "Hello"
-      }
-    ],
- "stream": true,
- "max_tokens": 1,
-    "stop": [],
- "frequency_penalty": 1,
- "presence_penalty": 1,
- "temperature": 1,
- "top_p": 1
-}'
-```
+## Uninstallation
-5. **Stop a Model**
-```bash
-curl --request POST \
- --url http://localhost:3928/v1/models/mistral/stop
-```
-6. Stop the Cortex.cpp server using `cortex stop` command.
-> **Note**:
-> Our API server is fully compatible with the OpenAI API, making it easy to integrate with any systems or tools that support OpenAI-compatible APIs.
+You can install a nightly (unstable) version of Cortex from Discord here: https://discord.gg/nGp6PMrUqS
## Built-in Model Library
-Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files will be stored at `C:\Users\\AppData\Local\cortexcpp\models`.
-Here are example of models that you can use based on each supported engine:
+Cortex.cpp supports various models available on the [Cortex Hub](https://huggingface.co/cortexso). Once downloaded, all model source files are stored in `~/cortexcpp/models`.
+
+Example models:
| Model | llama.cpp<br>`:gguf` | TensorRT<br>`:tensorrt` | ONNXRuntime<br>`:onnx` | Command |
|------------------|-----------------------|------------------------------------------|----------------------------|-------------------------------|
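The engine columns above map to a tag appended to the model id when running it. A minimal sketch, assuming `mistral` as an illustrative model id (any Cortex Hub model works the same way):

```shell
# Hypothetical sketch: the tag after the colon selects the engine for a model,
# mirroring the table's columns. "mistral" stands in for any Cortex Hub model id.
MODEL=mistral
for TAG in gguf onnx tensorrt; do
  echo "cortex run ${MODEL}:${TAG}"
done
```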
@@ -190,6 +63,7 @@ For complete details on CLI commands, please refer to our [CLI documentation](ht
Cortex.cpp includes a REST API accessible at `localhost:3928`. For a complete list of endpoints and their usage, visit our [API documentation](https://cortex.so/api-reference).
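The API above can be exercised from the command line. A minimal sketch, assuming the server is running on the default port and that `mistral` (an illustrative model id) has already been started; the `echo` prints the request as a dry run, so drop it to actually send the call:

```shell
# A minimal sketch of an OpenAI-compatible chat request against the local
# Cortex.cpp server. "mistral" is an illustrative model id; the endpoint is
# the chat completions route on the default port 3928.
PAYLOAD='{"model": "mistral", "messages": [{"role": "user", "content": "Hello"}]}'
# Printed as a dry run here; remove the `echo` to issue the request for real.
echo curl http://localhost:3928/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```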
## Uninstallation
+
### Windows
1. Navigate to `Add or Remove Programs`.
2. Search for Cortex.cpp and click `Uninstall`.
@@ -205,83 +79,6 @@ sudo sh cortex-uninstall.sh
sudo apt remove cortexcpp
```
-## Alternate Installation
-We also provide Beta and Nightly version.
-
-
### Build from Source
#### Windows
@@ -355,8 +152,8 @@ make -j4
6. Verify that Cortex.cpp is installed correctly by getting help information.
```sh
-# Get the help information
-cortex -h
+# Get help
+cortex
```
## Contact Support
From e607fbdd782fe5ae13947441d66c55dbdcf808b8 Mon Sep 17 00:00:00 2001
From: 0xSage
Date: Tue, 24 Sep 2024 10:16:23 +0800
Subject: [PATCH 2/2] fix: header
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 2e9c5b2f5..3cc1353c8 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@ Cortex.cpp is a multi-engine that uses `llama.cpp` as the default engine but als
- [`onnx`](https://github.com/janhq/cortex.onnx)
- [`tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm)
-## Uninstallation
+## Installation
You can install a nightly (unstable) version of Cortex from Discord here: https://discord.gg/nGp6PMrUqS