I was able to globally install the CLI and complete a pull, but on both Linux (GitHub Codespaces, 16 GB) and Mac (Apple M2, 8 GB) I ran into issues with cortex models run llama3. When I ran the command, nothing seemed to happen. Additionally, cortex serve would start successfully, but visiting it led to a 404.
Serve output looks successful (screenshot omitted).
When visited, the page returns a 404 (screenshot omitted).
To Reproduce
Steps to reproduce the behavior:
1. Install Cortex using NPM
npm i -g @janhq/cortex
2. Download a GGUF model
cortex models pull llama3
3. Run the model to start chatting
cortex models run llama3
4. (Optional) Run Cortex in OpenAI-compatible server mode
cortex serve
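Since step 3 appeared to do nothing and step 4 served a page that 404s, a couple of rough checks from the shell may help narrow it down. This is only a sketch: the host/port (localhost:1337 here) and the exact API paths are assumptions on my part, so substitute whatever cortex serve actually prints at startup.
# Confirm the globally installed binary is the one being run.
which cortex
npm ls -g @janhq/cortex
# With cortex serve running in another terminal, probe the API directly
# instead of the root URL. An OpenAI-compatible server usually exposes
# /v1/models and /v1/chat/completions; the port below is an assumption.
curl -i http://localhost:1337/v1/models
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'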
Expected behavior
A success message or an interactive session on cortex models run {model}, and the Swagger UI on cortex serve.
Desktop (please complete the following information):
OS: Linux, Mac
I tried to run cortex run <model_id> and it seems to be working correctly.
cortex run llama3
In order to exit, type 'exit()'.
>> tell me a joke
Here's one: Why couldn't the bicycle stand up by itself?
(wait for it...)
Because it was two-tired!
Hope that made you laugh!
>>
@louis-jan did you try on a GitHub Codespace running Linux? Also, could you try a fresh install, i.e. as a user who hasn't installed Cortex yet or pulled a model? (Rough sketch below.)
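For reference, this is what I mean by a fresh install, sketched with plain npm commands (nothing Cortex-specific beyond the commands already in the issue). Previously downloaded model data may also need clearing, but I'm not assuming a particular storage path.
# Remove the existing global install, then reinstall from scratch.
npm uninstall -g @janhq/cortex
npm i -g @janhq/cortex
# Re-run the original repro steps.
cortex models pull llama3
cortex models run llama3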