bug: Issues with cli on Linux and Mac #746

Closed
avb-is-me opened this issue Jun 21, 2024 · 2 comments · Fixed by #749

Labels
type: bug Something isn't working

Comments

avb-is-me commented Jun 21, 2024

I was able to install the CLI globally and complete a pull, but on both Linux (GitHub Codespaces, 16 GB) and Mac (Apple M2, 8 GB) I ran into issues with cortex models run llama3. When I ran the command, nothing seemed to happen. Additionally, cortex serve would run successfully, but visiting it led to a 404.

Looks successful:

[Screenshot, 2024-06-20 11:38 PM: cortex serve output that looks successful]

When visited:

[Screenshot, 2024-06-20 11:39 PM: 404 response in the browser]

To Reproduce

Steps to reproduce the behavior:

1. Install Cortex using NPM

npm i -g @janhq/cortex

2. Download a GGUF model

cortex models pull llama3

3. Run the model to start chatting

cortex models run llama3

4. (Optional) Run Cortex in OpenAI-compatible server mode

cortex serve
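
As a sanity check for step 4, you can probe the server from a second terminal. This is only a sketch, assuming cortex serve binds to localhost:1337 (the default shown in the comment below); adjust host and port if yours differ.

curl -i http://localhost:1337/api
# Expect an HTTP 200 and the Swagger UI page once the server is up;
# a 404 here matches the failure described above.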

Expected behavior
A success message or chat interaction on cortex models run {model}, and the Swagger UI on cortex serve.

Desktop (please complete the following information):

  • OS: Linux, Mac
avb-is-me added the type: bug label on Jun 21, 2024
l2D (Contributor) commented Jun 22, 2024

Edit:

I tried to run cortex run <model_id> and it seems to work correctly.

cortex run llama3
Inorder to exit, type 'exit()'.
>> tell me a joke
Here's one:

Why couldn't the bicycle stand up by itself?

(wait for it...)

Because it was two-tired!

Hope that made you laugh!

>>

My device:

  • Chip: Apple M3 Pro
  • Memory: 18 GB
  • OS: Sonoma 14.5 (23F79)

You should run the model with:

cortex models start <model_id>

# Output:
# cortex models start llama3
# { message: 'Model loaded successfully', modelId: 'llama3' }

and the Swagger UI is available at http://${host}:${port}/api, e.g. http://localhost:1337/api
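
With the model started and the server up, a request can also be sent from the command line. This is a hedged sketch: it assumes the server exposes the conventional OpenAI-compatible /v1/chat/completions route, which is not confirmed in this thread; check the Swagger UI above for the actual paths.

# Hypothetical request; the route and payload follow the OpenAI convention
# and may differ from cortex's actual API.
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "tell me a joke"}]}'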

[Screenshot, 2024-06-22 3:28 PM: Swagger UI]

avb-is-me (Author) commented

@louis-jan did you try on GitHub Codespaces using Linux? Also, I'd suggest trying a fresh install, i.e., as a user who hasn't installed cortex yet or pulled a model.
