feat: cortex run model(:gguf)
#1076
Comments
see: #1303
@vansangpfiev @0xSage Can I ask about our understanding of cortex run? I thought the previous CortexJS implementation had the following logic: if you are running Cortex in a Docker container, you would not want an interactive chat shell. This would be similar to
I'm open to either path - however, I think we need to think about this from a UX point of view.
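A minimal sketch of that behaviour, assuming a POSIX TTY check and hypothetical helper names (StartModel and InteractiveChatShell are illustrative, not cortex.cpp APIs): the command starts the model either way, and only drops into a chat shell when stdin is a terminal, so a Docker or scripted invocation just starts the model and returns.

```cpp
// Illustrative sketch only: one command that starts a model and, when run
// interactively, also opens a chat shell. Helper names are hypothetical.
#include <unistd.h>   // isatty, STDIN_FILENO (POSIX)
#include <iostream>
#include <string>

void StartModel(const std::string& model_id) {
  // Placeholder for loading the model / starting the server process.
  std::cout << "Started model: " << model_id << "\n";
}

void InteractiveChatShell(const std::string& model_id) {
  // Placeholder chat loop; a real implementation would send each line
  // to the chat-completions endpoint and print the response.
  std::string line;
  std::cout << model_id << "> ";
  while (std::getline(std::cin, line) && line != "exit") {
    std::cout << "(reply to: " << line << ")\n" << model_id << "> ";
  }
}

int main(int argc, char** argv) {
  const std::string model_id = argc > 1 ? argv[1] : "tinyllama";
  StartModel(model_id);
  // In a Docker container or a piped/scripted context stdin is not a TTY,
  // so we skip the chat shell and return after starting the model.
  if (isatty(STDIN_FILENO)) {
    InteractiveChatShell(model_id);
  }
  return 0;
}
```

With this shape, the same command covers both the interactive chat UX and a headless container start, without needing a second CLI method.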
We can
I think the model id is
@dan-homebrew I'm good with the old CortexJS design. Let me change the implementation.
Actually, two commands to do the same task do not make sense to me.
@vansangpfiev This is how I see it:
Got it. Let me change the implementation.
@dan-homebrew Since the new command
Yes, sorry to block on this,
@vansangpfiev @0xSage I think the correct way to do this is to overload the cortex run command.
I am not very much in favor of creating an additional CLI method for chat-completions; I would prefer to just simplify.
@vansangpfiev This is probably not the correct UX. We should have a "main" branch equivalent,
Let me discuss with @gabrielle-ong. We will likely have an explicit engine suffix for now; in the long term, this will likely be replaced by something that automatically detects users' hardware and picks the correct engine.
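As a rough illustration of what that longer-term auto-detection could look like (the probing checks and engine-variant names below are assumptions for the sketch, not cortex.cpp's actual selection logic):

```cpp
// Illustrative hardware-based engine selection; the checks and the
// engine-variant names are assumptions, not cortex.cpp's real logic.
#include <fstream>
#include <iostream>
#include <string>

std::string PickEngineVariant() {
#if defined(__APPLE__) && defined(__aarch64__)
  // Apple Silicon: prefer a Metal-accelerated llama.cpp build.
  return "llamacpp-metal";
#else
  // Very rough NVIDIA check on Linux: the driver exposes this proc file.
  std::ifstream nvidia("/proc/driver/nvidia/version");
  if (nvidia.good()) {
    return "llamacpp-cuda";
  }
  // CPU fallback; a real implementation would also probe AVX/AVX2/AVX512.
  return "llamacpp-avx2";
#endif
}

int main() {
  std::cout << "Selected engine: " << PickEngineVariant() << "\n";
  return 0;
}
```

The variant picked here would slot into the same model:engine form discussed in this thread, so users who do nothing still get a sensible build for their machine.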
QA: Commands as listed in #1351:
Clarified: There is no
To be followed by #1362 - don't require
Goals
Tasklist
- cortex run model:llamacpp (see the parsing sketch after this list)
- dylib issue is fixed (bug: libengine.dylib not found #953)
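For reference, a minimal sketch of splitting a model:engine identifier such as model:llamacpp into its two parts (the helper name and the llamacpp default below are assumptions, not the actual cortex.cpp parser):

```cpp
// Illustrative parser for "model[:engine]" identifiers; the function name
// and the default engine value are assumptions for this sketch.
#include <iostream>
#include <string>
#include <utility>

std::pair<std::string, std::string> ParseModelId(
    const std::string& id, const std::string& default_engine = "llamacpp") {
  const auto pos = id.rfind(':');
  if (pos == std::string::npos) {
    return {id, default_engine};                    // "tinyllama"          -> {"tinyllama", "llamacpp"}
  }
  return {id.substr(0, pos), id.substr(pos + 1)};   // "tinyllama:llamacpp" -> {"tinyllama", "llamacpp"}
}

int main() {
  const auto [model, engine] = ParseModelId("tinyllama:llamacpp");
  std::cout << "model=" << model << " engine=" << engine << "\n";
  return 0;
}
```

Because the suffix falls back to a default when omitted, later dropping the requirement to spell out the engine (as #1362 proposes) would only change the default, not the call sites.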