epic: llama.cpp is installed by default #1217
Comments
@hiento09 has worked on a PR that moves the llama.cpp engine install into the installer, but I am concerned that the UX is still not great.
Just saw this today.
QA Updates (v75)
We download the CUDA dependencies that the installed NVIDIA driver supports.
cc: @hiento09 @dan-homebrew
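To make "the CUDA dependencies that the driver supports" concrete, here is a minimal sketch of picking the newest CUDA runtime a given NVIDIA driver version can run. The driver version itself can be read with `nvidia-smi --query-gpu=driver_version --format=csv,noheader`. The threshold table is illustrative (Linux minimums from NVIDIA's release notes at the time of writing) and is not Cortex's actual logic; verify against current NVIDIA documentation before relying on it.

```python
# Sketch: choose the newest CUDA runtime the local NVIDIA driver supports.
# The (CUDA version, minimum Linux driver) pairs below are illustrative,
# taken from NVIDIA release notes; check current NVIDIA docs for real values.

CUDA_DRIVER_MINIMUMS = [
    ((12, 0), (525, 60, 13)),  # CUDA 12.x needs driver >= 525.60.13 (Linux)
    ((11, 8), (450, 80, 2)),   # CUDA 11.x minor-compat needs >= 450.80.02
]

def parse_version(s: str) -> tuple:
    """Turn a dotted version string like '535.104.05' into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

def max_supported_cuda(driver_version: str):
    """Return the newest known CUDA version the driver supports, or None."""
    driver = parse_version(driver_version)
    for cuda, minimum in CUDA_DRIVER_MINIMUMS:  # descending CUDA versions
        if driver >= minimum:
            return cuda
    return None
```

With a table like this, the installer could download only the matching CUDA dependency bundle instead of asking the user to choose.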
Goal
Cortex.cpp should have a super easy UX that is on par with market alternatives.
Idea
I wonder whether the solution to this is a way to have an optional local lookup as part of `cortex engines install`.
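The optional-local-lookup idea could be sketched as: before downloading an engine, check a local cache directory for a previously fetched bundle and reuse it if present. The function name, cache layout, and injected `download` callback below are all hypothetical illustrations, not Cortex's actual implementation.

```python
# Hypothetical sketch of an install with optional local lookup:
# reuse a cached engine bundle if one exists, otherwise download and cache it.
from pathlib import Path

def install_engine(name: str, cache_dir: Path, download) -> Path:
    """Install an engine, preferring a locally cached bundle over a download."""
    cached = cache_dir / f"{name}.tar.gz"
    if cached.exists():
        return cached              # fast/offline path: reuse the local bundle
    cache_dir.mkdir(parents=True, exist_ok=True)
    download(name, cached)         # fallback: fetch once, cache for next time
    return cached
```

The benefit for onboarding UX is that repeat installs (and air-gapped machines pre-seeded with the bundle) never hit the network.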
Out-of-scope (future)
Outcomes
Key Questions
sycl
Appendix
Why?
Our current cortex.cpp v0.1 onboarding UX is not user-friendly: