[Feature]: local model support #236
Comments
Is there any specific (and stable) setup you have in mind for running the model in Docker? BR
@malpou I'm constantly thinking about adding local llama support, this would be just killer. I imagine e.g. a setting for it. I suggest taking the smartest and most lightweight model (so the download time isn't more than ~20-30 sec); as the package is installed and updated globally once every 2-3 months, waiting 30 sec once in a while is OK (imo).
Yes, that's exactly my thought. I haven't gotten around to playing with llama2 yet. Is there a standard way to run it in Docker? As far as I can see there are just multiple smaller projects. If you can point me in the right direction on what we would like to use for llama locally, then I can do the rest of the implementation.
I don't know any setup, need to google.
@di-sukharev
I would love to see local model support for this! Edit: I've seen Simon Willison play around a ton with local models, and although I don't have anything off the top of my head, I expect he'd have helpful blog posts to guide this feature. Edit 2: Found this in my stars: https://github.com/nat/openplayground, for playing with LLMs locally...
Me too! Recently I came across the ollama implementation, which might be helpful for you: https://ollama.ai/. Edit: After checking your PR draft, LocalAI seems more robust, or at least seems to have a bigger community, so currently it's a good idea to keep that. Only if your issue doesn't get fixed is this a good alternative to try.
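For anyone weighing the LocalAI option: because LocalAI exposes an OpenAI-compatible HTTP API, a client can keep the request shape it already uses for the hosted OpenAI API and only swap the base URL. Below is a minimal TypeScript sketch, assuming a LocalAI container on its default port 8080 and a model named `llama-2-7b-chat` (both are assumptions; adjust to whatever the container is actually configured with).

```typescript
// Minimal sketch: send a chat-completion request to a locally running,
// OpenAI-compatible server (e.g. LocalAI). Requires Node 18+ for global fetch.
// Assumptions: base URL http://localhost:8080 and model "llama-2-7b-chat".
async function completeLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-2-7b-chat",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Local server returned ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible servers return the text in choices[0].message.content
  return data.choices[0].message.content;
}
```

Since the request and response shapes match the hosted API, the existing prompt-building code would not need to change; only the endpoint does.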
We now support Ollama.
@di-sukharev I tried with the ... (Update: I see the issue. The documentation needs to be updated to state that you need to set ...)
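For reference, here is a rough sketch of what talking to a local Ollama daemon looks like. It assumes Ollama's default port 11434, its `/api/generate` endpoint, and an already pulled `llama2` model; the exact config key the CLI needs (the one the comment above refers to) lives in the project README and is not reproduced here.

```typescript
// Minimal sketch: generate text with a locally running Ollama daemon.
// Assumptions: Ollama's default port 11434, a pulled "llama2" model,
// and Node 18+ for the global fetch API.
async function ollamaGenerate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  // With stream: false, Ollama returns the full completion in `response`
  return data.response;
}
```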
Description
In some organizations it is prohibited to send code to a third party.
Suggested Solution
Support for a dockerized llama-2 model running locally?
Alternatives
No response
Additional Context
No response
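As a rough illustration of how the suggested solution could be integrated without code ever leaving the machine, one option is a small provider abstraction: the hosted OpenAI path and a local path (a dockerized llama-2 behind Ollama, LocalAI, or a similar server) implement the same interface, and configuration picks which one is active. Every name below is hypothetical and only sketches the idea; none of it is the project's actual API.

```typescript
// Hypothetical sketch of a provider abstraction for local models.
// None of these names come from the project; they only illustrate the idea.
interface CompletionProvider {
  complete(prompt: string): Promise<string>;
}

// A local provider talking to a dockerized, OpenAI-compatible server
// (e.g. LocalAI serving llama-2). Nothing leaves the machine.
class LocalLlamaProvider implements CompletionProvider {
  constructor(
    private baseUrl = "http://localhost:8080", // assumed default port
    private model = "llama-2-7b-chat",         // assumed model name
  ) {}

  async complete(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/v1/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!res.ok) throw new Error(`Local model server returned ${res.status}`);
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```

The hosted provider would implement the same interface, so switching between local and remote models becomes a configuration concern rather than a code change.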