llama_bootstrap: failed to load model from '/model.bin' #9
Models are not bundled in the image for licensing reasons: models like GPT4All, Alpaca, and Vicuna are based on LLaMA from Facebook, whose license prohibits modification, alteration, and redistribution of the weights in any form. See for instance nomic-ai/gpt4all#75. Sadly, until there is a model with a free license that allows redistribution, we can't embed one in the image without risking yet another DMCA takedown. You need to obtain the model yourself and specify it as described in https://github.com/go-skynet/llama-cli#using-other-models
I get this error despite mounting. Here's my command:
Can you try by using the
Just noticed this is being set on the main container image; a fix is landing in master! (bf85a31)
@regstuff now the
@mudler It's not working for me either, even with the latest image.
The project is great, but I'd recommend refactoring the documentation to make it clearer; it's kind of confusing to figure out what to do. I'm also preparing a docker-compose.yml file, which I can share when it's done.
Hi @jonit-dev, you need to specify a volume to Docker with -v so it mounts a path local to the host inside the container; see the instructions here. For a Docker Compose file, have a look at #10. On the other hand, I completely agree: I will rework the documentation as soon as possible. There are many gaps, and other new features being added need to be documented too.
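For anyone landing here, the -v mount described above can be sketched roughly like this. This is a minimal illustration, not a command from this thread: the host directory, model filename, image tag, and CLI flags are all assumptions to be checked against the README linked earlier.

```shell
# Hypothetical sketch: mount a host directory containing the model
# into the container, then point the CLI at the mounted path.
# Directory, filename, image tag, and flags are illustrative assumptions.
docker run -ti --rm \
  -v "$PWD/models:/models" \
  quay.io/go-skynet/llama-cli:latest \
  --model /models/your-model.bin \
  --instruction "Tell me a joke"
```

The key point from the comment above: the path passed to the CLI must be the path *inside* the container (the right-hand side of the -v mapping), not the host path.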
Documentation is getting a revamp, and @mkellerman put together a nice integration with chatgpt-web; an e2e docker-compose file would be just great!
Instructions have been updated to run with docker-compose, with multi-model support too: https://github.com/go-skynet/llama-cli#usage I'll close this issue for now; if you are still facing issues, just re-open it!
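A compose file along the lines discussed in this thread might look like the sketch below. Everything here is an assumption (service name, image tag, mount paths, and the command-line flags); treat it as a starting point and reconcile it with the usage docs linked above, not as the project's official compose file.

```yaml
# Hypothetical docker-compose.yml sketch; image tag, paths, and
# flags are illustrative assumptions, not taken from this thread.
version: "3.8"
services:
  llama:
    image: quay.io/go-skynet/llama-cli:latest
    volumes:
      # Host ./models directory is mounted at /models in the container,
      # so the model path below must use the container-side path.
      - ./models:/models
    command: --model /models/your-model.bin
```

With a layout like this, `docker compose up` would start the service with the model directory mounted, avoiding the `failed to load model from '/model.bin'` error as long as the file actually exists under ./models on the host.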
Looks like the latest is failing. Perhaps a broken path?