Thank you for the wonderful project. I was able to run it by building locally and adding the coi-serviceworker mentioned in the README.md and #5.
However, when using a different model, the following behavior occurs:
Is this by design, for example, are models above a certain size not supported?
Example models:

```js
const models = [
  // ok (loads successfully)
  "https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/resolve/main/stablelm-2-zephyr-1_6b-Q4_1.gguf",
  // ng (fails to load)
  "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf", // 2.32 GB
  // ng (fails to load)
  "https://huggingface.co/QuantFactory/Meta-Llama-3-8B-GGUF/resolve/main/Meta-Llama-3-8B.Q4_K_M.gguf", // 4.92 GB
];
```
The maximum file size is 2 GB. Have a look at Wllama for a way around this.
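For reference, loading a split model with Wllama looks roughly like this. This is an untested sketch: the config-path keys depend on the Wllama version and bundler setup, the shard URLs are placeholders, and it assumes a Wllama version whose `loadModelFromUrl` accepts a list of shard URLs.

```js
import { Wllama } from '@wllama/wllama';

// Paths to the wasm binaries shipped with the package; the exact keys and
// locations depend on the Wllama version and your bundler setup (assumption).
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': './node_modules/@wllama/wllama/esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': './node_modules/@wllama/wllama/esm/multi-thread/wllama.wasm',
};

const wllama = new Wllama(CONFIG_PATHS);

// Each shard stays under the ~2 GB per-file limit. The URLs below are
// hypothetical placeholders for the output of gguf-split.
await wllama.loadModelFromUrl([
  'https://example.com/Phi-3-mini-4k-instruct-q4-00001-of-00002.gguf',
  'https://example.com/Phi-3-mini-4k-instruct-q4-00002-of-00002.gguf',
]);

const output = await wllama.createCompletion('Hello', { nPredict: 32 });
console.log(output);
```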
@flatsiedatsie Thank you! I was able to split the model. However, a similar problem has come up elsewhere, so I will proceed with resolving that as a separate issue (linked below).
Split tool: https://github.com/ggerganov/llama.cpp/tree/master/examples/gguf-split
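A splitting command would look roughly like this (untested sketch; the flag names follow the gguf-split example in llama.cpp, and the binary may be called llama-gguf-split in newer builds, so check `--help` on your build):

```sh
# Split the model into shards of at most 2 GB each.
# Output files are named like Phi-3-mini-4k-instruct-q4-00001-of-00002.gguf.
./gguf-split --split --split-max-size 2G \
    Phi-3-mini-4k-instruct-q4.gguf \
    Phi-3-mini-4k-instruct-q4
```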
New issue: ngxson/wllama#12
Let me guess: memory issues :-)