
Would it be a good idea to replace llama.cpp with candle to run quantized models? #276

Open
andychenbruce opened this issue Feb 27, 2024 · 1 comment

Comments

@andychenbruce (Contributor)

Yesterday, support for running quantized models with CUDA was merged into candle:

huggingface/candle#1754

I haven't tested it yet, and it currently looks unstable since it's at a very early stage. From the pull request it appears to only support Q4_0 quantization and still has some bugs, but in the near future it may become as good as llama.cpp.

This would have several benefits. A native Rust solution means no more build.rs, cmake, bindgen, or unsafe calls. Possibly the entire llm-chain-llama-sys crate could be removed. The llama.cpp submodule could also be dropped, so people wouldn't have to modify the Rust code every time llama.cpp is updated to get new features. Candle also supports .safetensors files in addition to .gguf and the legacy .ggml format.
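For context, here is a rough sketch of what loading a quantized GGUF model looks like with candle's quantized llama support, based on the candle examples around the time of that PR. The exact API, crate versions, and the `model.q4_0.gguf` path are assumptions and may differ:

```rust
// Rough sketch based on candle's quantized llama example; the exact API may
// have changed since, and the file path / device choice are placeholders.
use candle_core::quantized::gguf_file;
use candle_core::Device;
use candle_transformers::models::quantized_llama::ModelWeights;

fn main() -> anyhow::Result<()> {
    // Pick CUDA if available, otherwise fall back to CPU.
    let device = Device::cuda_if_available(0)?;

    // Open a GGUF file (e.g. a Q4_0-quantized llama checkpoint) and parse it.
    let mut file = std::fs::File::open("model.q4_0.gguf")?;
    let content = gguf_file::Content::read(&mut file)?;

    // Build the model weights directly from the GGUF tensors -- no llama.cpp,
    // no build.rs/cmake/bindgen, and no unsafe FFI involved.
    let model = ModelWeights::from_gguf(content, &mut file, &device)?;

    // A real program would then tokenize a prompt, call the model's forward
    // pass in a sampling loop, and decode the generated tokens.
    let _ = model;
    Ok(())
}
```

That pure-Rust path is the kind of thing that could replace the current llm-chain-llama-sys bindings into the C++ code.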

@oddpxl commented Mar 14, 2024

I'm using llama.cpp via https://github.com/utilityai/llama-cpp-rs

I get 51 t/s with a 7B model... Candle gives me 19 t/s with the same model.

Agreed that we need something Rust-native, but at the moment it seems the price for that is performance? I'd love to be wrong on this.

(That said, this is on a Mac M1 Max with 64 GB, using Metal, not CUDA.)
