fix(llama-cpp): consistently select fallback #3789

Merged · 5 commits merged into master on Oct 11, 2024

Conversation

@mudler mudler (Owner) commented Oct 11, 2024

Description

We didn't take into account the case where the host advertises the CPU flagset, but the optimized binaries are not actually present in the asset dir. The same problem extended to GPU detection, which relied on the fallback backend being present and compiled with GPU support.

This made it possible, for instance, for models that specified the llama-cpp backend directly in the config to never pick up the fallback binary when the optimized binaries were not present.

To reproduce: configure a model that specifies the "llama-cpp" backend manually, and ship only the fallback binary in the build assets (neither the AVX-specific nor the GPU ones).
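For context, here is a minimal sketch (in Go, not the actual LocalAI code) of the selection behaviour this fix aims for: an optimized llama-cpp variant is only chosen when its binary actually exists in the assets dir, and the plain fallback build is always the last candidate. The variant names (llama-cpp-cuda, llama-cpp-avx2, ...), the assets path, and the capability flags passed in are illustrative assumptions, not the exact names used in the codebase.

```go
// Sketch of "only pick a variant whose binary is actually on disk".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// binaryExists reports whether a candidate backend binary is present in assetDir.
func binaryExists(assetDir, name string) bool {
	info, err := os.Stat(filepath.Join(assetDir, name))
	return err == nil && !info.IsDir()
}

// selectLlamaCPPVariant returns the first candidate whose binary exists on disk,
// trying GPU and CPU-optimized builds first and ending with the fallback build.
func selectLlamaCPPVariant(assetDir string, gpuAvailable, hasAVX2, hasAVX bool) (string, error) {
	var candidates []string
	if gpuAvailable {
		candidates = append(candidates, "llama-cpp-cuda") // illustrative variant name
	}
	if hasAVX2 {
		candidates = append(candidates, "llama-cpp-avx2")
	}
	if hasAVX {
		candidates = append(candidates, "llama-cpp-avx")
	}
	// The fallback build is always the last resort, even when the CPU flagset
	// or a GPU was detected but the matching binary was never shipped.
	candidates = append(candidates, "llama-cpp-fallback")

	for _, c := range candidates {
		if binaryExists(assetDir, c) {
			return c, nil
		}
	}
	return "", fmt.Errorf("no llama-cpp binary found in %s", assetDir)
}

func main() {
	// Hypothetical assets dir; with only llama-cpp-fallback present, the
	// fallback is selected even though AVX2/AVX are reported by the host.
	variant, err := selectLlamaCPPVariant("./backend-assets/grpc", false, true, true)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("selected:", variant)
}
```

With only the fallback binary present, the sketch still resolves to it despite the CPU flagset being detected, which is the behaviour the reproduction case above expects.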

Notes for Reviewers

It does some refactoring around how the correct binary is picked up, and reduces the complexity of the code.

Should fix #3727 and #3673.

Signed commits

  • Yes, I signed my commits.

@mudler mudler added the "bug" (Something isn't working) label on Oct 11, 2024

netlify bot commented Oct 11, 2024

Deploy Preview for localai ready!

🔨 Latest commit: 8055c87
🔍 Latest deploy log: https://app.netlify.com/sites/localai/deploys/6709256ed990630008369115
😎 Deploy Preview: https://deploy-preview-3789--localai.netlify.app

We didn't take into account the case where the host has the CPU
flagset, but the binaries were not actually present in the asset dir.

This made it possible, for instance, for models that specified the llama-cpp
backend directly in the config to never pick up the fallback
binary when the optimized binaries were not present.

Signed-off-by: Ettore Di Giacinto <[email protected]>
@mudler mudler force-pushed the fix/llama-cpp-fallback branch from ead515c to c47a451 on October 11, 2024 at 10:23
Signed-off-by: Ettore Di Giacinto <[email protected]>
Signed-off-by: Ettore Di Giacinto <[email protected]>
@mudler mudler force-pushed the fix/llama-cpp-fallback branch from 7deb85f to 53cff5c on October 11, 2024 at 10:50
Signed-off-by: Ettore Di Giacinto <[email protected]>
@mudler mudler merged commit be6c4e6 into master Oct 11, 2024
31 checks passed
@mudler mudler deleted the fix/llama-cpp-fallback branch October 11, 2024 14:55