
fix(llama-cpp-server): fix vulkan build by setting GGML_VULKAN #3133

Closed
wants to merge 1 commit

Conversation

michalwarda

Similar to #2835, the VULKAN flag was also renamed to GGML_VULKAN in llama.cpp. This PR fixes how that flag is set inside llama-cpp-server.

Fixes #2810 ("ROCm and Vulkan seems like doesn't work").
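For context, the change boils down to renaming one CMake define in the server's build script. Below is a minimal sketch of the idea, assuming the crate drives the llama.cpp build through the `cmake` crate and gates Vulkan behind a `vulkan` cargo feature; the path and feature name are illustrative, not the exact Tabby source:

```rust
// build.rs — hedged sketch, not the actual llama-cpp-server build script.
fn main() {
    // Path to the vendored llama.cpp sources is assumed.
    let mut config = cmake::Config::new("llama.cpp");

    // Newer llama.cpp revisions renamed their backend CMake options, so the
    // old Vulkan flag is silently ignored and the Vulkan backend never gets
    // compiled. Defining GGML_VULKAN enables it again.
    if std::env::var("CARGO_FEATURE_VULKAN").is_ok() {
        config.define("GGML_VULKAN", "ON");
    }

    config.build();
}
```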

@wsxiaoys
Member

Hi, thanks for the PR. Have you tested the built artifacts (e.g. before / after the PR)?

@zwpaper
Member

zwpaper commented Jan 18, 2025

As there has been no response and the issue has been resolved, I am closing this pull request.

Thank you for your contribution. Please feel free to address another issue!

@zwpaper closed this Jan 18, 2025