
epic: llama.cpp should support sycl for Intel-based CPUs #1252

Closed

dan-menlo opened this issue Sep 19, 2024 · 1 comment
Labels
category: hardware management Related to hardware & compute

Comments

@dan-menlo
Contributor

dan-menlo commented Sep 19, 2024

Goal

  • llama.cpp should support SYCL for Intel-based CPUs

Tasklist

  • What should be the hardware heuristic to opt for LLVM? (see the sketch after this list)
  • Should Installer support this?
  • This may require cortex engines to be able to define a "default" llama.cpp version (if two or more are installed)
@dan-menlo dan-menlo added this to Menlo Sep 19, 2024
@dan-menlo dan-menlo converted this from a draft issue Sep 19, 2024
@dan-menlo
Contributor Author

Closing to handle as part of #1453
