Thanks for the FOSS!
Suggestion for possible future backend runtimes: Vulkan, OpenCL, SYCL/OpenVINO (Intel GPUs), and AMD GPUs via ROCm/HIP.
Vulkan and OpenCL both have the potential to be very portable across GPUs, and to some extent across CPUs that have supporting software for them.
SYCL can run on various CPU/GPU platforms; it (together with OpenVINO etc.) is the primary target for supporting Intel GPUs.
This software is based on Candle, which is like PyTorch in Rust. So if you want support for more accelerators, you should follow development there.
I like how people suggest a whole batch of new inference backends like it's nothing :D
So GPU acceleration through ROCm would have to be implemented in Candle first?
Their discussion about AMD support can be found here.
There is also a WIP CUDA implementation for non-NVIDIA GPUs: https://github.com/vosen/ZLUDA