
Thanks for the FOSS! Suggestion for possible future backend runtimes: Vulkan, OpenCL, SYCL/OpenVINO/Intel GPU, AMD GPU/ROCm/HIP. #20

Open
ghchris2021 opened this issue Jul 19, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@ghchris2021

Thanks for the FOSS!

Suggestion for possible future backend runtimes: Vulkan, OpenCL, SYCL/OpenVINO/Intel GPU, AMD GPU/ROCm/HIP.

Vulkan and OpenCL are both potentially very portable across GPUs, and to some extent across CPUs that have supporting software for them.

SYCL can run on various CPU/GPU platforms; it (together with OpenVINO etc.) is the primary, ideal route for supporting Intel GPUs.

@James4Ever0

James4Ever0 commented Jul 20, 2024

This software is based on Candle, which is like PyTorch in Rust. So if you want support for more accelerators, you should follow development there.

@evilsocket
Owner

i like people suggesting a bulk of new inference backends like it's nothing :D

@evilsocket evilsocket added the enhancement New feature or request label Jul 21, 2024
@malikwirin

So GPU acceleration through ROCm has to be implemented in Candle first?


Their discussion about AMD support can be found here

There is also a WIP CUDA implementation for non-NVIDIA GPUs: https://github.com/vosen/ZLUDA
