Hi. I'm currently trying to implement some large language models (LLMs) with TorchSharp and have a nice demo working (here). But when moving on to more features, I found that several capabilities required for LLMs are missing:
Custom operators
LLMs depend heavily on custom operators such as flash attention, RMS norm, and GPTQ int4 matmul for faster inference and for smaller models via quantization.
PyTorch allows defining custom operators from native C++ and CUDA source files in two ways: pybind11 and the torch library registration API (TORCH_LIBRARY). The latter seems to work fine with torch.jit.script and could potentially work with TorchSharp's torch.jit.compile and torch.ops.xxx, but loading such a library requires calling a native torch method. TorchSharp may also need some specialized modules for custom ops.
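For illustration, here is a rough, untested sketch of what consuming a TORCH_LIBRARY-registered op from TorchSharp might look like. The file names are placeholders, and whether a plain dlopen via NativeLibrary.Load registers the op with the same libtorch instance that LibTorchSharp links against is exactly the open question:

```csharp
// Rough, untested sketch. "libmyllm_ops.so" and "rms_norm.pt" are placeholder names:
// the .so is assumed to run TORCH_LIBRARY(myllm, ...) in a static initializer, and
// rms_norm.pt is a TorchScript module (scripted in Python) that calls torch.ops.myllm.rms_norm.
using System.Runtime.InteropServices;
using TorchSharp;
using static TorchSharp.torch;

static class CustomOpSketch
{
    public static Tensor RmsNorm(Tensor x)
    {
        // dlopen the native library so its static initializers register the op.
        // Whether that registration is visible to LibTorchSharp's libtorch is untested.
        NativeLibrary.Load("libmyllm_ops.so");

        // Run a scripted module that calls the registered op internally.
        using var scripted = torch.jit.load<Tensor, Tensor>("rms_norm.pt");
        return scripted.forward(x);
    }
}
```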
BTW, openai/triton uses MLIR and LLVM to create custom ops, but it is almost entirely tied to Python.
NCCL ops
I've also tried to implement a thread-based distributed approach with TorchSharp (here). The required communication ops are broadcast, scatter, gather, and all-gather. I'm currently implementing them with the naive copy_ operator, but they are very slow. Would it be possible to provide these NCCL-related ops?
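For reference, a simplified sketch of what the copy_-based broadcast looks like (not the exact code from the repo): each non-root replica does a pairwise device-to-device copy from the root, so transfers are serialized through one device instead of going through a single NCCL collective.

```csharp
// Simplified sketch of a copy_-based broadcast across per-GPU replicas.
// replicas[i] is assumed to live on device cuda:i; replicas[root] holds the data.
using System.Collections.Generic;
using TorchSharp;
using static TorchSharp.torch;

static class NaiveCollectives
{
    public static void Broadcast(IReadOnlyList<Tensor> replicas, int root = 0)
    {
        using var _ = torch.no_grad();
        for (int i = 0; i < replicas.Count; i++)
        {
            if (i == root) continue;
            // Pairwise device-to-device copy; a real NCCL broadcast would move the
            // data with a single ring/tree collective instead.
            replicas[i].copy_(replicas[root]);
        }
    }
}
```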
Wow, this work looks very interesting and potentially very useful! Basic distributed training/inference (one host, multiple GPUs) is currently a gap for TorchSharp, and your implementation could be a step toward addressing that.
In order to increase the chances that your request can be addressed, you might consider laying out specifically what you would need from NCCL. For example, it would be helpful to provide a pointer to the minimal set of torch APIs (in some header file) that would be needed, plus a few lines of sample C# code that would use these APIs and demonstrate that they are working correctly.
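Something along these lines would make the request concrete. To be clear, nothing below exists in TorchSharp today; the interface and signatures are hypothetical placeholders modeled on the torch.distributed collectives, which are backed natively by c10d's NCCL process group:

```csharp
// Hypothetical sketch only: the minimal collective surface the issue asks for,
// expressed as C# signatures a TorchSharp binding over NCCL might expose.
// None of these members exist in TorchSharp today; names mirror torch.distributed.
using System.Collections.Generic;
using TorchSharp;
using static TorchSharp.torch;

public interface IProcessGroup // would wrap a native NCCL process group (c10d)
{
    // Copy `tensor` on rank `src` to the same-shaped tensor on every other rank.
    void broadcast(Tensor tensor, int src);

    // Rank `src` splits `scatterList` across ranks; each rank receives into `output`.
    void scatter(Tensor output, IList<Tensor> scatterList, int src);

    // Every rank sends `input`; rank `dst` collects the pieces into `gatherList`.
    void gather(IList<Tensor> gatherList, Tensor input, int dst);

    // Every rank sends `input` and receives all ranks' tensors into `outputList`.
    void all_gather(IList<Tensor> outputList, Tensor input);
}
```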