Hi all,
I've tried to set up the RPC feature of llama.cpp across my devices, including Macs and a Windows PC using the CUDA backend, over a Wi-Fi network. Once all devices were set up, the main host's terminal indicated the following:
Then, after loading the tensors, it kept displaying:
It seems that the main host running llama-cli is not connecting to the RPC servers. All the devices are on the same Wi-Fi network and should be able to reach each other, as far as I can tell. Please let me know what I might be missing. Thank you so much.
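
For context, this is roughly the setup I followed, based on the RPC example in the repo (the port, LAN IPs, and model path below are placeholders for my actual values):

```sh
# Build with the RPC backend enabled on every machine
# (on the Windows PC I also add -DGGML_CUDA=ON for the CUDA backend)
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On each worker machine, start the RPC server, binding to all
# interfaces so other hosts on the Wi-Fi network can reach it
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the main host, point llama-cli at the workers
# (replace the placeholder IPs with the workers' actual LAN addresses)
./build/bin/llama-cli -m model.gguf -ngl 99 \
    --rpc 192.168.1.10:50052,192.168.1.11:50052 -p "Hello"
```

My understanding is that if rpc-server only binds to localhost, or a firewall (e.g. on the Windows machine) blocks the port, llama-cli would fail to connect like this, but I haven't found the problem yet.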