Really glad that LiveKit uses Rust, this opens so many possibilities! 🙂 Having an excellent and easy-to-use library for real-time communication is really cool!
I've briefly gone through the example code and published tracks to see if there is a simple and straightforward way to add screen sharing (something as easy to use as in the JS/TS SDK). From what I've understood so far, it seems like currently the only way to do screen sharing is to capture the frames in user code, convert them to YUV with one of the helpers, then pass them to `capture_frame`, which will encode the frame and send it down the line to the RTP engine, etc.
That is, is my understanding correct that at the moment there is no simple way to, e.g., use the screen-capture engine from libwebrtc, so the user is expected to capture the frames by other means and supply them to the tracks?
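To make the "convert them to YUV" step concrete, here is a minimal sketch of what such a helper has to do: turn an RGBA framebuffer (as you might get from a platform screen-capture API) into the three I420 planes that a video frame expects. The function name `rgba_to_i420` is my own illustration, not an SDK API; it uses the common BT.601 limited-range integer approximation and samples the top-left pixel of each 2x2 block for chroma.

```rust
// Sketch: RGBA → I420 (planar YUV 4:2:0) conversion, BT.601 limited range.
// Illustrative only — a real pipeline would use the SDK's YUV helpers.
fn rgba_to_i420(rgba: &[u8], width: usize, height: usize) -> (Vec<u8>, Vec<u8>, Vec<u8>) {
    assert!(width % 2 == 0 && height % 2 == 0, "I420 requires even dimensions");
    let mut y_plane = vec![0u8; width * height];
    let mut u_plane = vec![0u8; width * height / 4];
    let mut v_plane = vec![0u8; width * height / 4];

    for row in 0..height {
        for col in 0..width {
            let i = (row * width + col) * 4;
            let (r, g, b) = (rgba[i] as i32, rgba[i + 1] as i32, rgba[i + 2] as i32);
            // Luma: full resolution, integer approximation of BT.601.
            y_plane[row * width + col] =
                (((66 * r + 129 * g + 25 * b + 128) >> 8) + 16) as u8;
            // Chroma: subsampled 2x2; here we just take the top-left pixel
            // of each block (a real helper would average the block).
            if row % 2 == 0 && col % 2 == 0 {
                let ci = (row / 2) * (width / 2) + col / 2;
                u_plane[ci] = (((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128) as u8;
                v_plane[ci] = (((112 * r - 94 * g - 18 * b + 128) >> 8) + 128) as u8;
            }
        }
    }
    (y_plane, u_plane, v_plane)
}
```

The resulting planes would then be wrapped in whatever frame type the SDK's `capture_frame` accepts and pushed once per captured frame.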
Hey, you're correct — at the moment there is no simple way to do screen shares.
But exposing the native libwebrtc capture engine is planned (as well as microphones and webcams).
EDIT: Just started working on it here