(prompted by w3c/media-source#320 and w3c/webtransport#522)

In traditional HLS/DASH-style live streaming, if the buffer is sufficiently big, the video player can do its networking in bursts rather than receive media continuously; in some circumstances (see the discussions linked above) this can improve battery life. MoQT is focused on getting media sent as soon as possible, which means we can't do anything of this nature even when the buffer is big. We should think about whether we want to do anything about this.
I think we already have the semantics in SUBSCRIBE to allow burst networking under a large forward buffer. Assuming your player is willing to tolerate a large buffer (say 20s), you could issue absolute SUBSCRIBEs for 20-second blocks of groups every 20 seconds, choosing how far behind the live edge you want to retrieve the content. Since that data was produced in the past, it would be delivered at line speed rather than encode speed, further reducing radio time.
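A rough sketch of that pattern, assuming a hypothetical MoQT client API (the `MoqClient` interface, the `subscribe` signature, and the one-group-per-second assumption are illustrative, not taken from any real implementation or from the draft): instead of one long-lived live subscription, the player issues an AbsoluteRange SUBSCRIBE for the next block of groups once per buffer interval, so media that already exists arrives in a burst and the radio can idle in between.

```ts
// Hypothetical MoQT client API; names and shapes are illustrative only.
interface MoqClient {
  subscribe(opts: {
    trackNamespace: string;
    trackName: string;
    filterType: "AbsoluteRange";
    startGroup: number;
    endGroup: number;
  }): Promise<void>;
}

const GROUP_DURATION_S = 1; // assumption: one group of pictures per second
const BUFFER_S = 20;        // forward buffer the player is willing to hold

async function batchFetch(client: MoqClient, liveEdgeGroup: number) {
  // Stay BUFFER_S behind the live edge so every requested group already exists
  // and can be delivered at line speed.
  const groupsPerBatch = BUFFER_S / GROUP_DURATION_S;
  let nextGroup = liveEdgeGroup - groupsPerBatch;

  while (true) {
    await client.subscribe({
      trackNamespace: "example.com/live", // illustrative names
      trackName: "video",
      filterType: "AbsoluteRange",
      startGroup: nextGroup,
      endGroup: nextGroup + groupsPerBatch - 1,
    });
    nextGroup += groupsPerBatch;
    // Sleep until the next batch of groups has been produced at the origin.
    await new Promise((resolve) => setTimeout(resolve, BUFFER_S * 1000));
  }
}
```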
For live content, SUBSCRIBE today doesn't convey how delay-tolerant the subscription is, so it seems like you'd always want to forward bytes on as quickly as possible, on the assumption that the subscriber has a small buffer.
I think this, along with a number of other issues, means that explicitly communicating a latency target / jitter buffer could be very helpful.
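To make that concrete, here is a sketch of what such a signal could look like if carried as a subscribe parameter; the `latencyTargetMs` parameter and the surrounding message shape are purely hypothetical and are not part of the current MoQT draft.

```ts
// Hypothetical extension: a subscribe parameter telling the relay how much
// forward buffer the player holds, so it may coalesce objects into periodic
// bursts instead of forwarding each one immediately.
interface SubscribeWithLatencyTarget {
  trackNamespace: string;
  trackName: string;
  filterType: "LatestGroup";
  parameters: {
    // Player's jitter buffer / tolerable delivery delay, in milliseconds.
    latencyTargetMs: number;
  };
}

const delayTolerantSub: SubscribeWithLatencyTarget = {
  trackNamespace: "example.com/live",
  trackName: "video",
  filterType: "LatestGroup",
  parameters: { latencyTargetMs: 20_000 },
};
```

With a hint like this, a relay could coalesce sends for delay-tolerant subscribers while still forwarding immediately for low-latency ones, without changing the default "as fast as possible" behavior.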