v1.0.0 (LHLS Chunked Transfer Support, IMSC1, Refactor) #2370
Conversation
Changes sound great! I only skimmed through them.
Shouldn't the CI be green also? :)
I emailed netlify to get more info on why the demo build/deploy is failing. Wondering if it's hitting a memory limit.
@tjenkinson I narrowed down the eslint issue to both level-controller and level-helper. I'll continue the process of elimination by deleting sections of code and comments until I find the culprit. Here's one. I guess it doesn't like inline typing.
Fixed the docs task locally, but netlify is still failing. Would that task really be handled during the "preparing repo" stage?
@robwalch no that’s unrelated. Seen it happen a few times now :/ If it happens again I’ll email them about that too. I restarted it and it worked.
Got a reply back about this. It was a known issue with the image we were using. I updated it to the latest one and it should be fixed now 🤞
See #2370 (comment)
This looks great, @johnBartos @robwalch! 🥇
I scanned through the entire diff and focused on the areas which mattered (the progressive bits), as many of the changed files only have minor, unrelated changes (e.g. let -> const).
I haven't tested out this build yet, but I plan on doing so in the near future and will report back on my experiences.
Cheers 🍻
src/controller/buffer-controller.ts
Outdated
// in case any error occurred while appending, put back segment in segments table
logger.error(`[buffer-controller]: Error encountered while trying to append to the ${type} SourceBuffer`, err);
const event = { type: ErrorTypes.MEDIA_ERROR, parent: frag.type, details: '', fatal: false };
if (err.code === 22) {
Minor, would be nice to have an enum with the possible codes added, for readability purposes.
I'm not finding any good sources for error documentation, so I would also guess that vendors have implemented these errors differently across browsers.
One that I can reproduce in Chrome (I called load() on the video after clearing the source) with code 11 is: Error encountered while trying to append to the video SourceBuffer DOMException: Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.
We shouldn't retry in that case, but it's no BUFFER_FULL_ERROR. I'd love to improve the error handling here. The biggest thing missing IMO is an error or err property on the event with the source Error object.
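For illustration only - not the shape shipped in this PR - attaching the source error to the event object from the diff above could look something like this (the err field name is just a suggestion):

const event = {
  type: ErrorTypes.MEDIA_ERROR,
  parent: frag.type,
  details: '',
  fatal: false,
  err // the source Error/DOMException thrown by appendBuffer
};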
@robwalch - Good point on the lack of good documentation; I couldn't find much else either. All I found was: https://www.w3.org/TR/WebIDL-1/#quotaexceedederror
I fiddled with this a bit and found out that this is part of the TypeScript library out-of-box:
let x = DOMException.QUOTA_EXCEEDED_ERR; // number
See screenshot of it in action in the browser (Google Chrome 77).
Very cool. Support is good too. Thanks @michaelcunningham19! I made the change to check:
if (err.code === DOMException.QUOTA_EXCEEDED_ERR) {
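For what it's worth, a more defensive variant could also match on the exception name, since the numeric code is a legacy mechanism. A sketch, not the code in this PR (it assumes the event object and ErrorDetails constant shown elsewhere in hls.js):

// Sketch: prefer the numeric constant, fall back to the DOMException name
if (err.code === DOMException.QUOTA_EXCEEDED_ERR || err.name === 'QuotaExceededError') {
  event.details = ErrorDetails.BUFFER_FULL_ERROR;
}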
src/types/transmuxer.ts
Outdated
public transmuxing: HlsChunkPerformanceTiming = { start: 0, executeStart: 0, executeEnd: 0, end: 0 };
public buffering: { [key in SourceBufferName]: HlsChunkPerformanceTiming } = {
  audio: { start: 0, executeStart: 0, executeEnd: 0, end: 0 },
Could have a named type for this for audio, video, and audiovideo.
It's HlsChunkPerformanceTiming. Am I missing something?
Ah yes, I misread this initially. I thought this was an inline type definition. The type definition is correct.
My first comment was about the multiple inline usages of those timing types for the initial values ({ start: 0, executeStart: 0, executeEnd: 0, end: 0 }).
Thoughts on making a helper for this?
e.g.
function getNewPerformanceTiming(): HlsChunkPerformanceTiming {
  return { start: 0, executeStart: 0, executeEnd: 0, end: 0 };
}
const timing = getNewPerformanceTiming()
Having this helper may make future refactoring easier
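Applied to the fields from the diff above, the initial values would then read something like this (a sketch; audiovideo taken from the SourceBufferName discussion earlier in this thread):

public transmuxing: HlsChunkPerformanceTiming = getNewPerformanceTiming();
public buffering: { [key in SourceBufferName]: HlsChunkPerformanceTiming } = {
  audio: getNewPerformanceTiming(),
  video: getNewPerformanceTiming(),
  audiovideo: getNewPerformanceTiming()
};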
done!
… requests when block reload is available
…ck and subtitle-track controllers
…ng live start position
Run type checks on test files
Cleanup code TODOs
… when unnecessary or past attempts failed
- Mark LL-HLS test streams as live
- Comment out unavailable test stream
- Comment out HEVC test stream (functional tests expect streams that contain codec variants playable by Chrome desktop)
- Do not adjustSliding twice with delta playlist
- Add LevelDetail adjustedSliding distinguished from PTSKnown (+5 squashed commits)
- Align streams without DISCONTINUITY, PROGRAM-DATE-TIME, or known-PTS on SEQUENCE-NUMBER to improve first fragment requested
- Fix delta playlist merging without fragment PTS (adjustSliding with skipped segments)
- Fix playlist sliding check
- Sync levels on PTS by setting transmuxer defaultInitPTS
- Add playbackRate controls to demo
- Add lowLatencyMode config (doesn't yet disable part playlist loading)
- Add SKIP delta playlist tag to README
…reams with audio PES carryover
Cleanup live backbuffer removal
Cleanup fragment-tracker eviction from buffer flushes
Load main fragment at `this.nextLoadPosition` when loading candidate is already loaded
HLS v10 support
* upstream_hls.js/master:
  Minor clarification of exactly what occurs at initialLiveManifestSize.
  Update API.md
  Bump sinon from 9.0.3 to 9.1.0
  Bump eslint-plugin-import from 2.22.0 to 2.22.1
  Bump @types/sinon-chai from 3.2.4 to 3.2.5
  Bump karma from 5.2.2 to 5.2.3
  Bump netlify-cli from 2.64.0 to 2.64.1
  Bump netlify-cli from 2.63.3 to 2.64.0

# Conflicts:
#	package-lock.json
This PR has been replaced by #3072.
v1.0.0
Overview
Earlier this year, the JW Player team set out to add low-latency streaming support to Hls.js. The focus of this work was around progressive streaming - the ability to stream a segment as a series of chunks, from fetch to buffer. To do so we had to refactor the core segment flow (load, transmux, buffer) so that it could handle streaming chunks instead of whole segments.
At a high level, we refactored the architecture to propagate data via return statements and direct callbacks, instead of through the Hls event bus. For example, fragment loading is now done through a promise returned by the fragment loader, instead of being triggered by FRAG_LOADING and received by FRAG_LOADED. We made changes similar to this throughout the pipeline.

We needed to make these changes because of an assumption built into the core of Hls.js - only one segment will be loading at any given time (per controller), and that segment will only be in one state (loading, parsing, or buffering). A lot of logic in Hls.js was built around this assumption. However, when progressively streaming, this isn't true - a segment can be loading, transmuxing, and buffering all at the same time. Unfortunately, the event bus architecture made this very difficult to change, since it assumes that events received outside of their expected state are invalid. For example, Hls.js will throw away FRAG_PARSING_DATA events if the current state is not parsing. Direct data flow made it very easy to break these assumptions because we could effectively do away with using state to control fragment loading flow. This new architecture also has a side benefit: handlers no longer receive events for things they don't care about (for example, the audio-stream-controller handling all FRAG_LOADED events, even ones it did not initiate).

Transmuxing data flow was refactored in a similar way - we moved away from events in favor of returns and directly-set callbacks. But more importantly, we refactored each demuxer and remuxer so that it can process segments as a series of chunks instead of a single, whole unit. Along the way we also made a lot of improvements to how we transmux, especially when it comes to CMAF/fmp4.
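As a simplified sketch of the difference (illustrative names and signatures, not the exact hls.js API):

// Before: data moved over the event bus, and events arriving outside the
// controller's expected state were dropped.
hls.trigger(Events.FRAG_LOADING, { frag });
hls.on(Events.FRAG_LOADED, (data) => {
  if (this.state !== State.FRAG_LOADING) { return; } // out-of-state: ignored
  this.transmux(data);
});

// After: the loader returns a promise and reports chunks via a direct
// callback, so a fragment can be loading, transmuxing, and buffering at once.
fragmentLoader
  .load(frag, (chunk) => this.transmux(chunk)) // onProgress, per chunk
  .then((data) => this.onFragLoaded(data));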
Our hypothesis is that progressive streaming will give us a faster time-to-first-frame, with a modest increase in the number of stalls per play session: on a slow connection, you'll probably stall more often when buffering small chunks as opposed to whole segments. Over the past few months we've been running A/B tests in production, and the results are in line with our hypothesis. Approximately 10% more viewers have a TTFF of under 1 second compared to non-progressive, but those viewers also stall for longer than non-progressive viewers (95% of non-progressive viewers stall for under 1 second, while only 93% stall for under 1s with the progressive provider). Furthermore, we see that on old devices performance isn't as good as non-progressive streaming. We hypothesize that this is because of the additional overhead incurred when progressively streaming - some devices are too slow to take advantage of it.
On the JW side, we're continuing to iterate and improve performance. For example, we put in a change that enforces a minimum chunk size of at least 16kb when streaming progressively. We think that this will help balance the scheduling overhead of chunks with the benefit of buffering faster. We're also continuing to work to determine which devices benefit most from progressive streaming, so that we can intelligently set the progressive config flag.

This PR is not yet complete, but I think it's better to get this pushed up for review sooner rather than later. I think there's a bug or two with this merge right now (see Known Issues below), but those will be worked out soon. Overall we're seeing that almost all of our streams are playing as expected - we'd love for you to test it too.
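For reference, opting in is a single config flag. A sketch (the stream URL and video element are placeholders):

import Hls from 'hls.js';

// progressive: true enables the fetch-based chunked loading described above
const hls = new Hls({ progressive: true });
hls.loadSource('https://example.com/live/playlist.m3u8');
hls.attachMedia(videoElement);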
Changelist
Breaking Changes
- The stats object has changed. See below for more info
- enableSoftwareAES will always be used if set to true
- fetch-loader is used when progressive is true
- Fragment object:
  - hasElementaryStream function has been removed
  - setElementaryStream function has been removed
  - _elementaryStreams property has been removed

General
- progressive (boolean)
- Added fetch-loader as the fragment and playlist loader, which enables progressive streaming
- renderNatively (boolean, false by default)
- bitrateTest (boolean)

Fragment Loaders
- fragment-loader now returns a promise when loading is complete
- onProgress is called whenever a chunk of data is available
- Added a highWaterMark option, which defaults to 16kb. We found this to be a good default size.
- base-stream-controller emits loading events with the same signature/payload
- fetch-loader always streams progressively when enabled; xhr-loader is used for non-progressive streaming

Stream Controllers
- Refactored stream-controller and audio-stream-controller. The bulk of segment logic is now found in the base-stream-controller
- Fragments are now loaded via the fragment-loader
- The FRAGMENT_LOADING event is no longer used to move the binary segment data from loader to controller; loading is now handled via promises
- Controllers interface with the Transmuxer to receive transmuxed chunks
- The pendingBuffering and appended flags have been removed
- Buffered state is now signaled by the FRAG_BUFFERED event

Audio Stream Controller
- Removed data queuing (pendingData) when loading the init segment during audio track switch

Transmuxer
- demuxer has been renamed to transmuxer-interface
- demuxer-worker has been renamed to transmuxer-worker
- demuxer-inline has been renamed to transmuxer
- handleTransmuxComplete is called whenever data has been transmuxed
- handleTransmuxerFlush is called when the current segment has finished transmuxing
- ERROR and FRAG_DECRYPTED events are emitted from the Transmuxer
- The transmuxer interface consists of push and flush (see the sketch at the end of this changelist):
  - push is called for each chunk
  - flush is called when segment loading has completed
  - Both return TransmuxResult objects, or a promise which resolves a TransmuxResult (in the case of AES)
- See jwplayer#199 for a more in-depth list of changes

Buffer Controller
- Operations are queued while the updating SourceBuffer flag is set
- Emits the FRAG_BUFFERED event when all chunks for the current segment have been buffered

Stats
- The stats implementation now lives in load-stats.ts
- See jwplayer#239 for more detail on the changes

Playlist Loading

Captions
- Added IMSC1 support (see renderNatively above)

LHLS
- Support for PREFETCH segments (as defined in the open LHLS spec) has been added
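To make the push/flush contract above concrete, here is a sketch of the shape (the interface name is illustrative; TransmuxResult refers to the type in src/types/transmuxer.ts reviewed earlier in this thread):

// Sketch only - not the exact source. push() handles each chunk as it
// arrives; flush() is called once segment loading has completed. Both
// return a TransmuxResult, or a promise resolving one (in the case of AES).
interface ChunkedTransmuxer {
  push(chunk: Uint8Array): TransmuxResult | Promise<TransmuxResult>;
  flush(): TransmuxResult | Promise<TransmuxResult>;
}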