What audio-only tests are needed? #51
Comments
From requirements analysis of the DPCTF spec as presented in DPCTF 9/1/2021:
These requirements lead to:

Also, from discussion in Mezzanine/Issue 37, additional utility tests: System SNR Estimation test -- an automated test used to verify that equipment meets basic minimum quality-of-transmission requirements.

A final note: there are TBDs in the DPCTF spec regarding some audio observations. As these are resolved into actual requirements, new tests may be required.
@cta-source I may be missing something, but the list above appears to start from a clean sheet, whereas I was expecting we would be re-using the template HTML+JS from the video-only tests. Look at sections 8 and 9 of the DPCTF spec. I was expecting something like this:
I also assume that we will re-use the template HTML+JS. I'm not sure, but I think we're at different points in implementation--my list is still an intermediate step. I'm mapping the tests I identified to 8.2, 8.3, etc. So rather than crafting a unique test for each item, certain tests or functions will eventually be built into the test suite that need to be called for each case. So 8.2 requires the following:
@gitwjr and I have been discussing and documenting this mapping. However, eventually someone needs to do the integration that would presumably re-use the template HTML+JSON.
It may be that I'm missing something about what information is being embedded in the audio & with what granularity. For video, each frame includes stream identifier, timecode of the frame, frame number, frame rate.
Short answer: No encoded data other than a long, always-the-same PN string, from which we can extract timing but nothing else. Bitrate is effectively 0 after you extract the timing information. I'm assuming we get all that data (stream identifier, etc.) from the video portion of the test, and the audio portion of the test is just verifying the audio playout characteristics. (This "timing-only" limitation is obviously a problem for audio-only stream testing, should that become a priority. See below for options in that case.)

Long answer (get coffee now): From this always-the-same string, we can extract timing information but nothing else. That's what we're doing so far. After you have timing information, you can do more (which we are not yet doing): to encode useful data, we need a modulation scheme. Here I would take some subframe of the full 2.8M bits--say, 1000 bits at a time--and do something to it that you can work out later, like invert all of a subframe's individual 1's and 0's to encode a '0', but leave them the same for a '1'. That would allow you to take the 2,880,000 bits of PN sequence and carry 2,880 bits of encoded data.

The penalty is that the 1000:1 loss of data rate is exactly 30 dB of SNR penalty in demodulating (for time in the first part, and useful data in the second part). We can no longer use the full 2,880,000-bit string of data to check the start of timing; we've broken it up into 1000-bit pieces and have to look for each piece separately. As you know, we're already worried about SNR issues for some test environment scenarios. However, we're already incurring that penalty because we need 20 ms resolution on timing--which means we check about 1000 bits at a time anyway (actually 960 bits, but close enough). We could encode one bit per 20 ms sequence eventually, with (I think) about a 2:1 penalty on test time. We'd get an effective rate of 50 bps of encoded 'raw' data. In 60 seconds that would be roughly 150-200 effective bytes (assuming the usual overhead for this sort of data transmission: putting the raw data into packets, checksums, HDLC bit stuffing, etc.).

This is a theory; I've never done it this way (inverting segments of a PN sequence). I can think of a reason it might not work, but it might be fine. Happy to put it into the queue to try it someday, but I'd like to continue to focus on the current goals, ofc.
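To make the subframe-inversion idea above concrete, here is a minimal sketch, assuming a NumPy bit array stands in for the real PN sequence. The sequence length, subframe size, seed, and payload are illustrative placeholders, and this is not the actual mezzanine tooling, just the encode/decode logic described in the comment.

```python
# Sketch of subframe-inversion modulation over a fixed PN sequence:
# one payload bit per subframe, flip the whole subframe for '0', leave it for '1'.
import numpy as np

PN_LEN = 2_880_000   # length of the always-the-same PN string (bits)
SUBFRAME = 1_000     # bits per subframe; 1000:1 rate loss ~ 10*log10(1000) = 30 dB SNR penalty

rng = np.random.default_rng(seed=1)                # fixed seed -> reproducible stand-in "PN"
pn = rng.integers(0, 2, PN_LEN, dtype=np.uint8)

def encode(payload_bits):
    """Return a copy of the PN sequence carrying one payload bit per subframe."""
    tx = pn.copy()
    for i, bit in enumerate(payload_bits):
        start = i * SUBFRAME
        if bit == 0:                               # invert the whole subframe to signal '0'
            tx[start:start + SUBFRAME] ^= 1
    return tx

def decode(rx):
    """Compare each received subframe against the reference PN and majority-vote the bit."""
    bits = []
    for i in range(len(rx) // SUBFRAME):
        s = slice(i * SUBFRAME, (i + 1) * SUBFRAME)
        matches = np.count_nonzero(rx[s] == pn[s])
        bits.append(1 if matches > SUBFRAME // 2 else 0)
    return bits

payload = [1, 0, 1, 1, 0]
assert decode(encode(payload))[:len(payload)] == payload
```

The majority vote in `decode` is what buys back robustness at the cost of data rate: a noisy channel can corrupt many of a subframe's 1000 chips before the recovered bit flips, which is the trade-off the 30 dB figure above describes.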
Thanks @jpiesing, this is in line with what we are thinking.
@yanj-github for now, yes, we can play video-only or video+audio tests. For audio-only tests a minor change in the core player is needed, but it is easy to support audio-only tests from our side. In case the group decides on audio-only tests, this is not a blocker.
All QR codes produced by the test runner will be displayed independent of whether the underlying content is audio-only.
PS: audio-only tests are now supported in the feature-multi-mpd branch, which will be merged into master soon after @yanj-github confirms.
Just to add, splicing and switching tests are not included in the e-ac-3 and ac-4 tests that we have generated. These are still under discussion.
@louaybassbouss and @FritzHeiden we are about to get a new OF release out by the end of this week.
@yanj-github the feature-multi-mpd branch is now merged into master, can you please check?
Thanks @louaybassbouss, it looks good to me.
@yanj-github I think we can close this issue, can you confirm?
The original subject of this issue has been moved to cta-wave/Test-Content#24 so I think what's left can be closed. |
I think this one is fine to be closed as well.
What audio-only tests are needed?
For video, I created a sparse matrix identifying which of the HTML+JavaScript test templates needed to run with which of the test content. This helped confirm that we had the correct test content. It was used to create a .csv file from which the actual tests were generated.
This issue is to track the creation of that .csv file for the audio-only tests, e.g. the Dolby ones whose test content is being developed by Eurofins.
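For illustration only, here is a minimal sketch of turning such a template-by-content matrix into rows of a .csv file like the one described above. The template names, content names, file name, and column layout are all invented for this example and are not the project's actual naming or format.

```python
# Sketch: emit one csv row per (template, content) pair marked True in a sparse matrix.
import csv

# Hypothetical template x content matrix: True means "run this template against this content".
matrix = {
    "audio-playback-template.html": {"dolby_ac4_stream": True, "dolby_eac3_stream": True},
    "audio-random-access-template.html": {"dolby_ac4_stream": True, "dolby_eac3_stream": False},
}

with open("audio-only-tests.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["template", "content"])           # header row
    for template, contents in matrix.items():
        for content, selected in contents.items():
            if selected:                                # keep only the pairs marked in the matrix
                writer.writerow([template, content])
```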