[Webkit-unassigned] [Bug 194747] [MSE] SourceBuffer sample time increment vs. last frame duration check is broken
bugzilla-daemon at webkit.org
Mon Feb 18 14:27:36 PST 2019
https://bugs.webkit.org/show_bug.cgi?id=194747
--- Comment #6 from Jer Noble <jer.noble at apple.com> ---
(In reply to up from comment #5)
> @jer Thank you for the fast response. I was referring to decode time and
> decode duration, as this is what is checked in this case. dts vs. pts does
> not really matter here, and is a separate question: for video without
> frame reordering, e.g. H.264 Baseline Profile, dts and pts are the same
> (possibly with a constant offset), and playback will break just as
> described.
>
> Sorry, I'm not familiar (yet) with the tests in WebKit. I have seen the mock
> source, but I am not 100% sure how it works. Can somebody assist in modeling
> the first frame sequence I described in a mock source test?
Sure, here's a representative test case for this kind of issue:
https://trac.webkit.org/browser/webkit/trunk/LayoutTests/media/media-source/media-source-dropped-iframe.html
Which has the expected results of:
https://trac.webkit.org/browser/webkit/trunk/LayoutTests/media/media-source/media-source-dropped-iframe-expected.txt
You can see the interface to "makeAInitSegment()" and "makeASample()" here:
https://trac.webkit.org/browser/webkit/trunk/LayoutTests/media/media-source/mock-media-source.js
From your initial example, you would need something like:
var samples = concatenateSamples(
    makeASample(0, 0, 10, 100, 1, SAMPLE_FLAG.SYNC, 1),
    makeASample(10, 10, 90, 100, 1, SAMPLE_FLAG.SYNC, 1),
    makeASample(100, 100, 10, 100, 1, SAMPLE_FLAG.SYNC, 1)
);
...for a sample sequence of {DTS, duration} pairs [{0,10}, {10,90}, {100,10}], all with a timescale of 100, where each sample is an I-frame.
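For context, the check this bug is about follows the MSE coded-frame-processing rule: a new sample counts as a discontinuity when its decode timestamp jumps by more than twice the *last* frame's duration. A minimal sketch (my own simplification, not WebKit's actual code) shows why the [{0,10}, {10,90}, {100,10}] sequence is a good probe:

```javascript
// Simplified sketch of the MSE spec's discontinuity check, in the same
// timescale units as the samples above. Per the spec, "last frame
// duration" must track the most recently appended sample; a stale value
// misclassifies legitimate samples as discontinuous.
function isDiscontinuity(lastDecodeTime, lastFrameDuration, newDecodeTime) {
  return newDecodeTime > lastDecodeTime + 2 * lastFrameDuration;
}

// After the {10,90} sample, lastFrameDuration should be 90, so the jump
// from DTS 10 to DTS 100 is within tolerance:
isDiscontinuity(10, 90, 100); // false: 100 <= 10 + 2*90
// If the check wrongly keeps the first frame's duration (10), the same
// jump is misclassified as a discontinuity:
isDiscontinuity(10, 10, 100); // true: 100 > 10 + 2*10
```

The expected test outcome, then, is that all three samples land in a single buffered range rather than two.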
> Regarding a test case, I will try to provide one, but as I said, the issue
> can be understood logically, and that's what I tried to explain in the
> report. I suggest focusing on creating the mock test first.
>
> Regarding the splitting up of large audio sample groups, what are the
> limits that will trigger the split? The problem is that, especially in
> low-latency live streaming use cases, fMP4 fragments are kept pretty
> small, possibly even a single video frame in duration.
I believe that, in theory, on platforms using CoreMedia (i.e., macOS and iOS), a composite audio sample can be divided at sample-rate granularity. So a composite sample with a {DTS, duration} of {0, 44100} and a timescale of 44100 (which is also the audio track's sample rate) could be cleanly divided into two samples of {0, 12345} and {12345, 31755}. (Note that this will probably never line up perfectly with the timescale of a video track, whose own timebases tend not to be based on the audio track's sample rate.)
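To make the arithmetic concrete, here is a hypothetical helper (the function name and shape are my own, not CoreMedia or WebKit API) that splits a composite sample at any point in timescale units; when the timescale equals the audio sample rate, every integer split point lands on an audio frame boundary:

```javascript
// Hypothetical illustration of splitting a composite audio sample at
// sample-rate granularity. Times are integers in timescale units, where
// the timescale equals the audio sample rate (e.g. 44100).
function splitCompositeSample(sample, splitTime) {
  // sample: { dts, duration }; splitTime must fall strictly inside it.
  if (splitTime <= sample.dts || splitTime >= sample.dts + sample.duration)
    throw new RangeError('split point outside sample');
  return [
    { dts: sample.dts, duration: splitTime - sample.dts },
    { dts: splitTime, duration: sample.dts + sample.duration - splitTime },
  ];
}

splitCompositeSample({ dts: 0, duration: 44100 }, 12345);
// → [{ dts: 0, duration: 12345 }, { dts: 12345, duration: 31755 }]
```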