<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[287157] trunk/Source/WebCore</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/287157">287157</a></dd>
<dt>Author</dt> <dd>youenn@apple.com</dd>
<dt>Date</dt> <dd>2021-12-16 14:02:54 -0800 (Thu, 16 Dec 2021)</dd>
</dl>

<h3>Log Message</h3>
<pre>Allow AudioSampleDataSource to increase/decrease buffered data progressively
https://bugs.webkit.org/show_bug.cgi?id=233422

Reviewed by Eric Carlson.

AudioSampleDataSource is the link between push audio sources and pull audio sinks.
As such, it needs to buffer data. If the buffer is too small, data may be missing and audio glitches will be heard.
If the buffer is too large, latency is added, which might be undesirable, especially when audio is played alongside video.
We generally want buffered audio to stay within a certain range.

To make this happen, when buffering is too high, we convert the data at a slightly lower sample rate to push fewer samples, until we are back to normal buffering.
Conversely, when buffering is too low, we convert the data at a slightly higher sample rate to push more samples, until we are back to normal buffering.
We do this with three converters that we select based on the amount of buffered data.
This behavior is encapsulated in AudioSampleDataConverter.
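
A condensed sketch of the selection logic, simplified from the new AudioSampleDataConverter::updateBufferedAmount() (the standalone function and parameter names are illustrative only):

    #include <cstddef>

    // Hysteresis between the three converters. Thresholds are derived from the output
    // sample rate: 20 ms (low), 40 ms, 50 ms (regular target), 60 ms and 100 ms (high).
    enum class Mode { Regular, Low, High };

    static Mode selectConverter(Mode current, size_t buffered,
        size_t lowSize, size_t regularLowSize, size_t regularHighSize, size_t highSize)
    {
        if (current == Mode::Regular) {
            if (buffered <= lowSize)
                return Mode::Low;   // underflowing: convert at ~1.05x the rate to push more samples
            if (buffered >= highSize)
                return Mode::High;  // overflowing: convert at ~0.95x the rate to push fewer samples
        } else if (current == Mode::High) {
            if (buffered < regularLowSize)
                return Mode::Regular;
        } else if (buffered > regularHighSize)
            return Mode::Regular;
        return current;
    }

The three converters created in setFormats() differ only in their target sample rate (exact, +5%, -5%), which is what makes the buffered amount drift back toward the 50 ms target.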

We simplify the AudioSampleDataSource implementation by always recomputing the sample offset when there is not enough data.
In that case, we wait for 50 ms of buffered data, which is the average amount of buffering we expect, before we restart pulling data.
All values owned by AudioSampleDataSource (m_expectedNextPushedSampleTimeValue, m_converterInputOffset, m_outputSampleOffset, m_lastBufferedAmount)
are in the outgoing timeline/sample rate.
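(For instance, at a 48 kHz output sample rate, the 50 ms target corresponds to roughly 2400 buffered samples before pulling starts or resumes.)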

This adaptation is only enabled when AudioSampleDataSource::pullSamples is called.
For pullAvailableSamplesAsChunks and pullAvailableSampleChunk, the puller is supposed to be in sync with the pusher.
For that reason, we make sure to always write the expected number of audio frames when pullSamples is called, even if the converter fails.
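
As a rough illustration, the padding decision added to pushSamplesInternal() boils down to the following (the helper and its names are hypothetical, for illustration only):

    #include <cstddef>
    #include <cstdint>

    struct StoreDecision {
        size_t framesToStore;         // number of frames to write into the ring buffer
        int64_t converterOffsetDelta; // how far the converter ran ahead of (or behind) the expected count
    };

    static StoreDecision decideFramesToStore(bool regularConversion, size_t expectedFrames, size_t convertedFrames)
    {
        // With the regular converter, a short conversion (e.g. the very first chunk) is padded
        // to the expected count so the ring-buffer write index stays in sync with the puller.
        if (regularConversion && expectedFrames > convertedFrames)
            return { expectedFrames, 0 };
        return { convertedFrames, static_cast<int64_t>(convertedFrames) - static_cast<int64_t>(expectedFrames) };
    }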

We fix a potential busy loop in AudioSampleDataSource::pullAvailableSamplesAsChunks in case endFrame is lower than startFrame, which is computed from the timeStamp input parameter.

Update MockAudioSharedUnit to be closer to a real source by increasing the queue priority and scheduling rendering tasks from the queue instead of relying on
a main-thread timer, which can have hiccups.
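
Condensed from the MockAudioSharedUnit.mm hunk below, the new scheduling pattern is roughly:

    void MockAudioSharedUnit::render(MonotonicTime renderTime)
    {
        ASSERT(!isMainThread());
        if (!isProducingData())
            return;

        // Schedule the next pass relative to the previous render time so that an
        // occasional late dispatch does not accumulate drift; clamp if we fell behind.
        auto currentTime = MonotonicTime::now();
        auto nextRenderTime = renderTime + renderInterval();
        Seconds nextRenderDelay = nextRenderTime.secondsSinceEpoch() - currentTime.secondsSinceEpoch();
        if (nextRenderDelay.seconds() < 0) {
            nextRenderTime = currentTime;
            nextRenderDelay = 0_s;
        }
        m_workQueue->dispatchAfter(nextRenderDelay, [this, nextRenderTime] {
            render(nextRenderTime);
        });

        // ... generate and emit the next chunk of audio samples ...
    }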

Manually tested.

* SourcesCocoa.txt:
* WebCore.xcodeproj/project.pbxproj:
* platform/audio/cocoa/AudioSampleDataConverter.h: Added.
* platform/audio/cocoa/AudioSampleDataConverter.mm: Added.
* platform/audio/cocoa/AudioSampleDataSource.h:
* platform/audio/cocoa/AudioSampleDataSource.mm:
* platform/mediastream/mac/MockAudioSharedUnit.h:
* platform/mediastream/mac/MockAudioSharedUnit.mm:</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceWebCoreChangeLog">trunk/Source/WebCore/ChangeLog</a></li>
<li><a href="#trunkSourceWebCoreSourcesCocoatxt">trunk/Source/WebCore/SourcesCocoa.txt</a></li>
<li><a href="#trunkSourceWebCoreWebCorexcodeprojprojectpbxproj">trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj</a></li>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleBufferListcpp">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.cpp</a></li>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleBufferListh">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.h</a></li>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleDataSourceh">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h</a></li>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleDataSourcemm">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm</a></li>
<li><a href="#trunkSourceWebCoreplatformmediastreammacMockAudioSharedUnith">trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.h</a></li>
<li><a href="#trunkSourceWebCoreplatformmediastreammacMockAudioSharedUnitmm">trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.mm</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleDataConverterh">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.h</a></li>
<li><a href="#trunkSourceWebCoreplatformaudiococoaAudioSampleDataConvertermm">trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.mm</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceWebCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/ChangeLog (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/ChangeLog   2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/ChangeLog      2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -1,3 +1,45 @@
</span><ins>+2021-12-16  Youenn Fablet  <youenn@apple.com>
+
+        Allow AudioSampleDataSource to increase/decrease buffered data progressively
+        https://bugs.webkit.org/show_bug.cgi?id=233422
+
+        Reviewed by Eric Carlson.
+
+        AudioSampleDataSource is the link between push audio sources and pull audio sinks.
+        As such, it needs to buffer data. If the buffer is too small, data may be missing and audio glitches will be heard.
+        If the buffer is too large, latency is added, which might be undesirable, especially when audio is played alongside video.
+        We generally want buffered audio to stay within a certain range.
+
+        To make this happen, when buffering is too high, we convert the data at a slightly lower sample rate to push fewer samples, until we are back to normal buffering.
+        Conversely, when buffering is too low, we convert the data at a slightly higher sample rate to push more samples, until we are back to normal buffering.
+        We do this with three converters that we select based on the amount of buffered data.
+        This behavior is encapsulated in AudioSampleDataConverter.
+
+        We simplify the AudioSampleDataSource implementation by always recomputing the sample offset when there is not enough data.
+        In that case, we wait for 50 ms of buffered data, which is the average amount of buffering we expect, before we restart pulling data.
+        All values owned by AudioSampleDataSource (m_expectedNextPushedSampleTimeValue, m_converterInputOffset, m_outputSampleOffset, m_lastBufferedAmount)
+        are in the outgoing timeline/sample rate.
+
+        This adaptation is only enabled when AudioSampleDataSource::pullSamples is called.
+        For pullAvailableSamplesAsChunks and pullAvailableSampleChunk, the puller is supposed to be in sync with the pusher.
+        For that reason, we make sure to always write the expected number of audio frames when pullSamples is called, even if the converter fails.
+
+        We fix a potential busy loop in AudioSampleDataSource::pullAvailableSamplesAsChunks in case endFrame is lower than startFrame, which is computed from the timeStamp input parameter.
+
+        Update MockAudioSharedUnit to be closer to a real source by increasing the queue priority and scheduling rendering tasks from the queue instead of relying on
+        a main-thread timer, which can have hiccups.
+
+        Manually tested.
+
+        * SourcesCocoa.txt:
+        * WebCore.xcodeproj/project.pbxproj:
+        * platform/audio/cocoa/AudioSampleDataConverter.h: Added.
+        * platform/audio/cocoa/AudioSampleDataConverter.mm: Added.
+        * platform/audio/cocoa/AudioSampleDataSource.h:
+        * platform/audio/cocoa/AudioSampleDataSource.mm:
+        * platform/mediastream/mac/MockAudioSharedUnit.h:
+        * platform/mediastream/mac/MockAudioSharedUnit.mm:
+
</ins><span class="cx"> 2021-12-16  Sihui Liu  <sihui_liu@apple.com>
</span><span class="cx"> 
</span><span class="cx">         REGRESSION (r286601): storage/filesystemaccess/sync-access-handle-read-write-worker.html and file-system-access/sandboxed_FileSystemSyncAccessHandle-truncate.https.tentative.worker.html are consistently failing
</span></span></pre></div>
<a id="trunkSourceWebCoreSourcesCocoatxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/SourcesCocoa.txt (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/SourcesCocoa.txt    2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/SourcesCocoa.txt       2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -224,6 +224,7 @@
</span><span class="cx"> platform/audio/cocoa/AudioFileReaderCocoa.cpp
</span><span class="cx"> platform/audio/cocoa/AudioOutputUnitAdaptor.cpp
</span><span class="cx"> platform/audio/cocoa/AudioSampleBufferList.cpp
</span><ins>+platform/audio/cocoa/AudioSampleDataConverter.mm
</ins><span class="cx"> platform/audio/cocoa/AudioSampleDataSource.mm
</span><span class="cx"> platform/audio/cocoa/CAAudioStreamDescription.cpp
</span><span class="cx"> platform/audio/cocoa/CARingBuffer.cpp
</span><span class="lines">@@ -315,7 +316,7 @@
</span><span class="cx"> platform/graphics/avfoundation/objc/InbandChapterTrackPrivateAVFObjC.mm @no-unify
</span><span class="cx"> platform/graphics/avfoundation/objc/LocalSampleBufferDisplayLayer.mm
</span><span class="cx"> platform/graphics/avfoundation/objc/MediaPlaybackTargetPickerMac.mm
</span><del>-platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm
</del><ins>+platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm @no-unify
</ins><span class="cx"> platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaSourceAVFObjC.mm @no-unify
</span><span class="cx"> platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm
</span><span class="cx"> platform/graphics/avfoundation/objc/MediaSampleAVFObjC.mm
</span><span class="lines">@@ -322,7 +323,7 @@
</span><span class="cx"> platform/graphics/avfoundation/objc/MediaSourcePrivateAVFObjC.mm
</span><span class="cx"> platform/graphics/avfoundation/objc/SourceBufferPrivateAVFObjC.mm @no-unify
</span><span class="cx"> platform/graphics/avfoundation/objc/SourceBufferParserAVFObjC.mm
</span><del>-platform/graphics/avfoundation/objc/VideoLayerManagerObjC.mm
</del><ins>+platform/graphics/avfoundation/objc/VideoLayerManagerObjC.mm @no-unify
</ins><span class="cx"> platform/graphics/avfoundation/objc/VideoTrackPrivateAVFObjC.cpp
</span><span class="cx"> platform/graphics/avfoundation/objc/VideoTrackPrivateMediaSourceAVFObjC.mm
</span><span class="cx"> platform/graphics/avfoundation/objc/WebCoreAVFResourceLoader.mm
</span></span></pre></div>
<a id="trunkSourceWebCoreWebCorexcodeprojprojectpbxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj   2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/WebCore.xcodeproj/project.pbxproj      2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -1228,6 +1228,9 @@
</span><span class="cx">          41FCCC3B2746675600892AD6 /* CoreAudioCaptureSource.h in Headers */ = {isa = PBXBuildFile; fileRef = 3F3BB5831E709EE400C701F2 /* CoreAudioCaptureSource.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">          41FCD6B923CE015500C62567 /* SampleBufferDisplayLayer.h in Headers */ = {isa = PBXBuildFile; fileRef = 414598BE23C8AAB8002B9CC8 /* SampleBufferDisplayLayer.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">          41FCD6BB23CE027700C62567 /* LocalSampleBufferDisplayLayer.h in Headers */ = {isa = PBXBuildFile; fileRef = 414598C023C8AD78002B9CC8 /* LocalSampleBufferDisplayLayer.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><ins>+               41FFD2C327563E0D00501BBF /* AudioSampleDataConverter.h in Headers */ = {isa = PBXBuildFile; fileRef = 41FFD2C027563DFF00501BBF /* AudioSampleDataConverter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               41FFD2C42756570F00501BBF /* VideoLayerManagerObjC.mm in Sources */ = {isa = PBXBuildFile; fileRef = 52D5A18D1C54590300DE34A3 /* VideoLayerManagerObjC.mm */; };
+               41FFD2C62756573E00501BBF /* MediaPlayerPrivateAVFoundationObjC.mm in Sources */ = {isa = PBXBuildFile; fileRef = DF9AFD7113FC31D80015FEB7 /* MediaPlayerPrivateAVFoundationObjC.mm */; };
</ins><span class="cx">           427DA71D13735DFA007C57FB /* JSServiceWorkerInternals.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 427DA71B13735DFA007C57FB /* JSServiceWorkerInternals.cpp */; };
</span><span class="cx">          427DA71E13735DFA007C57FB /* JSServiceWorkerInternals.h in Headers */ = {isa = PBXBuildFile; fileRef = 427DA71C13735DFA007C57FB /* JSServiceWorkerInternals.h */; };
</span><span class="cx">          43107BE218CC19DE00CC18E8 /* SelectorPseudoTypeMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 43107BE118CC19DE00CC18E8 /* SelectorPseudoTypeMap.h */; };
</span><span class="lines">@@ -8943,6 +8946,8 @@
</span><span class="cx">          41FCB75F214866FF0038ADC6 /* RTCRtpCodecParameters.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = RTCRtpCodecParameters.h; sourceTree = "<group>"; };
</span><span class="cx">          41FCB760214867000038ADC6 /* RTCRtpRtxParameters.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = RTCRtpRtxParameters.h; sourceTree = "<group>"; };
</span><span class="cx">          41FCB761214867000038ADC6 /* RTCRtpEncodingParameters.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = RTCRtpEncodingParameters.h; sourceTree = "<group>"; };
</span><ins>+               41FFD2C027563DFF00501BBF /* AudioSampleDataConverter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AudioSampleDataConverter.h; sourceTree = "<group>"; };
+               41FFD2C227563E0000501BBF /* AudioSampleDataConverter.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = AudioSampleDataConverter.mm; sourceTree = "<group>"; };
</ins><span class="cx">           427DA71B13735DFA007C57FB /* JSServiceWorkerInternals.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSServiceWorkerInternals.cpp; sourceTree = "<group>"; };
</span><span class="cx">          427DA71C13735DFA007C57FB /* JSServiceWorkerInternals.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSServiceWorkerInternals.h; sourceTree = "<group>"; };
</span><span class="cx">          43107BE118CC19DE00CC18E8 /* SelectorPseudoTypeMap.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SelectorPseudoTypeMap.h; sourceTree = "<group>"; };
</span><span class="lines">@@ -30064,6 +30069,8 @@
</span><span class="cx">                          1DB66D37253678EA00B671B9 /* AudioOutputUnitAdaptor.h */,
</span><span class="cx">                          073B87621E43859D0071C0EC /* AudioSampleBufferList.cpp */,
</span><span class="cx">                          073B87631E43859D0071C0EC /* AudioSampleBufferList.h */,
</span><ins>+                               41FFD2C027563DFF00501BBF /* AudioSampleDataConverter.h */,
+                               41FFD2C227563E0000501BBF /* AudioSampleDataConverter.mm */,
</ins><span class="cx">                           073B87651E43859D0071C0EC /* AudioSampleDataSource.h */,
</span><span class="cx">                          073B87641E43859D0071C0EC /* AudioSampleDataSource.mm */,
</span><span class="cx">                          073B87571E40DCFD0071C0EC /* CAAudioStreamDescription.cpp */,
</span><span class="lines">@@ -33322,6 +33329,7 @@
</span><span class="cx">                          FD31608612B026F700C1A359 /* AudioResampler.h in Headers */,
</span><span class="cx">                          FD31608812B026F700C1A359 /* AudioResamplerKernel.h in Headers */,
</span><span class="cx">                          073B87671E4385AC0071C0EC /* AudioSampleBufferList.h in Headers */,
</span><ins>+                               41FFD2C327563E0D00501BBF /* AudioSampleDataConverter.h in Headers */,
</ins><span class="cx">                           073B87691E4385AC0071C0EC /* AudioSampleDataSource.h in Headers */,
</span><span class="cx">                          FD8C46EC154608E700A5910C /* AudioScheduledSourceNode.h in Headers */,
</span><span class="cx">                          CDA7982A170A3D0000D45C55 /* AudioSession.h in Headers */,
</span><span class="lines">@@ -38443,6 +38451,7 @@
</span><span class="cx">                          2D9BF7431DBFDC3E007A7D99 /* MediaKeySystemAccess.cpp in Sources */,
</span><span class="cx">                          9ACC079825C7267700DC6386 /* MediaKeySystemController.cpp in Sources */,
</span><span class="cx">                          9ACC079625C725EE00DC6386 /* MediaKeySystemRequest.cpp in Sources */,
</span><ins>+                               41FFD2C62756573E00501BBF /* MediaPlayerPrivateAVFoundationObjC.mm in Sources */,
</ins><span class="cx">                           CDC8B5A2180463470016E685 /* MediaPlayerPrivateMediaSourceAVFObjC.mm in Sources */,
</span><span class="cx">                          CDA9593524123CB800910EEF /* MediaSessionHelperIOS.mm in Sources */,
</span><span class="cx">                          07638A9A1884487200E15A1B /* MediaSessionManagerIOS.mm in Sources */,
</span><span class="lines">@@ -39072,6 +39081,7 @@
</span><span class="cx">                          7CE68344192143A800F4D928 /* UserMessageHandlerDescriptor.cpp in Sources */,
</span><span class="cx">                          7C73FB07191EF417007DE061 /* UserMessageHandlersNamespace.cpp in Sources */,
</span><span class="cx">                          3FBC4AF3189881560046EE38 /* VideoFullscreenInterfaceAVKit.mm in Sources */,
</span><ins>+                               41FFD2C42756570F00501BBF /* VideoLayerManagerObjC.mm in Sources */,
</ins><span class="cx">                           26F9A83818A046AC00AEB88A /* ViewportConfiguration.cpp in Sources */,
</span><span class="cx">                          CDED1C3C24CD305700934E12 /* VP9UtilitiesCocoa.mm in Sources */,
</span><span class="cx">                          A14832B1187F61E100DA63A6 /* WAKAppKitStubs.m in Sources */,
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleBufferListcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.cpp (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.cpp      2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.cpp 2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -297,17 +297,12 @@
</span><span class="cx">         return 0;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    LOG_ERROR("AudioSampleBufferList::copyFrom(%p) AudioConverterFillComplexBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
</del><ins>+    RELEASE_LOG_ERROR(Media, "AudioSampleBufferList::copyFrom(%p) AudioConverterFillComplexBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
</ins><span class="cx">     m_sampleCount = std::min(m_sampleCapacity, static_cast<size_t>(samplesConverted));
</span><span class="cx">     zero();
</span><span class="cx">     return err;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-OSStatus AudioSampleBufferList::copyFrom(AudioSampleBufferList& source, size_t frameCount, AudioConverterRef converter)
-{
-    return copyFrom(source.bufferList(), frameCount, converter);
-}
-
</del><span class="cx"> OSStatus AudioSampleBufferList::copyFrom(CARingBuffer& ringBuffer, size_t sampleCount, uint64_t startFrame, CARingBuffer::FetchMode mode)
</span><span class="cx"> {
</span><span class="cx">     reset();
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleBufferListh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.h (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.h        2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleBufferList.h   2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -50,7 +50,6 @@
</span><span class="cx"> 
</span><span class="cx">     OSStatus copyFrom(const AudioSampleBufferList&, size_t count = SIZE_MAX);
</span><span class="cx">     OSStatus copyFrom(const AudioBufferList&, size_t frameCount, AudioConverterRef);
</span><del>-    OSStatus copyFrom(AudioSampleBufferList&, size_t frameCount, AudioConverterRef);
</del><span class="cx">     OSStatus copyFrom(CARingBuffer&, size_t frameCount, uint64_t startFrame, CARingBuffer::FetchMode);
</span><span class="cx"> 
</span><span class="cx">     OSStatus mixFrom(const AudioSampleBufferList&, size_t count = SIZE_MAX);
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleDataConverterh"></a>
<div class="addfile"><h4>Added: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.h (0 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.h                             (rev 0)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.h        2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -0,0 +1,75 @@
</span><ins>+/*
+ * Copyright (C) 2021 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+typedef struct AudioBufferList AudioBufferList;
+struct AudioStreamBasicDescription;
+typedef struct OpaqueAudioConverter* AudioConverterRef;
+
+namespace WebCore {
+
+class AudioSampleBufferList;
+class CAAudioStreamDescription;
+class PlatformAudioData;
+
+class AudioSampleDataConverter {
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+    AudioSampleDataConverter() = default;
+    ~AudioSampleDataConverter();
+
+    OSStatus setFormats(const CAAudioStreamDescription& inputDescription, const CAAudioStreamDescription& outputDescription);
+    bool updateBufferedAmount(size_t currentBufferedAmount);
+    OSStatus convert(const AudioBufferList&, AudioSampleBufferList&, size_t sampleCount);
+    size_t regularBufferSize() const { return m_regularBufferSize; }
+    bool isRegular() const { return m_selectedConverter == m_regularConverter; }
+
+private:
+    size_t m_highBufferSize { 0 };
+    size_t m_regularHighBufferSize { 0 };
+    size_t m_regularBufferSize { 0 };
+    size_t m_regularLowBufferSize { 0 };
+    size_t m_lowBufferSize { 0 };
+
+    class Converter {
+    public:
+        Converter() = default;
+        ~Converter();
+
+        OSStatus initialize(const AudioStreamBasicDescription& inputDescription, const AudioStreamBasicDescription& outputDescription);
+        operator AudioConverterRef() const { return m_audioConverter; }
+
+    private:
+        AudioConverterRef m_audioConverter { nullptr };
+    };
+
+    Converter m_lowConverter;
+    Converter m_regularConverter;
+    Converter m_highConverter;
+    AudioConverterRef m_selectedConverter;
+};
+
+} // namespace WebCore
</ins></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleDataConvertermm"></a>
<div class="addfile"><h4>Added: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.mm (0 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.mm                            (rev 0)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataConverter.mm       2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -0,0 +1,114 @@
</span><ins>+/*
+ * Copyright (C) 2021 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#import "config.h"
+#import "AudioSampleDataConverter.h"
+
+#import "AudioSampleBufferList.h"
+#import <AudioToolbox/AudioConverter.h>
+#import <pal/cf/AudioToolboxSoftLink.h>
+
+namespace WebCore {
+
+AudioSampleDataConverter::~AudioSampleDataConverter()
+{
+}
+
+OSStatus AudioSampleDataConverter::setFormats(const CAAudioStreamDescription& inputDescription, const CAAudioStreamDescription& outputDescription)
+{
+    constexpr double buffer100ms = 0.100;
+    constexpr double buffer60ms = 0.060;
+    constexpr double buffer50ms = 0.050;
+    constexpr double buffer40ms = 0.040;
+    constexpr double buffer20ms = 0.020;
+    m_highBufferSize = outputDescription.sampleRate() * buffer100ms;
+    m_regularHighBufferSize = outputDescription.sampleRate() * buffer60ms;
+    m_regularBufferSize = outputDescription.sampleRate() * buffer50ms;
+    m_regularLowBufferSize = outputDescription.sampleRate() * buffer40ms;
+    m_lowBufferSize = outputDescription.sampleRate() * buffer20ms;
+
+    m_selectedConverter = nullptr;
+
+    auto converterOutputDescription = outputDescription.streamDescription();
+    constexpr double slightlyHigherPitch = 1.05;
+    converterOutputDescription.mSampleRate = slightlyHigherPitch * outputDescription.streamDescription().mSampleRate;
+    if (auto error = m_lowConverter.initialize(inputDescription.streamDescription(), converterOutputDescription); error != noErr)
+        return error;
+
+    constexpr double slightlyLowerPitch = 0.95;
+    converterOutputDescription.mSampleRate = slightlyLowerPitch * outputDescription.streamDescription().mSampleRate;
+    if (auto error = m_highConverter.initialize(inputDescription.streamDescription(), converterOutputDescription); error != noErr)
+        return error;
+
+    if (inputDescription == outputDescription)
+        return noErr;
+
+    if (auto error = m_regularConverter.initialize(inputDescription.streamDescription(), outputDescription.streamDescription()); error != noErr)
+        return error;
+
+    m_selectedConverter = m_regularConverter;
+    return noErr;
+}
+
+bool AudioSampleDataConverter::updateBufferedAmount(size_t currentBufferedAmount)
+{
+    if (currentBufferedAmount) {
+        if (m_selectedConverter == m_regularConverter) {
+            if (currentBufferedAmount <= m_lowBufferSize)
+                m_selectedConverter = m_lowConverter;
+            else if (currentBufferedAmount >= m_highBufferSize)
+                m_selectedConverter = m_highConverter;
+        } else if (m_selectedConverter == m_highConverter) {
+            if (currentBufferedAmount < m_regularLowBufferSize)
+                m_selectedConverter = m_regularConverter;
+        } else if (currentBufferedAmount > m_regularHighBufferSize)
+            m_selectedConverter = m_regularConverter;
+    }
+    return !!m_selectedConverter;
+}
+
+OSStatus AudioSampleDataConverter::convert(const AudioBufferList& inputBuffer, AudioSampleBufferList& outputBuffer, size_t sampleCount)
+{
+    outputBuffer.reset();
+    return outputBuffer.copyFrom(inputBuffer, sampleCount, m_selectedConverter);
+}
+
+OSStatus AudioSampleDataConverter::Converter::initialize(const AudioStreamBasicDescription& inputDescription, const AudioStreamBasicDescription& outputDescription)
+{
+    if (m_audioConverter) {
+        PAL::AudioConverterDispose(m_audioConverter);
+        m_audioConverter = nullptr;
+    }
+
+    return PAL::AudioConverterNew(&inputDescription, &outputDescription, &m_audioConverter);
+}
+
+AudioSampleDataConverter::Converter::~Converter()
+{
+    if (m_audioConverter)
+        PAL::AudioConverterDispose(m_audioConverter);
+}
+
+} // namespace WebCore
</ins></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleDataSourceh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h        2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.h   2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -25,6 +25,7 @@
</span><span class="cx"> 
</span><span class="cx"> #pragma once
</span><span class="cx"> 
</span><ins>+#include "AudioSampleDataConverter.h"
</ins><span class="cx"> #include "CARingBuffer.h"
</span><span class="cx"> #include <CoreAudio/CoreAudioTypes.h>
</span><span class="cx"> #include <wtf/LoggerHelper.h>
</span><span class="lines">@@ -102,13 +103,15 @@
</span><span class="cx"> 
</span><span class="cx">     uint64_t m_lastPushedSampleCount { 0 };
</span><span class="cx">     size_t m_waitToStartForPushCount { 2 };
</span><del>-    MediaTime m_expectedNextPushedSampleTime { MediaTime::invalidTime() };
-    bool m_isFirstPull { true };
</del><span class="cx"> 
</span><del>-    MediaTime m_inputSampleOffset;
</del><ins>+    int64_t m_expectedNextPushedSampleTimeValue { 0 };
+    int64_t m_converterInputOffset { 0 };
+    std::optional<int64_t> m_inputSampleOffset;
</ins><span class="cx">     int64_t m_outputSampleOffset { 0 };
</span><ins>+    uint64_t m_lastBufferedAmount { 0 };
</ins><span class="cx"> 
</span><del>-    AudioConverterRef m_converter;
</del><ins>+    AudioSampleDataConverter m_converter;
+
</ins><span class="cx">     RefPtr<AudioSampleBufferList> m_scratchBuffer;
</span><span class="cx"> 
</span><span class="cx">     UniqueRef<CARingBuffer> m_ringBuffer;
</span><span class="lines">@@ -117,10 +120,8 @@
</span><span class="cx">     float m_volume { 1.0 };
</span><span class="cx">     bool m_muted { false };
</span><span class="cx">     bool m_shouldComputeOutputSampleOffset { true };
</span><del>-    uint64_t m_endFrameWhenNotEnoughData { 0 };
</del><span class="cx"> 
</span><span class="cx">     bool m_isInNeedOfMoreData { false };
</span><del>-
</del><span class="cx"> #if !RELEASE_LOG_DISABLED
</span><span class="cx">     Ref<const Logger> m_logger;
</span><span class="cx">     const void* m_logIdentifier;
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformaudiococoaAudioSampleDataSourcemm"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm       2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/audio/cocoa/AudioSampleDataSource.mm  2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -1,5 +1,5 @@
</span><span class="cx"> /*
</span><del>- * Copyright (C) 2017 Apple Inc. All rights reserved.
</del><ins>+ * Copyright (C) 2017-2021 Apple Inc. All rights reserved.
</ins><span class="cx">  *
</span><span class="cx">  * Redistribution and use in source and binary forms, with or without
</span><span class="cx">  * modification, are permitted provided that the following conditions
</span><span class="lines">@@ -51,7 +51,6 @@
</span><span class="cx"> 
</span><span class="cx"> AudioSampleDataSource::AudioSampleDataSource(size_t maximumSampleCount, LoggerHelper& loggerHelper, size_t waitToStartForPushCount)
</span><span class="cx">     : m_waitToStartForPushCount(waitToStartForPushCount)
</span><del>-    , m_inputSampleOffset(MediaTime::invalidTime())
</del><span class="cx">     , m_ringBuffer(makeUniqueRef<CARingBuffer>())
</span><span class="cx">     , m_maximumSampleCount(maximumSampleCount)
</span><span class="cx"> #if !RELEASE_LOG_DISABLED
</span><span class="lines">@@ -66,31 +65,12 @@
</span><span class="cx"> 
</span><span class="cx"> AudioSampleDataSource::~AudioSampleDataSource()
</span><span class="cx"> {
</span><del>-    if (m_converter)
-        PAL::AudioConverterDispose(m_converter);
</del><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> OSStatus AudioSampleDataSource::setupConverter()
</span><span class="cx"> {
</span><span class="cx">     ASSERT(m_inputDescription && m_outputDescription);
</span><del>-
-    if (m_converter) {
-        PAL::AudioConverterDispose(m_converter);
-        m_converter = nullptr;
-    }
-
-    if (*m_inputDescription == *m_outputDescription)
-        return 0;
-
-    OSStatus err = PAL::AudioConverterNew(&m_inputDescription->streamDescription(), &m_outputDescription->streamDescription(), &m_converter);
-    if (err) {
-        RunLoop::main().dispatch([this, protectedThis = Ref { *this }, err] {
-            ERROR_LOG("AudioConverterNew returned error ", err);
-        });
-    }
-
-    return err;
-
</del><ins>+    return m_converter.setFormats(*m_inputDescription, *m_outputDescription);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> OSStatus AudioSampleDataSource::setInputFormat(const CAAudioStreamDescription& format)
</span><span class="lines">@@ -109,6 +89,9 @@
</span><span class="cx">     ASSERT(m_inputDescription);
</span><span class="cx">     ASSERT(format.sampleRate() >= 0);
</span><span class="cx"> 
</span><ins>+    if (m_outputDescription && *m_outputDescription == format)
+        return noErr;
+
</ins><span class="cx">     m_outputDescription = CAAudioStreamDescription { format };
</span><span class="cx"> 
</span><span class="cx">     {
</span><span class="lines">@@ -117,6 +100,7 @@
</span><span class="cx">         DisableMallocRestrictionsForCurrentThreadScope disableMallocRestrictions;
</span><span class="cx">         m_ringBuffer->allocate(format, static_cast<size_t>(m_maximumSampleCount));
</span><span class="cx">         m_scratchBuffer = AudioSampleBufferList::create(m_outputDescription->streamDescription(), m_maximumSampleCount);
</span><ins>+        m_converterInputOffset = 0;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     return setupConverter();
</span><span class="lines">@@ -140,35 +124,45 @@
</span><span class="cx"> 
</span><span class="cx"> void AudioSampleDataSource::pushSamplesInternal(const AudioBufferList& bufferList, const MediaTime& presentationTime, size_t sampleCount)
</span><span class="cx"> {
</span><del>-    MediaTime sampleTime = presentationTime;
</del><ins>+    int64_t ringBufferIndexToWrite = presentationTime.toTimeScale(m_outputDescription->sampleRate()).timeValue();
</ins><span class="cx"> 
</span><ins>+    int64_t offset = 0;
</ins><span class="cx">     const AudioBufferList* sampleBufferList;
</span><del>-    if (m_converter) {
</del><ins>+
+    if (m_converter.updateBufferedAmount(m_lastBufferedAmount)) {
</ins><span class="cx">         m_scratchBuffer->reset();
</span><del>-        OSStatus err = m_scratchBuffer->copyFrom(bufferList, sampleCount, m_converter);
-        if (err)
-            return;
</del><ins>+        m_converter.convert(bufferList, *m_scratchBuffer, sampleCount);
+        auto expectedSampleCount = sampleCount * m_outputDescription->sampleRate() / m_inputDescription->sampleRate();
</ins><span class="cx"> 
</span><ins>+        if (m_converter.isRegular() && expectedSampleCount > m_scratchBuffer->sampleCount()) {
+            // Sometimes converter is not writing enough data, for instance on first chunk conversion.
+            // Pretend this is the case to keep pusher and puller in sync.
+            offset = 0;
+            sampleCount = expectedSampleCount;
+            if (m_scratchBuffer->sampleCount() > sampleCount)
+                m_scratchBuffer->setSampleCount(sampleCount);
+        } else {
+            offset = m_scratchBuffer->sampleCount() - expectedSampleCount;
+            sampleCount = m_scratchBuffer->sampleCount();
+        }
</ins><span class="cx">         sampleBufferList = m_scratchBuffer->bufferList().list();
</span><del>-        sampleCount = m_scratchBuffer->sampleCount();
-        sampleTime = presentationTime.toTimeScale(m_outputDescription->sampleRate(), MediaTime::RoundingFlags::TowardZero);
</del><span class="cx">     } else
</span><span class="cx">         sampleBufferList = &bufferList;
</span><span class="cx"> 
</span><del>-    if (m_expectedNextPushedSampleTime.isValid() && abs(m_expectedNextPushedSampleTime - sampleTime).timeValue() == 1)
-        sampleTime = m_expectedNextPushedSampleTime;
-    m_expectedNextPushedSampleTime = sampleTime + MediaTime(sampleCount, sampleTime.timeScale());
</del><ins>+    if (!m_inputSampleOffset) {
+        m_inputSampleOffset = 0 - ringBufferIndexToWrite;
+        ringBufferIndexToWrite = 0;
+    } else
+        ringBufferIndexToWrite += *m_inputSampleOffset;
</ins><span class="cx"> 
</span><del>-    if (m_inputSampleOffset == MediaTime::invalidTime())
-        m_inputSampleOffset = MediaTime(1 - sampleTime.timeValue(), sampleTime.timeScale());
-    sampleTime += m_inputSampleOffset;
</del><ins>+    if (m_converterInputOffset)
+        ringBufferIndexToWrite += m_converterInputOffset;
</ins><span class="cx"> 
</span><del>-#if !LOG_DISABLED
-    uint64_t startFrame1 = 0;
-    uint64_t endFrame1 = 0;
-    m_ringBuffer->getCurrentFrameBounds(startFrame1, endFrame1);
-#endif
</del><ins>+    if (m_expectedNextPushedSampleTimeValue && abs((float)m_expectedNextPushedSampleTimeValue - (float)ringBufferIndexToWrite) <= 1)
+        ringBufferIndexToWrite = m_expectedNextPushedSampleTimeValue;
</ins><span class="cx"> 
</span><ins>+    m_expectedNextPushedSampleTimeValue = ringBufferIndexToWrite + sampleCount;
+
</ins><span class="cx">     if (m_isInNeedOfMoreData) {
</span><span class="cx">         m_isInNeedOfMoreData = false;
</span><span class="cx">         DisableMallocRestrictionsForCurrentThreadScope disableMallocRestrictions;
</span><span class="lines">@@ -176,7 +170,10 @@
</span><span class="cx">             ALWAYS_LOG(logIdentifier, "needed more data, pushing ", sampleCount, " samples");
</span><span class="cx">         });
</span><span class="cx">     }
</span><del>-    m_ringBuffer->store(sampleBufferList, sampleCount, sampleTime.timeValue());
</del><ins>+
+    m_ringBuffer->store(sampleBufferList, sampleCount, ringBufferIndexToWrite);
+
+    m_converterInputOffset += offset;
</ins><span class="cx">     m_lastPushedSampleCount = sampleCount;
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -194,21 +191,6 @@
</span><span class="cx">     pushSamplesInternal(*downcast<WebAudioBufferList>(audioData).list(), sampleTime, sampleCount);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-static inline int64_t computeOffsetDelay(double sampleRate, uint64_t lastPushedSampleCount)
-{
-    const double twentyMS = .02;
-    const double tenMS = .01;
-    const double fiveMS = .005;
-
-    if (lastPushedSampleCount > sampleRate * twentyMS)
-        return sampleRate * twentyMS;
-    if (lastPushedSampleCount > sampleRate * tenMS)
-        return sampleRate * tenMS;
-    if (lastPushedSampleCount > sampleRate * fiveMS)
-        return sampleRate * fiveMS;
-    return 0;
-}
-
</del><span class="cx"> bool AudioSampleDataSource::pullSamples(AudioBufferList& buffer, size_t sampleCount, uint64_t timeStamp, double /*hostTime*/, PullMode mode)
</span><span class="cx"> {
</span><span class="cx">     size_t byteCount = sampleCount * m_outputDescription->bytesPerFrame();
</span><span class="lines">@@ -220,7 +202,7 @@
</span><span class="cx">         return false;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    if (m_muted || m_inputSampleOffset == MediaTime::invalidTime()) {
</del><ins>+    if (m_muted || !m_inputSampleOffset) {
</ins><span class="cx">         if (mode != AudioSampleDataSource::Mix)
</span><span class="cx">             AudioSampleBufferList::zeroABL(buffer, byteCount);
</span><span class="cx">         return false;
</span><span class="lines">@@ -230,33 +212,19 @@
</span><span class="cx">     uint64_t endFrame = 0;
</span><span class="cx">     m_ringBuffer->getCurrentFrameBounds(startFrame, endFrame);
</span><span class="cx"> 
</span><ins>+    ASSERT(m_waitToStartForPushCount);
+
+    uint64_t buffered = endFrame - startFrame;
</ins><span class="cx">     if (m_shouldComputeOutputSampleOffset) {
</span><del>-        uint64_t buffered = endFrame - startFrame;
-        if (m_isFirstPull) {
-            auto minimumBuffer = m_waitToStartForPushCount * m_lastPushedSampleCount;
-            if (buffered >= minimumBuffer) {
-                m_outputSampleOffset = startFrame - timeStamp;
-                m_shouldComputeOutputSampleOffset = false;
-                m_endFrameWhenNotEnoughData = 0;
-            } else {
-                // We wait for one chunk of value before starting to play.
-                if (mode != AudioSampleDataSource::Mix)
-                    AudioSampleBufferList::zeroABL(buffer, byteCount);
-                return false;
-            }
-        } else {
-            if (buffered < sampleCount * 2 || (m_endFrameWhenNotEnoughData && m_endFrameWhenNotEnoughData == endFrame)) {
-                if (mode != AudioSampleDataSource::Mix)
-                    AudioSampleBufferList::zeroABL(buffer, byteCount);
-                return false;
-            }
-
-            m_shouldComputeOutputSampleOffset = false;
-            m_endFrameWhenNotEnoughData = 0;
-
-            m_outputSampleOffset = (endFrame - sampleCount) - timeStamp;
-            m_outputSampleOffset -= computeOffsetDelay(m_outputDescription->sampleRate(), m_lastPushedSampleCount);
</del><ins>+        auto minimumBuffer = std::max<size_t>(m_waitToStartForPushCount * m_lastPushedSampleCount, m_converter.regularBufferSize());
+        if (buffered < minimumBuffer) {
+            // We wait for one chunk of value before starting to play.
+            if (mode != AudioSampleDataSource::Mix)
+                AudioSampleBufferList::zeroABL(buffer, byteCount);
+            return false;
</ins><span class="cx">         }
</span><ins>+        m_outputSampleOffset = endFrame - timeStamp - minimumBuffer;
+        m_shouldComputeOutputSampleOffset = false;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     timeStamp += m_outputSampleOffset;
</span><span class="lines">@@ -269,23 +237,13 @@
</span><span class="cx">                 ERROR_LOG(logIdentifier, "need more data, sample ", timeStamp, " with offset ", outputSampleOffset, ", trying to get ", sampleCount, " samples, but not completely in range [", startFrame, " .. ", endFrame, "]");
</span><span class="cx">             });
</span><span class="cx">         }
</span><del>-        if (timeStamp < startFrame || timeStamp >= endFrame) {
-            // We are out of the window, let's restart the offset computation.
-            m_shouldComputeOutputSampleOffset = true;
-
-            if (timeStamp >= endFrame)
-                m_endFrameWhenNotEnoughData = endFrame;
-        } else {
-            // We are too close from endFrame, let's wait for more data to be pushed.
-            m_outputSampleOffset -= sampleCount;
-        }
</del><ins>+        m_shouldComputeOutputSampleOffset = true;
</ins><span class="cx">         if (mode != AudioSampleDataSource::Mix)
</span><span class="cx">             AudioSampleBufferList::zeroABL(buffer, byteCount);
</span><span class="cx">         return false;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    m_isFirstPull = false;
-
</del><ins>+    m_lastBufferedAmount = endFrame - timeStamp - sampleCount;
</ins><span class="cx">     return pullSamplesInternal(buffer, sampleCount, timeStamp, mode);
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -320,12 +278,12 @@
</span><span class="cx">     if (buffer.mNumberBuffers != m_ringBuffer->channelCount())
</span><span class="cx">         return false;
</span><span class="cx"> 
</span><del>-    if (m_muted)
</del><ins>+    if (m_muted || !m_inputSampleOffset)
</ins><span class="cx">         return false;
</span><span class="cx"> 
</span><span class="cx">     if (m_shouldComputeOutputSampleOffset) {
</span><span class="cx">         m_shouldComputeOutputSampleOffset = false;
</span><del>-        m_outputSampleOffset = m_inputSampleOffset.timeValue() * m_outputDescription->sampleRate() / m_inputSampleOffset.timeScale();
</del><ins>+        m_outputSampleOffset = *m_inputSampleOffset;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     timeStamp += m_outputSampleOffset;
</span><span class="lines">@@ -354,6 +312,10 @@
</span><span class="cx"> 
</span><span class="cx">     startFrame = timeStamp;
</span><span class="cx"> 
</span><ins>+    ASSERT(endFrame >= startFrame);
+    if (endFrame < startFrame)
+        return false;
+
</ins><span class="cx">     if (m_muted) {
</span><span class="cx">         AudioSampleBufferList::zeroABL(buffer, sampleCountPerChunk * m_outputDescription->bytesPerFrame());
</span><span class="cx">         while (endFrame - startFrame >= sampleCountPerChunk) {
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformmediastreammacMockAudioSharedUnith"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.h (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.h      2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.h 2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -62,15 +62,16 @@
</span><span class="cx"> 
</span><span class="cx">     void delaySamples(Seconds) final;
</span><span class="cx"> 
</span><ins>+    void start();
</ins><span class="cx">     CapabilityValueOrRange sampleRateCapacities() const final { return CapabilityValueOrRange(44100, 48000); }
</span><span class="cx"> 
</span><span class="cx">     void tick();
</span><span class="cx"> 
</span><del>-    void render(Seconds);
</del><ins>+    void render(MonotonicTime);
</ins><span class="cx">     void emitSampleBuffers(uint32_t frameCount);
</span><span class="cx">     void reconfigure();
</span><span class="cx"> 
</span><del>-    static Seconds renderInterval() { return 60_ms; }
</del><ins>+    static Seconds renderInterval() { return 20_ms; }
</ins><span class="cx"> 
</span><span class="cx">     std::unique_ptr<WebAudioBufferList> m_audioBufferList;
</span><span class="cx"> 
</span><span class="lines">@@ -83,6 +84,7 @@
</span><span class="cx"> 
</span><span class="cx">     Vector<float> m_bipBopBuffer;
</span><span class="cx">     bool m_hasAudioUnit { false };
</span><ins>+    bool m_isProducingData { false };
</ins><span class="cx"> 
</span><span class="cx">     RunLoop::Timer<MockAudioSharedUnit> m_timer;
</span><span class="cx">     MonotonicTime m_lastRenderTime { MonotonicTime::nan() };
</span></span></pre></div>
<a id="trunkSourceWebCoreplatformmediastreammacMockAudioSharedUnitmm"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.mm (287156 => 287157)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.mm     2021-12-16 22:00:44 UTC (rev 287156)
+++ trunk/Source/WebCore/platform/mediastream/mac/MockAudioSharedUnit.mm        2021-12-16 22:02:54 UTC (rev 287157)
</span><span class="lines">@@ -101,8 +101,8 @@
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> MockAudioSharedUnit::MockAudioSharedUnit()
</span><del>-    : m_timer(RunLoop::current(), this, &MockAudioSharedUnit::tick)
-    , m_workQueue(WorkQueue::create("MockAudioSharedUnit Capture Queue"))
</del><ins>+    : m_timer(RunLoop::current(), this, &MockAudioSharedUnit::start)
+    , m_workQueue(WorkQueue::create("MockAudioSharedUnit Capture Queue", WorkQueue::QOS::UserInteractive))
</ins><span class="cx"> {
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -127,13 +127,11 @@
</span><span class="cx">     if (!hasAudioUnit())
</span><span class="cx">         return 0;
</span><span class="cx"> 
</span><del>-    m_timer.stop();
</del><span class="cx">     m_lastRenderTime = MonotonicTime::nan();
</span><span class="cx">     m_workQueue->dispatch([this] {
</span><span class="cx">         reconfigure();
</span><span class="cx">         callOnMainThread([this] {
</span><del>-            m_lastRenderTime = MonotonicTime::now();
-            m_timer.startRepeating(renderInterval());
</del><ins>+            startInternal();
</ins><span class="cx">         });
</span><span class="cx">     });
</span><span class="cx">     return 0;
</span><span class="lines">@@ -142,57 +140,45 @@
</span><span class="cx"> void MockAudioSharedUnit::cleanupAudioUnit()
</span><span class="cx"> {
</span><span class="cx">     m_hasAudioUnit = false;
</span><del>-    m_timer.stop();
</del><ins>+    m_isProducingData = false;
</ins><span class="cx">     m_lastRenderTime = MonotonicTime::nan();
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> OSStatus MockAudioSharedUnit::startInternal()
</span><span class="cx"> {
</span><ins>+    start();
+    return 0;
+}
+
+void MockAudioSharedUnit::start()
+{
</ins><span class="cx">     if (!m_hasAudioUnit)
</span><span class="cx">         m_hasAudioUnit = true;
</span><span class="cx"> 
</span><span class="cx">     m_lastRenderTime = MonotonicTime::now();
</span><del>-    m_timer.startRepeating(renderInterval());
-    return 0;
</del><ins>+    m_isProducingData = true;
+    m_workQueue->dispatch([this, renderTime = m_lastRenderTime] {
+        render(renderTime);
+    });
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void MockAudioSharedUnit::stopInternal()
</span><span class="cx"> {
</span><ins>+    m_isProducingData = false;
</ins><span class="cx">     if (!m_hasAudioUnit)
</span><span class="cx">         return;
</span><del>-    m_timer.stop();
</del><span class="cx">     m_lastRenderTime = MonotonicTime::nan();
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> bool MockAudioSharedUnit::isProducingData() const
</span><span class="cx"> {
</span><del>-    return m_timer.isActive();
</del><ins>+    return m_isProducingData;
</ins><span class="cx"> }
</span><span class="cx"> 
</span><del>-void MockAudioSharedUnit::tick()
-{
-    if (std::isnan(m_lastRenderTime))
-        m_lastRenderTime = MonotonicTime::now();
-
-    MonotonicTime now = MonotonicTime::now();
-
-    if (m_delayUntil) {
-        if (m_delayUntil < now)
-            return;
-        m_delayUntil = MonotonicTime();
-    }
-
-    Seconds delta = now - m_lastRenderTime;
-    m_lastRenderTime = now;
-
-    m_workQueue->dispatch([this, delta] {
-        render(delta);
-    });
-}
-
</del><span class="cx"> void MockAudioSharedUnit::delaySamples(Seconds delta)
</span><span class="cx"> {
</span><del>-    m_delayUntil = MonotonicTime::now() + delta;
</del><ins>+    stopInternal();
+    m_timer.startOneShot(delta);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void MockAudioSharedUnit::reconfigure()
</span><span class="lines">@@ -244,9 +230,24 @@
</span><span class="cx">     audioSamplesAvailable(PAL::toMediaTime(startTime), *m_audioBufferList, CAAudioStreamDescription(m_streamFormat), frameCount);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void MockAudioSharedUnit::render(Seconds delta)
</del><ins>+void MockAudioSharedUnit::render(MonotonicTime renderTime)
</ins><span class="cx"> {
</span><span class="cx">     ASSERT(!isMainThread());
</span><ins>+    if (!isProducingData())
+        return;
+
+    auto delta = renderInterval();
+    auto currentTime = MonotonicTime::now();
+    auto nextRenderTime = renderTime + delta;
+    Seconds nextRenderDelay = nextRenderTime.secondsSinceEpoch() - currentTime.secondsSinceEpoch();
+    if (nextRenderDelay.seconds() < 0) {
+        nextRenderTime = currentTime;
+        nextRenderDelay = 0_s;
+    }
+    m_workQueue->dispatchAfter(nextRenderDelay, [this, nextRenderTime] {
+        render(nextRenderTime);
+    });
+
</ins><span class="cx">     if (!m_audioBufferList || !m_bipBopBuffer.size())
</span><span class="cx">         reconfigure();
</span><span class="cx"> 
</span></span></pre>
</div>
</div>

</body>
</html>