[Webkit-unassigned] [Bug 253952] AudioContext plays with a 20ms callback when only a MediaStream and an AudioWorkletNode are in the graph

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Wed Mar 15 03:01:34 PDT 2023


https://bugs.webkit.org/show_bug.cgi?id=253952

--- Comment #1 from wangweisi.night12138 at bytedance.com ---
(In reply to wangweisi.night12138 from comment #0)
> Created attachment 465444 [details]
> reproduction
> 
> tiny example:
> index.html
> ```html
> <!DOCTYPE html>
> <html>
> <script type="application/javascript">
>     class MyAudioNode extends AudioWorkletNode {
>         constructor(ctx) {
>             super(ctx, "my-worklet-processor");
>         }
>     }
>     (async () => {
>         const ctx = new AudioContext();
>         await ctx.audioWorklet.addModule('bypass.worklet.js');
>         const node = new MyAudioNode(ctx);
>         const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
>         const source = ctx.createMediaStreamSource(stream);
>         source.connect(node).connect(ctx.destination);
>         ctx.resume();
>     })();
> </script>
> 
> </html>
> ```
> bypass.worklet.js
> ```js
> class MyWorkletProcessor extends AudioWorkletProcessor {
>     constructor() {
>         super();
>     }
> 
>     process(inputs, outputs) {
>         // Use the 1st input and output only to make the example simpler.
>         // |input| and |output| here have a similar structure to the
>         // AudioBuffer interface (i.e. an array of Float32Array).
>         const input = inputs[0];
>         if (!(input?.length)) return true;
>         const output = outputs[0];
>         if (!(output?.length)) return true;
> 
>         // Copy-in, process and copy-out.
>         output.forEach((e, i) => e.set(input[i % input.length]));
>         return true;
>     }
> }
> 
> registerProcessor("my-worklet-processor", MyWorkletProcessor);
> 
> 
> ```
> 
> The bug occurs in the following code, at
> `Source/WebCore/platform/audio/cocoa/MediaSessionManagerCocoa.mm:168`:
> ```cpp
>     size_t bufferSize = m_defaultBufferSize;
>     if (webAudioCount)
>         bufferSize = AudioUtilities::renderQuantumSize;
>     else if (captureCount || audioMediaStreamTrackCount) {
>         // In case of audio capture or audio MediaStreamTrack playing, we want to grab 20 ms chunks to limit the latency so that it is not noticeable by users
>         // while having a large enough buffer so that the audio rendering remains stable, hence a computation based on sample rate.
>         bufferSize = WTF::roundUpToPowerOfTwo(AudioSession::sharedSession().sampleRate() / 50);
>     } else if (m_supportedAudioHardwareBufferSizes && DeprecatedGlobalSettings::lowPowerVideoAudioBufferSizeEnabled())
>         bufferSize = m_supportedAudioHardwareBufferSizes.nearest(kLowPowerVideoBufferSize);
> 
>     AudioSession::sharedSession().setPreferredBufferSize(bufferSize);
> ```

More details: when WebKit creates the audio callback, `webAudioCount` is 0, so WebKit asks the system for a callback of `sampleRate() / 50` samples. But after the AudioContext is created, the Cocoa side requests data in 128-sample chunks, so the two callback sizes are misaligned and the audio glitches on the speakers.
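To make the size mismatch concrete, here is a small standalone sketch of the buffer-size arithmetic from the snippet above. It is not the WebKit code itself: `roundUpToPowerOfTwo` below is a plain re-implementation of `WTF::roundUpToPowerOfTwo`, and 44100 Hz is an assumed hardware sample rate.

```cpp
#include <cstdint>
#include <cstdio>

// Illustration only: plain re-implementation of WTF::roundUpToPowerOfTwo.
static uint32_t roundUpToPowerOfTwo(uint32_t v)
{
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}

int main()
{
    const uint32_t renderQuantumSize = 128; // AudioUtilities::renderQuantumSize
    const uint32_t sampleRate = 44100;      // assumed hardware sample rate

    // Branch taken when webAudioCount is 0 but a MediaStreamTrack is playing:
    // 44100 / 50 = 882, rounded up to the next power of two -> 1024 samples (~23 ms).
    const uint32_t preferredBufferSize = roundUpToPowerOfTwo(sampleRate / 50);

    std::printf("preferred HW callback: %u samples, render quantum: %u samples\n",
                preferredBufferSize, renderQuantumSize);
    return 0;
}
```

If the AudioContext were counted in `webAudioCount`, the first branch would pick `AudioUtilities::renderQuantumSize` (128 samples) instead, matching what the WebAudio rendering side pulls.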
