Comment #8 on bug 154538 (Status: NEW) - "Web Audio restores wrong sample rate and sounds distorted"
From: Ashley Gullen <ashley@scirra.com>
URL: https://bugs.webkit.org/show_bug.cgi?id=154538#c8
<pre>(In reply to <a href="show_bug.cgi?id=154538#c5">comment #5</a>)
<span class="quote">> All I'm saying is that Web Audio developers can't assume that the output
> sample rate will always be 44.1kHz. If they are manually creating and
> filling buffers assuming that the sample rate is 44.1kHz (when instead it's
> 48kHz), they could playback artifacts.</span >
I should point out that Web Audio developers make no such assumption - we don't choose the sample rate of decoded buffers; the AudioContext does. If anything, it is the spec itself that assumes the AudioContext keeps running at the same sample rate.

I think the spec requires that anything played at a sample rate different from the AudioContext's be resampled: you can call createBuffer() to create a buffer at a given sample rate (e.g. 22050 Hz) and play it, and it is presumably resampled to the AudioContext rate. If the AudioContext sample rate changes after creation, shouldn't the same resampling kick in for existing buffers?
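A minimal sketch of the resampling behavior described above, assuming a browser with the standard Web Audio API; the 22050 Hz buffer rate and the 440 Hz sine fill are illustrative choices, not anything mandated by the spec:

    // Create a buffer at 22050 Hz and play it through an AudioContext
    // whose own rate is typically 44100 or 48000 Hz; per the spec, the
    // buffer is resampled to the context rate at playback time.
    const ctx = new AudioContext();
    console.log('context sample rate:', ctx.sampleRate); // e.g. 44100 or 48000

    const bufferRate = 22050; // deliberately different from ctx.sampleRate
    const buffer = ctx.createBuffer(1, bufferRate, bufferRate); // 1 channel, 1 second

    // Fill with a 440 Hz sine computed at the *buffer's* rate.
    const data = buffer.getChannelData(0);
    for (let i = 0; i < data.length; i++) {
      data[i] = Math.sin(2 * Math.PI * 440 * i / bufferRate);
    }

    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(); // plays at the correct pitch only if resampling happens

If existing buffers were resampled the same way after a context rate change, playback of a buffer like this would stay at the correct pitch instead of sounding shifted or distorted.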