On 30 Apr 2025, at 9:57 pm, Yury Yarashevich via webkit-dev <webkit-dev@lists.webkit.org> wrote:
Hi WebKit team,
I’m curious about the original rationale behind the restriction that prevents concurrent playback of multiple <video> elements. Was it primarily introduced to save battery life?
I’m not aware of any such restrictions. Here is a test page that will literally let you play hundreds of video elements at the same time: https://webkit.apple.com/demos/bin/video/many-video-elements.html It works on any Apple device: iPhone, iPad, Vision Pro, Mac.
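Something along these lines is enough to try it yourself (sketch only; the source URL is a placeholder, not what the demo page uses):

  const SRC = 'example.mp4'; // placeholder: point this at a real video file
  for (let i = 0; i < 100; i++) {
    const video = document.createElement('video');
    video.src = SRC;
    video.muted = true;       // muted playback needs no user gesture
    video.playsInline = true; // keep playback inline on iOS
    video.loop = true;
    document.body.appendChild(video);
    video.play().catch(err => console.log(`video ${i} blocked:`, err.name));
  }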
In practice, this behavior appears to have unintended side effects. There’s a reproducible issue where playback can be started with the video muted and then immediately unmuted, effectively bypassing the restriction. However, this often results in videos being randomly paused later—sometimes very frequently—leading to a “play/pause ping-pong” between Safari/WebKit and JavaScript restarting playback. This erratic behavior may actually increase battery consumption, despite appearing to work smoothly from the user’s perspective.
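For reference, the pattern I’m describing looks roughly like this (simplified sketch, not production code):

  const video = document.querySelector('video'); // some existing <video> element
  async function startUnmuted() {
    video.muted = true;
    await video.play();  // allowed: muted playback needs no user gesture
    video.muted = false; // unmuting right after playback starts bypasses the policy
  }
  // When WebKit later pauses the element, scripts commonly restart it,
  // which is the play/pause ping-pong mentioned above.
  video.addEventListener('pause', () => {
    if (!video.ended) video.play().catch(() => {});
  });
  startUnmuted();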
Are you referring to audible video elements requiring a user gesture for playback to start? This behaviour isn’t erratic; it is well defined, and up to now I thought it was well understood. For an audible element to play, the user first needs to interact with the video element, such as by clicking on the video or its controls. This behaviour is even defined in the HTML specification, including how you can detect whether the User Agent will allow a video to start autoplaying without a user gesture. Similar policies are implemented by other user agents such as Firefox and Chrome.

There’s an article on MDN on how to deal with them: https://developer.mozilla.org/en-US/docs/Web/Media/Guides/Autoplay

Here is a more WebKit-focused article: https://webkit.org/blog/7734/auto-play-policy-changes-for-macos/

Here is a similar article written by the Chrome team on the same topic: https://developer.chrome.com/blog/autoplay/

And here is the part of the HTML specification related to this matter: https://html.spec.whatwg.org/multipage/media.html#eligible-for-autoplay
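In short, the approach those guides recommend is to attempt play() and handle the rejected promise; roughly:

  const video = document.querySelector('video');
  video.play().then(() => {
    // Unmuted autoplay was allowed (e.g. the user already interacted with the page).
  }).catch(err => {
    if (err.name === 'NotAllowedError') {
      // Autoplay with sound is blocked: fall back to muted playback and
      // show controls so the user can unmute with a gesture.
      video.muted = true;
      video.controls = true;
      video.play().catch(() => {});
    }
  });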
Even if this workaround is eventually blocked, developers who rely on concurrent playback (e.g., outside of WebRTC contexts) will turn to more complex solutions, such as decoding video and audio with WebCodecs and/or WebAssembly and rendering via <canvas> and an AudioContext. While technically feasible, these approaches are likely to be significantly less power-efficient than simply allowing multiple <video> elements to play concurrently. Another, similarly inefficient, workaround would be to synthesize a MediaStream using the VideoTrackGenerator API and AudioContext.createMediaStreamDestination(). A sketch of that route follows below.
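As an illustration, such a pipeline would look roughly like this (sketch only; the decoding itself is omitted, and canvas.captureStream() stands in here for VideoTrackGenerator):

  const canvas = document.createElement('canvas');
  const audioCtx = new AudioContext();
  const audioDest = audioCtx.createMediaStreamDestination();
  // ...decode frames (e.g. with WebCodecs) and draw them onto `canvas`,
  // and route the decoded audio through `audioCtx` into `audioDest`...
  const stream = new MediaStream([
    ...canvas.captureStream(30).getVideoTracks(), // capture the canvas at 30 fps
    ...audioDest.stream.getAudioTracks(),
  ]);
  const video = document.createElement('video');
  video.srcObject = stream; // MediaStream-backed playback, as in WebRTC scenarios
  video.playsInline = true;
  document.body.appendChild(video);
  video.play().catch(console.error);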
Lastly, another issue is that creating a MediaElementSource from a <video> element and routing its audio through a shared AudioContext also does not disable the playback restriction, whereas muting the <video> element itself does. This feels inconsistent and may point to a separate bug.
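For clarity, the setup I mean is roughly:

  const audioCtx = new AudioContext();
  const video = document.querySelector('video');
  const source = audioCtx.createMediaElementSource(video);
  source.connect(audioCtx.destination);
  // The element's direct output is now silent (its audio goes through the graph),
  // yet playback is still treated as audible, unlike setting video.muted = true.
  video.play().catch(err => console.log('blocked:', err.name));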
Could you please clarify the motivation behind this restriction, and whether there are any plans to revisit or improve its behavior?
I believe I answered this question above, and through the various links provided.

Cheers,
JY