[Webkit-unassigned] [Bug 110760] WebSpeech: need global speech controller

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Mon Feb 25 08:50:20 PST 2013


https://bugs.webkit.org/show_bug.cgi?id=110760

--- Comment #1 from chris fleizach <cfleizach at apple.com>  2013-02-25 08:52:44 PST ---
(In reply to comment #0)
> The current implementation of speech synthesis has a queue inside the SpeechSynthesis object that's owned by one DOMWindow. This isn't likely to work very well if multiple windows try to speak at the same time.
> 

If multiple windows try to talk at the same time, it's unlikely the results will be good. The question that comes up is whether one window should know about speech synthesis usage in another window.

At the same time, web speech synthesis will have no idea what's happening outside the browser, where something else could also be speaking. Since you won't know the state outside the browser, it didn't seem that useful to know the state outside the window either.
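
A minimal sketch of that per-window visibility, assuming the per-DOMWindow queue described in comment #0 (run from a top-level page containing one same-origin iframe):

    // Run from a top-level page with one same-origin <iframe>.
    var a = window.speechSynthesis;     // this window's controller
    var b = frames[0].speechSynthesis;  // the iframe's controller

    a.speak(new SpeechSynthesisUtterance('hello from the top window'));

    // With a per-DOMWindow queue, each SpeechSynthesis object reflects
    // only its own utterances, so the iframe reports idle even while
    // the top window's audio is playing:
    console.log(a.speaking || a.pending); // true  (this window's queue)
    console.log(b.speaking || b.pending); // false (the iframe's queue)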

> I think that the queue needs to be pushed into the WebKit layer so that a multi-process browser can implement a single speech queue.

Why would it need to be in the WebKit layer?
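
For reference, the observable difference under a single shared queue would be roughly the following. This is a sketch: windowA and windowB are hypothetical handles to two browsing contexts, and the serialized ordering assumes the proposed single queue rather than the current per-window behavior.

    // Hypothetical: windowA and windowB stand for two browsing contexts.
    var u1 = new SpeechSynthesisUtterance('first, from window A');
    var u2 = new SpeechSynthesisUtterance('second, from window B');

    u1.onend = function () { console.log('A finished'); };
    // With one shared queue, B's utterance waits behind A's, so this
    // fires only after A ends instead of the two overlapping:
    u2.onstart = function () { console.log('B started'); };

    windowA.speechSynthesis.speak(u1);
    windowB.speechSynthesis.speak(u2);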

> 
> I filed this bug against the speech API to clarify the exact semantics of what should happen if multiple windows try to speak, but I think that no matter how this is resolved, we'll want at least some global state.
> 
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=21110

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.

