[Webkit-unassigned] [Bug 110760] WebSpeech: need global speech controller

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Mon Feb 25 09:09:14 PST 2013


https://bugs.webkit.org/show_bug.cgi?id=110760

--- Comment #2 from Dominic Mazzoni <dmazzoni at google.com>  2013-02-25 09:11:38 PST ---
(In reply to comment #1)
> If multiple windows try talking at the same time, it's unlikely the results will be good. A question that comes up is whether you want one window to know about speech synthesis usage in another window.

It's not necessarily a bad experience. A page with multiple frames might want to let more than one frame talk, for example. A page that speaks the current time once an hour might coexist with another page that speaks more interactively; it seems fine for the current time to just enqueue its utterance.
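To make that concrete, here's a rough sketch of what the hourly time page might do, assuming the speak() queuing behavior in the draft spec (each call appends an utterance to the shared queue rather than interrupting whatever is already speaking):

// Hypothetical example only: a page that announces the time once an
// hour. It assumes speak() enqueues the utterance behind whatever is
// already speaking (possibly from another frame or window) instead of
// interrupting it.
function announceTime(): void {
  const utterance = new SpeechSynthesisUtterance(
      "The time is now " + new Date().toLocaleTimeString());
  window.speechSynthesis.speak(utterance);  // enqueue; don't cancel anything
}

setInterval(announceTime, 60 * 60 * 1000);  // fire once an hour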

> At the same time, web synthesis will have no idea what's happening outside the browser, where there could also be something speaking. Since you won't know the state outside the browser, it didn't seem that useful to know the state outside the window either.

It's true that if another app outside the browser is tying up speech, you might just get an error.
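For completeness, here's a sketch of how a page could notice that case, assuming the utterance-level error event from the draft spec (the exact error values are still being discussed):

// Hypothetical example only: listen for the utterance-level error
// event, which is what a page would see if the platform synthesizer
// is busy or unavailable.
const utterance = new SpeechSynthesisUtterance("Hello from this window");

utterance.onerror = (event) => {
  // Treat this simply as "speech didn't happen" rather than relying
  // on any specific error code.
  console.warn("Speech synthesis failed:", event.error);
};

window.speechSynthesis.speak(utterance);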

> > I think that the queue needs to be pushed into the WebKit layer so that a multi-process browser can implement a single speech queue.
> 
> Why would it need to be in the WebKit layer?

I mean exposing some APIs so the embedder can implement the queuing if it wants, rather than having it be part of WebCore.
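Roughly (purely illustrative, not real WebKit or WebCore interfaces): WebCore would forward each utterance through a narrow client interface, and one embedder-side controller would own the queue, so ordering holds across pages and processes.

// Purely illustrative sketch -- not actual WebKit/WebCore interfaces.
interface SpeechRequest {
  text: string;
  onFinished: (errorMessage?: string) => void;
}

// Implemented once by the embedder (e.g. in the browser/UI process).
interface EmbedderSpeechClient {
  enqueue(request: SpeechRequest): void;
  cancelAll(): void;
}

// Single global queue; plays requests strictly in arrival order.
class GlobalSpeechController implements EmbedderSpeechClient {
  private queue: SpeechRequest[] = [];
  private speaking = false;

  enqueue(request: SpeechRequest): void {
    this.queue.push(request);
    this.maybeSpeakNext();
  }

  cancelAll(): void {
    this.queue = [];
  }

  private maybeSpeakNext(): void {
    if (this.speaking || this.queue.length === 0) return;
    const next = this.queue.shift()!;
    this.speaking = true;
    // A real implementation would hand off to the platform synthesizer
    // here; a timer stands in for "done speaking" in this sketch.
    setTimeout(() => {
      next.onFinished();
      this.speaking = false;
      this.maybeSpeakNext();
    }, 1000);
  }
}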

Let's see what the consensus is on the spec and go from there.
