[webkit-reviews] review canceled: [Bug 218855] Implement audio capture for SpeechRecognition on macOS : [Attachment 414314] Patch

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Tue Nov 17 12:52:39 PST 2020

Sihui Liu <sihui_liu at apple.com> has canceled Sihui Liu <sihui_liu at apple.com>'s
request for review:
Bug 218855: Implement audio capture for SpeechRecognition on macOS

Attachment 414314: Patch


--- Comment #6 from Sihui Liu <sihui_liu at apple.com> ---
Comment on attachment 414314
  --> https://bugs.webkit.org/attachment.cgi?id=414314

View in context: https://bugs.webkit.org/attachment.cgi?id=414314&action=review

>> Source/WebCore/Modules/speech/SpeechRecognizerDelegate.h:39
>> +	virtual CaptureSourceOrError createCaptureSource() = 0;
> It is a bit weird to have this callback here.
> It might be simpler to focus this interface on reporting state only.

Okay, will remove.

>> Source/WebCore/Modules/speech/cocoa/SpeechRecognizerCocoa.mm:54
>> +	auto sourceOrError = m_delegate->createCaptureSource();
> Instead of doing that, it seems SpeechRecognizer could get a
Ref<RealtimeMediaSource> as input.
> If needed, we could add a completion handler to start method as well.

Okay, will add Ref<RealtimeMediaSource> as input.

>> Source/WebCore/Modules/speech/cocoa/SpeechRecognizerCocoa.mm:73
>> +	m_delegate->didStop();
> Should we add a completion handler to stop instead?

Probably not, because the completion handler of stop() may not be called if
abort() is called after stop().

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:101
>> +	    startNextGrantedRequest();
> These changes seem somehow independent of the audio capture part.
> Maybe we should split them.

That sounds good. I added this here as I was figuring out what
SpeechRecognitionServer should do after creating SpeechRecognizer.

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:118
>> +	m_grantedRequests.append(makeWeakPtr(request));
> I would have expected that a granted request would override the previous one,
> i.e. stop it, I guess.
> Is this behavior shared with Chrome and Firefox?

You are right; I tried on Chrome and it aborts the ongoing request when another
request is granted. Will change this.

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:137
>> +	m_recognizer->start();
> Is it needed to recreate a recogniser for every request?

Not really, I guess. It just makes it easier to distinguish between abort and
stop (stop: wait until the recognizer acknowledges it has stopped; abort: throw
everything away immediately).
And technically we will need different recognizers for requests in different
languages, though that is not needed for now.

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:168
>> +	}
> I would try to search for the default device and use that one, otherwise pick
> the first enabled device.
> This does not require a Vector though.

Sure, will change.

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:175
>> +	return CaptureSourceOrError { "createCaptureSource is not implemented"
> On iOS, we might need to send IPC to WebProcess to create a realtime media
> source there instead of in UIProcess.

Yes, it seems we need to create something like RemoteRealtimeMediaSource in the
UI process. Will try that in another patch.

>> Source/WebKit/UIProcess/SpeechRecognitionServer.cpp:186
>> +void SpeechRecognitionServer::didStartCapturingAudio()
> Maybe we could simplify the API so that there is a sendUpdate delegate taking
> a WebCore::SpeechRecognitionUpdateType?
> That would reduce the delegate API a bit.

You mean simplifying the interface of SpeechRecognizerDelegate? Okay, will try.

More information about the webkit-reviews mailing list