[webkit-reviews] review canceled: [Bug 217780] Add stubs for SpeechRecognition : [Attachment 411532] Patch

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Fri Oct 16 11:09:27 PDT 2020

Sihui Liu <sihui_liu at apple.com> has canceled Sihui Liu <sihui_liu at apple.com>'s
request for review:
Bug 217780: Add stubs for SpeechRecognition

Attachment 411532: Patch


--- Comment #11 from Sihui Liu <sihui_liu at apple.com> ---
Comment on attachment 411532
  --> https://bugs.webkit.org/attachment.cgi?id=411532

View in context: https://bugs.webkit.org/attachment.cgi?id=411532&action=review

>> Source/WebCore/Modules/speech/DOMWindow+SpeechRecognition.idl:31
>> +	[ImplementedAs=speechRecognition] readonly attribute SpeechRecognition
> Why webkitSpeechRecognition and not speechRecognition?
> This is probably not as per spec.
> Also, it might be good to add a runtime flag.

Some sites are using the webkit-prefixed version, as that is what Blink
originally supported, so I just added it here. Will add a comment.

Sure, will add a runtime flag.
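
A sketch of what the guarded, prefixed attribute might look like in the IDL (the exact setting name used by the runtime flag is an assumption, not from the patch):

```
// DOMWindow+SpeechRecognition.idl (sketch; setting name is hypothetical)
[
    EnabledBySetting=SpeechRecognition
] partial interface DOMWindow {
    [ImplementedAs=speechRecognition] readonly attribute SpeechRecognition webkitSpeechRecognition;
};
```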

>> Source/WebCore/Modules/speech/DOMWindowSpeechRecognition.cpp:81
>> +	if (page->isClosing())
> Could be if (!page || page->isClosing())
> I do not see those checks though in some other DOMWindow supplements.
> Why are we adding them?

I probably picked this up from the storage code. Taking a deeper look now, we
may not need this, as long as we stop processing requests correctly when
objects are destroyed, so I will remove it.

>> Source/WebCore/Modules/speech/DOMWindowSpeechRecognition.h:31
>> +#include "SpeechRecognition.h"
> It seems we could forward declare SpeechRecognition

Sure, will move.

>> Source/WebCore/Modules/speech/SpeechRecognition.cpp:46
>> +	return adoptRef(*new SpeechRecognition(document));
> Should we add a call to suspendIfNeeded?

Yes, based on the test failure. Will add.

>> Source/WebCore/Modules/speech/SpeechRecognition.h:58
>> +	void stop();
> There might be a name conflict with ActiveDOMObject stop() that we might want to implement in the future.
> Maybe we should name it stopRecognition and startRecognition as well.

Okay, will rename.

>> Source/WebCore/Modules/speech/SpeechRecognition.h:86
>> +	const char* activeDOMObjectName() const;
> I would add // ActiveDOMObject.
> Should be final.
> What is our plan for suspend/resume?

I had not considered this. Looks like we can abort the outstanding request when
suspending.

>> Source/WebCore/Modules/speech/SpeechRecognitionAlternative.cpp:37
>> +Ref<SpeechRecognitionAlternative> SpeechRecognitionAlternative::create(const String& transcript, double
> Could be String&& here and constructor


>> Source/WebCore/Modules/speech/SpeechRecognitionAlternative.h:31
>> +#include <wtf/RefCounted.h>
> Might require <wtf/text/WTFString.h> ?


>> Source/WebCore/Modules/speech/SpeechRecognitionError.h:46
>> +	Optional<SpeechRecognitionErrorType> type;
> Why Optional? Should we add a general/unknown error instead?

Optional here is used to decide whether this is an error or not: no type means
no error. Thinking about it now, maybe I should use
Optional<SpeechRecognitionError> to check whether there is an error, instead of
asking SpeechRecognitionError.

>> Source/WebCore/Modules/speech/SpeechRecognitionRequest.h:39
>> +	WEBCORE_EXPORT static Ref<SpeechRecognitionRequest> create(SpeechRecognitionClient*, const String& lang, bool continuous, bool interimResults, uint64_t maxAlternatives);
> Can we pass a SpeechRecognitionClient&?


>> Source/WebCore/Modules/speech/SpeechRecognitionRequest.h:42
>> +	SpeechRecognitionRequestData data() const { return m_data; }
> const SpeechRecognitionRequestData&


>> Source/WebCore/Modules/speech/SpeechRecognitionRequestData.h:32
>> +#include "SpeechRecognitionClient.h"
> Is this one needed?

Will check and remove if possible.

>> Source/WebCore/Modules/speech/SpeechRecognitionRequestData.h:39
>> +*/
> To remove?

Oh, I was about to add an identifier to the request so the server and client
can verify they are processing the same request.
This may not be necessary, but it is nice to have. Will add.

>> Source/WebCore/page/SpeechRecognitionConnection.h:40
>> +	virtual void abort(SpeechRecognitionClient*) = 0;
> SpeechRecognitionClient& would be better.


>> +void WebSpeechRecognitionConnection::start(WebCore::SpeechRecognitionClient* client, const String& lang, bool continuous, bool interimResults, uint64_t
> SpeechRecognitionClient& here and below


>> +	processPendingRequest();
> There is a SpeechRecognition/Client per document.
> Does this mean that requests should be processed one-by-one on a per-page scope or on a per-document scope?
> It seems this is per page now but I want to confirm this is the right

Ah, you've pointed out a mistake I made! I revisited the spec and found
SpeechRecognition is not [SameObject]; there should be multiple
SpeechRecognition/Clients per document. Will update that.

I think ultimately an app can only handle one speech recognition request at a
time (like MediaRecorder with a specified stream?), so maybe it's more
appropriate to handle them one-by-one per process.
I just followed SpeechSynthesis and made it per page. What do you think?

>> Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:42
>> +class WebSpeechRecognitionConnection : public WebCore::SpeechRecognitionConnection {
> final?


>> Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:45
>> +	WebSpeechRecognitionConnection(WebPage&);
> explicit


>> Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:49
>> +	void abort(WebCore::SpeechRecognitionClient*) final;
> Can be all private?

Will move.

>> Source/WebKit/WebProcess/WebCoreSupport/WebSpeechRecognitionConnection.h:60
>> +	Deque<RefPtr<WebCore::SpeechRecognitionRequest>> m_pendingRequests;
> Deque<Ref<>>

