[webkit-reviews] review canceled: [Bug 150358] Support for promise rejection events (unhandledrejection) : [Attachment 307850] [PATCH] Proposed Fix

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Tue Apr 25 14:27:36 PDT 2017


Joseph Pecoraro <joepeck at webkit.org> has canceled Joseph Pecoraro
<joepeck at webkit.org>'s request for review:
Bug 150358: Support for promise rejection events (unhandledrejection)
https://bugs.webkit.org/show_bug.cgi?id=150358

Attachment 307850: [PATCH] Proposed Fix

https://bugs.webkit.org/attachment.cgi?id=307850&action=review




--- Comment #164 from Joseph Pecoraro <joepeck at webkit.org> ---
Comment on attachment 307850
  --> https://bugs.webkit.org/attachment.cgi?id=307850
[PATCH] Proposed Fix

View in context: https://bugs.webkit.org/attachment.cgi?id=307850&action=review

>> LayoutTests/ChangeLog:38
>> +	    the work is done as it may eliminate non-deterministic debug test output.
> 
> I'm afraid that some tests might be reporting some errors in completion handlers.
> It seems to me there isn't much harm to rebaselining the tests with unhandled promise rejection logs so long as they're deterministic,
> although we may want to do that in a separate patch since I'd expect there will be a lot of tests that have unhandled rejected promises.

I understand the concern but that seems like it would be an inappropriate use
of testharness.js.

add_completion_callback is used to add a callback that runs _after_ a set of
tests completes; it is provided the harness status (PASS, FAIL, TIMEOUT). So
effectively it runs after the tests have completed. If a test had asynchronous
work that would have caused it to fail, it should have failed the test earlier.

This also appears to match how PASS/FAIL is reported on w3c-test.org. Those
tests may have unexpected promise rejections reported after the tests complete,
but the test still shows a PASS at the top. They have something like:

    WindowTestEnvironment.prototype.on_tests_ready = function() {
        ...
        add_completion_callback(function (tests, harness_status) {
            this_obj.output_handler.show_results(tests, harness_status);
        });
        ...
    };

That roughly matches what we do here.

The only reason I left in the setTimeout is that some tests decide to "clean
up" the page in completion handlers. In one Resource Timing test this is
important: the cleanup hides a lot of debug output written to the page
containing raw timing numbers (which are dynamic), and that output needs to be
hidden for the test to have deterministic output.

We could be stricter here and disable testharness.js's window.onerror /
window.onunhandledpromiserejection once the test is done, but I think that
might hide unintended issues that should be addressed rather than hidden.

>> LayoutTests/js/dom/dom-static-property-for-in-iteration-expected.txt:115
>> +PASS a["offsetTop"] is 1719
> 
> Why did this change? Does this change each time a new global property is added!?

Correct. This test logs all of the properties of an element using the
equivalent of `for (var i in element)`. The change likely happens because we
insert the <p id="description"> and #console as the first children of <body>,
so as the console fills up with PASS messages for earlier properties, by the
time we evaluate offsetTop its value includes the height of all the console
messages preceding it. I can clean this up, but it doesn't seem important.

>> LayoutTests/js/dom/unhandled-promise-rejection-basic.html:18
>> +	shouldBe(`error.type`, `"unhandledrejection"`);
> 
> It seems template literal isn't needed in all these shouldBe statements.

This is a style used heavily by JavaScriptCore tests, and one I tend to agree
with. Any time we use a string to evaluate code, template literals are vastly
friendlier than string literals.
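To illustrate, here is a minimal sketch. This `shouldBe` is a simplified stand-in for the js-test helper, not the real implementation; the point is only that when the expected value is itself a quoted string, template literals avoid a thicket of escaped quotes.

```javascript
// Simplified stand-in for the js-test shouldBe helper (assumption: the
// real helper also evals both arguments and compares the results).
function shouldBe(actualCode, expectedCode) {
  const actual = eval(actualCode);
  const expected = eval(expectedCode);
  const pass = actual === expected;
  console.log(pass ? `PASS ${actualCode} is ${expectedCode}`
                   : `FAIL ${actualCode} is ${actual}, expected ${expectedCode}`);
  return pass;
}

const error = { type: "unhandledrejection" };

// With plain string literals, quoting the expected string nests awkwardly:
shouldBe('error.type', '"unhandledrejection"');

// With template literals, the evaluated code reads exactly as written:
shouldBe(`error.type`, `"unhandledrejection"`);
```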

Most of these comments on the LayoutTests are style preferences that I'd prefer
to keep. They are mostly canonical patterns used in tests, and given the tests
are pretty small they should be easy to follow.

>> LayoutTests/js/dom/unhandled-promise-rejection-console-no-report.html:27
>> +}, 100);
> 
> Why 100ms? setTimeout(0) should be sufficient given that would be in the next runloop.
> Promise rejection happens at the end of the current micro task.

While that is technically true, this test is trying to detect an incorrect
implementation! If an incorrect implementation emitted the event later than a
microtask, setTimeout(0) would not have caught it.
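For context, the ordering argument can be sketched in plain JavaScript (outside testharness.js): microtasks, including promise reactions and the point where a conforming engine reports unhandled rejections, drain before any timer callback fires, so setTimeout(0) only catches implementations that already get the ordering right.

```javascript
// Microtasks (promise reactions) drain before macrotasks (timers), so a
// 0ms timer already fires after the microtask checkpoint where a
// conforming engine would report an unhandled rejection. A longer delay
// additionally catches a broken implementation that reports later than
// the next turn of the event loop.
const order = [];

Promise.resolve().then(() => order.push("microtask"));
setTimeout(() => order.push("setTimeout(0)"), 0);

setTimeout(() => {
  console.log(order.join(" then "));  // "microtask then setTimeout(0)"
}, 10);
```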

>> LayoutTests/js/dom/unhandled-promise-rejection-console-report-expected.txt:4
>> +Test unhandled promise rejection event produces console messages.
> 
> The condition under which this test passes should be included in the description,
> e.g. "The test passes when you see three console messages for unhandled promise rejection".

Sounds good!

>> LayoutTests/js/dom/unhandled-promise-rejection-console-report.html:19
>> +setTimeout(function () { finishJSTest(); }, 100);
> 
> Ditto about not needing 100ms delay.

In this case we should be able to do setTimeout 0, so I'll update this one.

>> LayoutTests/js/dom/unhandled-promise-rejection-handle-during-event.html:31
>> +	window.promise[i] = Promise.reject(i);
> 
> What's the significance of having three rejected promises?
> Also, I'd prefer if this was done inside evalAndLog so that it's apparent in the expected result.

This is testing that the unhandled promise rejection handler is called with
only the unhandled promise; the others are handled, so they should not trigger
the handler.
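The shape of the test can be sketched outside the harness. Node's process-level `unhandledRejection` event is the closest analogue to the window's `unhandledrejection` event (assumption: running under plain Node.js, not the layout-test harness):

```javascript
// Three rejected promises; two get handlers attached synchronously, so
// only the middle one should reach the unhandled-rejection hook.
const reported = [];
process.on("unhandledRejection", (reason) => reported.push(reason));

const promises = [Promise.reject(0), Promise.reject(1), Promise.reject(2)];
promises[0].catch(() => {});  // handled
promises[2].catch(() => {});  // handled

setTimeout(() => {
  // Only the reason from promises[1] should have been reported.
  if (reported.length !== 1 || reported[0] !== 1)
    throw new Error(`expected [1], got ${JSON.stringify(reported)}`);
}, 0);
```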

>> LayoutTests/js/dom/unhandled-promise-rejection-handle-in-handler.html:31
>> +	}, 100);
> 
> Ditto about not needing 100ms delay.

A 0 here should be correct, I'll change.

>> LayoutTests/js/dom/unhandled-promise-rejection-order.html:28
>> +	    finishJSTest();
> 
> Manually managing index, why don't we push each error into a global array,
> and then call a separate function in setTimeout(~, 0) to assert them all at once?
> That'd make it possible to always call finishJSTest in that function.

Sounds good.

