[Webkit-unassigned] [Bug 186359] Add script to update web-platform-tests expectations after import
bugzilla-daemon at webkit.org
Mon Jun 11 13:56:12 PDT 2018
--- Comment #9 from Brendan McLoughlin <brendan at bocoup.com> ---
Comment on attachment 342430
View in context: https://bugs.webkit.org/attachment.cgi?id=342430&action=review
>> - self._filesystem.remove(results_json_path)
> Is this change actually needed?
The update-w3c-test-expectations script reads the failing tests from the `full_results.json` file and uses them to figure out which test expectations to update. If `full_results.json` is removed, there is no way to get this level of detail on why the tests failed.
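For reference, here is a minimal sketch of how failing tests can be extracted from a `full_results.json`-style payload. The nested-dict layout and the `actual`/`expected` field names are assumptions based on typical run-webkit-tests output, not the script's exact code:

```python
import json

def collect_failing_tests(results_json_text):
    """Walk the nested 'tests' tree and collect tests whose actual
    result differs from the expected one, keyed by test path."""
    data = json.loads(results_json_text)
    failing = {}

    def walk(node, path):
        # Leaf nodes carry 'actual'/'expected' result strings.
        if 'actual' in node and 'expected' in node:
            if node['actual'] != node['expected']:
                failing['/'.join(path)] = node['actual']
            return
        for name, child in node.items():
            walk(child, path + [name])

    walk(data.get('tests', {}), [])
    return failing

# Hypothetical payload for illustration only.
sample = json.dumps({'tests': {'imported': {'w3c': {
    'a.html': {'expected': 'PASS', 'actual': 'TEXT'},
    'b.html': {'expected': 'PASS', 'actual': 'PASS'},
}}}})
print(collect_failing_tests(sample))
```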
>> +# add unittests
> We need some unit tests before landing this script.
> Maybe there could be a way to pass a run_webkit_test caller as part of the creation of TestExpectationUpdater.
> Then, we could try to get the results of the updated files, such as TestExpectations.
>> + args = ['Tools/Scripts/run-webkit-tests'] + self._option_args
> I wonder whether we should try to call python directly if possible since run-webkit-tests is already python.
> If this is not straightforward, please add a FIXME.
Oops. I already refactored this to call run-webkit-tests via Python. I'll remove this reference to the executable.
>> + myfile.write('\n%s [ %s ]\n' % (test, expectation))
> We might want to use filesystem.py here so that this works in unit tests.
> Maybe open_text_file_for_writing would work with should_append equal to true.
> I also wonder whether it would not be good to buffer all changes to the set expectation file.
> That way, we can log it in the console (debug level maybe) and open/write the file once.
I originally went down the path of buffering all the changes until the end. However, it made it difficult to isolate flaky tests when re-running run-webkit-tests after updating the expectations to verify the new expectations.
That was before I grouped the expectations by failure type, though. I'll do another experiment to test it, since the additional context of the original failure-type buckets may make it easier to identify the flaky tests.
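The buffered approach the review suggests could look roughly like this. The `FakeFileSystem`, `ExpectationBuffer`, and the exact `should_append` semantics are assumptions for illustration; the real filesystem abstraction lives in webkitpy's filesystem.py:

```python
import io
from contextlib import contextmanager

class FakeFileSystem:
    """In-memory stand-in for webkitpy's FileSystem, just enough for
    this sketch and for use in unit tests."""
    def __init__(self):
        self.files = {}

    @contextmanager
    def open_text_file_for_writing(self, path, should_append=False):
        buf = io.StringIO()
        yield buf
        prev = self.files.get(path, '') if should_append else ''
        self.files[path] = prev + buf.getvalue()

class ExpectationBuffer:
    """Buffer expectation lines, grouped by failure type, then flush
    them in a single append so the changes can also be logged once."""
    def __init__(self, filesystem, path):
        self._filesystem = filesystem
        self._path = path
        self._by_failure = {}

    def add(self, failure_type, test, expectation):
        self._by_failure.setdefault(failure_type, []).append((test, expectation))

    def flush(self):
        lines = ['%s [ %s ]' % (test, expectation)
                 for failure_type in sorted(self._by_failure)
                 for test, expectation in self._by_failure[failure_type]]
        with self._filesystem.open_text_file_for_writing(self._path, should_append=True) as f:
            f.write('\n%s\n' % '\n'.join(lines))
        return lines

fs = FakeFileSystem()
updater = ExpectationBuffer(fs, 'TestExpectations')
updater.add('TEXT', 'imported/w3c/a.html', 'Failure')
updater.flush()
print(fs.files['TestExpectations'])
```

Injecting the filesystem object this way is what makes the reviewer's unit-test suggestion workable: tests can pass a fake and assert on the written contents.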
>> + tests = self._pre_process_tests(data['tests'])
> Ditto there, it might be better to keep the list of tests once.
> Next line seems to suggest we could want to have a matching_tests member that would organize test information keyed by the name.
After updating an expectation, most of the methods call `self._run_webkit_tests` to re-run the test and check whether the new expectation worked. This helps catch flaky tests and testharness tests that always generate unique output. `_extract_failing_test_result` makes a fresh read of the updated `full_results.json` to get the new test status.
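That re-run-and-verify flow can be sketched like this. The helper names and the shape of the parsed results are hypothetical stand-ins, not the script's actual API; the callables are injected so unit tests can stub them:

```python
def verify_updated_expectation(run_webkit_tests, read_results, test_name):
    """Re-run one test after its expectation was updated, then make a
    fresh read of the regenerated full_results.json to confirm the new
    expectation holds."""
    run_webkit_tests([test_name])
    result = read_results().get(test_name, {})
    # A flaky test, or a testharness test with unique output each run,
    # will come back with a different 'actual' result.
    return result.get('actual') == result.get('expected')

# Stubbed usage for illustration.
calls = []
def fake_run(tests):
    calls.append(tests)
def fake_read():
    return {'a.html': {'actual': 'TEXT', 'expected': 'TEXT'}}

print(verify_updated_expectation(fake_run, fake_read, 'a.html'))
```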
>> + pass
> Why do we need pass here?
It isn't needed. I will remove that.