<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[247569] trunk/Tools</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/247569">247569</a></dd>
<dt>Author</dt> <dd>aakash_jain@apple.com</dd>
<dt>Date</dt> <dd>2019-07-18 12:03:17 -0700 (Thu, 18 Jul 2019)</dd>
</dl>

<h3>Log Message</h3>
<pre>[ews-build] Add build step to AnalyzeLayoutTestsResults
https://bugs.webkit.org/show_bug.cgi?id=199877

Reviewed by Jonathan Bedard.

Logic is ported from webkitpy/tool/bot/patchanalysistask.py::_retry_layout_tests()

* BuildSlaveSupport/ews-build/steps.py:
(RunWebKitTestsWithoutPatch.evaluateCommand): Invoke the AnalyzeLayoutTestsResults step.
(AnalyzeLayoutTestsResults): Build step to analyze layout-test results.
(AnalyzeLayoutTestsResults.report_failure):
(AnalyzeLayoutTestsResults.report_pre_existing_failures):
(AnalyzeLayoutTestsResults.retry_build):
(AnalyzeLayoutTestsResults._results_failed_different_tests):
(AnalyzeLayoutTestsResults._report_flaky_tests):
(AnalyzeLayoutTestsResults.start):
* BuildSlaveSupport/ews-build/steps_unittest.py: Added unit tests.</pre>
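The decision logic this step ports — compare the failures from two runs with the patch applied against a clean-tree run, then fail, pass, or retry — can be sketched as a standalone function. This is an illustrative sketch only, not the Buildbot step API; the function name, parameters, and the `('fail'/'pass'/'retry', …)` return convention are all invented here:

```python
def analyze(first_failures, second_failures, clean_tree_failures,
            first_exceeded=False, second_exceeded=False, clean_tree_exceeded=False):
    """Return ('fail', new_failures), ('pass', pre_existing), or ('retry', None).

    Failures present with the patch but absent from the clean tree are
    attributed to the patch.
    """
    first, second, clean = map(set, (first_failures, second_failures, clean_tree_failures))

    if first_exceeded and second_exceeded:
        # Many failures with the patch; if the clean tree fails almost as
        # many, flakiness cannot be ruled out, so defer judgement.
        if len(first) - len(clean) <= 5:
            return ('retry', None)
        return ('fail', first)

    if first_exceeded or second_exceeded:
        if clean_tree_exceeded:
            return ('retry', None)
        reliable = second if first_exceeded else first  # the run under the limit
        new = reliable - clean
        return ('fail', new) if new else ('retry', None)

    if first != second:
        # The two runs disagree: the symmetric difference is flaky; only
        # tests failing in both runs count as consistent failures.
        consistent = first & second
        if consistent and not clean_tree_exceeded:
            new = consistent - clean
            if new:
                return ('fail', new)
        return ('retry', None)

    if clean_tree_exceeded:
        return ('retry', None)
    new = first - clean
    if new:
        return ('fail', new)
    # Both runs failed exactly the pre-existing clean-tree failures.
    return ('pass', clean)
```

For example, identical failures in both patched runs that also fail on the clean tree yield a pass with pre-existing failures, whereas two runs that disagree with no common failure force a retry.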

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkToolsBuildSlaveSupportewsbuildstepspy">trunk/Tools/BuildSlaveSupport/ews-build/steps.py</a></li>
<li><a href="#trunkToolsBuildSlaveSupportewsbuildsteps_unittestpy">trunk/Tools/BuildSlaveSupport/ews-build/steps_unittest.py</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkToolsBuildSlaveSupportewsbuildstepspy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/BuildSlaveSupport/ews-build/steps.py (247568 => 247569)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/BuildSlaveSupport/ews-build/steps.py 2019-07-18 18:57:35 UTC (rev 247568)
+++ trunk/Tools/BuildSlaveSupport/ews-build/steps.py    2019-07-18 19:03:17 UTC (rev 247569)
</span><span class="lines">@@ -991,7 +991,7 @@
</span><span class="cx"> 
</span><span class="cx">     def evaluateCommand(self, cmd):
</span><span class="cx">         rc = shell.Test.evaluateCommand(self, cmd)
</span><del>-        self.build.addStepsAfterCurrentStep([ArchiveTestResults(), UploadTestResults(identifier='clean-tree'), ExtractTestResults(identifier='clean-tree')])
</del><ins>+        self.build.addStepsAfterCurrentStep([ArchiveTestResults(), UploadTestResults(identifier='clean-tree'), ExtractTestResults(identifier='clean-tree'), AnalyzeLayoutTestsResults()])
</ins><span class="cx">         return rc
</span><span class="cx"> 
</span><span class="cx">     def commandComplete(self, cmd):
</span><span class="lines">@@ -1007,6 +1007,107 @@
</span><span class="cx">         self._parseRunWebKitTestsOutput(logText)
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class AnalyzeLayoutTestsResults(buildstep.BuildStep):
+    name = 'analyze-layout-tests-results'
+    description = ['analyze-layout-test-results']
+    descriptionDone = ['analyze-layout-tests-results']
+
+    def report_failure(self, new_failures):
+        self.finished(FAILURE)
+        self.build.results = FAILURE
+        pluralSuffix = 's' if len(new_failures) > 1 else ''
+        new_failures_string = ', '.join([failure_name for failure_name in new_failures])
+        message = 'Found {} new Test failure{}: {}'.format(len(new_failures), pluralSuffix, new_failures_string)
+        self.descriptionDone = message
+        self.build.buildFinished([message], FAILURE)
+        return defer.succeed(None)
+
+    def report_pre_existing_failures(self, clean_tree_failures):
+        self.finished(SUCCESS)
+        self.build.results = SUCCESS
+        self.descriptionDone = 'Passed layout tests'
+        pluralSuffix = 's' if len(clean_tree_failures) > 1 else ''
+        message = 'Found {} pre-existing test failure{}'.format(len(clean_tree_failures), pluralSuffix)
+        self.build.buildFinished([message], SUCCESS)
+        return defer.succeed(None)
+
+    def retry_build(self, message=''):
+        self.finished(RETRY)
+        message = 'Unable to confirm if test failures are introduced by patch, retrying build'
+        self.descriptionDone = message
+        self.build.buildFinished([message], RETRY)
+        return defer.succeed(None)
+
+    def _results_failed_different_tests(self, first_results_failing_tests, second_results_failing_tests):
+        return first_results_failing_tests != second_results_failing_tests
+
+    def _report_flaky_tests(self, flaky_tests):
+        # TODO: implement this
+        pass
+
+    def start(self):
+        first_results_did_exceed_test_failure_limit = self.getProperty('first_results_exceed_failure_limit')
+        first_results_failing_tests = set(self.getProperty('first_run_failures', []))
+        second_results_did_exceed_test_failure_limit = self.getProperty('second_results_exceed_failure_limit')
+        second_results_failing_tests = set(self.getProperty('second_run_failures', []))
+        clean_tree_results_did_exceed_test_failure_limit = self.getProperty('clean_tree_results_exceed_failure_limit')
+        clean_tree_results_failing_tests = set(self.getProperty('clean_tree_run_failures', []))
+
+        if first_results_did_exceed_test_failure_limit and second_results_did_exceed_test_failure_limit:
+            if (len(first_results_failing_tests) - len(clean_tree_results_failing_tests)) <= 5:
+                # If we've made it here, then many tests are failing with the patch applied, but
+                # if the clean tree is also failing many tests, even if it's not quite as many,
+                # then we can't be certain that the discrepancy isn't due to flakiness, and hence we must defer judgement.
+                return self.retry_build()
+            return self.report_failure(first_results_failing_tests)
+
+        if second_results_did_exceed_test_failure_limit:
+            if clean_tree_results_did_exceed_test_failure_limit:
+                return self.retry_build()
+            failures_introduced_by_patch = first_results_failing_tests - clean_tree_results_failing_tests
+            if failures_introduced_by_patch:
+                return self.report_failure(failures_introduced_by_patch)
+            return self.retry_build()
+
+        if first_results_did_exceed_test_failure_limit:
+            if clean_tree_results_did_exceed_test_failure_limit:
+                return self.retry_build()
+            failures_introduced_by_patch = second_results_failing_tests - clean_tree_results_failing_tests
+            if failures_introduced_by_patch:
+                return self.report_failure(failures_introduced_by_patch)
+            return self.retry_build()
+
+        if self._results_failed_different_tests(first_results_failing_tests, second_results_failing_tests):
+            tests_that_only_failed_first = first_results_failing_tests.difference(second_results_failing_tests)
+            self._report_flaky_tests(tests_that_only_failed_first)
+
+            tests_that_only_failed_second = second_results_failing_tests.difference(first_results_failing_tests)
+            self._report_flaky_tests(tests_that_only_failed_second)
+
+            tests_that_consistently_failed = first_results_failing_tests.intersection(second_results_failing_tests)
+            if tests_that_consistently_failed:
+                if clean_tree_results_did_exceed_test_failure_limit:
+                    return self.retry_build()
+                failures_introduced_by_patch = tests_that_consistently_failed - clean_tree_results_failing_tests
+                if failures_introduced_by_patch:
+                    return self.report_failure(failures_introduced_by_patch)
+
+            # At this point we know that at least one test flaked, but no consistent failures
+            # were introduced. This is a bit of a grey-zone.
+            return self.retry_build()
+
+        if clean_tree_results_did_exceed_test_failure_limit:
+            return self.retry_build()
+        failures_introduced_by_patch = first_results_failing_tests - clean_tree_results_failing_tests
+        if failures_introduced_by_patch:
+            return self.report_failure(failures_introduced_by_patch)
+
+        # At this point, we know that the first and second runs had the exact same failures,
+        # and that those failures are all present on the clean tree, so we can say with certainty
+        # that the patch is good.
+        return self.report_pre_existing_failures(clean_tree_results_failing_tests)
+
+
</ins><span class="cx"> class RunWebKit1Tests(RunWebKitTests):
</span><span class="cx">     def start(self):
</span><span class="cx">         self.setCommand(self.command + ['--dump-render-tree'])
</span></span></pre></div>
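The flaky-vs-consistent split that `start()` performs with set difference and intersection can be seen in isolation. The helper below is illustrative, not code from the patch:

```python
def split_failures(first_run, second_run):
    """Partition failures from two runs of the same tests.

    Tests failing in only one run are considered flaky; tests failing in
    both runs are consistent failures worth comparing against the clean tree.
    """
    first, second = set(first_run), set(second_run)
    flaky = (first - second) | (second - first)  # symmetric difference
    consistent = first & second                  # failed in both runs
    return flaky, consistent
```

For example, `split_failures(['t1', 't2'], ['t1', 't3'])` yields `({'t2', 't3'}, {'t1'})`: only `t1` remains a candidate new failure to check against the clean-tree results.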
<a id="trunkToolsBuildSlaveSupportewsbuildsteps_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/BuildSlaveSupport/ews-build/steps_unittest.py (247568 => 247569)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/BuildSlaveSupport/ews-build/steps_unittest.py        2019-07-18 18:57:35 UTC (rev 247568)
+++ trunk/Tools/BuildSlaveSupport/ews-build/steps_unittest.py   2019-07-18 19:03:17 UTC (rev 247569)
</span><span class="lines">@@ -34,7 +34,7 @@
</span><span class="cx"> from twisted.python import failure, log
</span><span class="cx"> from twisted.trial import unittest
</span><span class="cx"> 
</span><del>-from steps import (AnalyzeAPITestsResults, AnalyzeCompileWebKitResults, ApplyPatch, ArchiveBuiltProduct, ArchiveTestResults,
</del><ins>+from steps import (AnalyzeAPITestsResults, AnalyzeCompileWebKitResults, AnalyzeLayoutTestsResults, ApplyPatch, ArchiveBuiltProduct, ArchiveTestResults,
</ins><span class="cx">                    CheckOutSource, CheckOutSpecificRevision, CheckPatchRelevance, CheckStyle, CleanBuild, CleanUpGitIndexLock, CleanWorkingDirectory,
</span><span class="cx">                    CompileJSCOnly, CompileJSCOnlyToT, CompileWebKit, CompileWebKitToT, ConfigureBuild,
</span><span class="cx">                    DownloadBuiltProduct, ExtractBuiltProduct, ExtractTestResults, InstallGtkDependencies, InstallWpeDependencies, KillOldProcesses,
</span><span class="lines">@@ -1135,6 +1135,111 @@
</span><span class="cx">         return self.runStep()
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class TestAnalyzeLayoutTestsResults(BuildStepMixinAdditions, unittest.TestCase):
+    def setUp(self):
+        self.longMessage = True
+        return self.setUpBuildStep()
+
+    def tearDown(self):
+        return self.tearDownBuildStep()
+
+    def configureStep(self):
+        self.setupStep(AnalyzeLayoutTestsResults())
+        self.setProperty('first_results_exceed_failure_limit', False)
+        self.setProperty('second_results_exceed_failure_limit', False)
+        self.setProperty('clean_tree_results_exceed_failure_limit', False)
+        self.setProperty('clean_tree_run_failures', [])
+
+    def test_failure_introduced_by_patch(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ["jquery/offset.html"])
+        self.setProperty('second_run_failures', ["jquery/offset.html"])
+        self.expectOutcome(result=FAILURE, state_string='Found 1 new Test failure: jquery/offset.html (failure)')
+        return self.runStep()
+
+    def test_failure_on_clean_tree(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ["jquery/offset.html"])
+        self.setProperty('second_run_failures', ["jquery/offset.html"])
+        self.setProperty('clean_tree_run_failures', ["jquery/offset.html"])
+        self.expectOutcome(result=SUCCESS, state_string='Passed layout tests')
+        return self.runStep()
+
+    def test_flaky_and_consistent_failures_without_clean_tree_failures(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ['test1', 'test2'])
+        self.setProperty('second_run_failures', ['test1'])
+        self.expectOutcome(result=FAILURE, state_string='Found 1 new Test failure: test1 (failure)')
+        return self.runStep()
+
+    def test_flaky_and_inconsistent_failures_without_clean_tree_failures(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ['test1', 'test2'])
+        self.setProperty('second_run_failures', ['test3'])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_flaky_and_inconsistent_failures_with_clean_tree_failures(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ['test1', 'test2'])
+        self.setProperty('second_run_failures', ['test3'])
+        self.setProperty('clean_tree_run_failures', ['test1', 'test2', 'test3'])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_flaky_and_consistent_failures_with_clean_tree_failures(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ['test1', 'test2'])
+        self.setProperty('second_run_failures', ['test1'])
+        self.setProperty('clean_tree_run_failures', ['test1', 'test2'])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_first_run_exceed_failure_limit(self):
+        self.configureStep()
+        self.setProperty('first_results_exceed_failure_limit', True)
+        self.setProperty('first_run_failures',  ['test{}'.format(i) for i in range(0, 30)])
+        self.setProperty('second_run_failures', [])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_second_run_exceed_failure_limit(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', [])
+        self.setProperty('second_results_exceed_failure_limit', True)
+        self.setProperty('second_run_failures',  ['test{}'.format(i) for i in range(0, 30)])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_clean_tree_exceed_failure_limit(self):
+        self.configureStep()
+        self.setProperty('first_run_failures', ['test1'])
+        self.setProperty('second_run_failures', ['test1'])
+        self.setProperty('clean_tree_results_exceed_failure_limit', True)
+        self.setProperty('clean_tree_run_failures',  ['test{}'.format(i) for i in range(0, 30)])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_clean_tree_has_lot_of_failures(self):
+        self.configureStep()
+        self.setProperty('first_results_exceed_failure_limit', True)
+        self.setProperty('first_run_failures', ['test{}'.format(i) for i in range(0, 30)])
+        self.setProperty('second_results_exceed_failure_limit', True)
+        self.setProperty('second_run_failures', ['test{}'.format(i) for i in range(0, 30)])
+        self.setProperty('clean_tree_run_failures', ['test{}'.format(i) for i in range(0, 27)])
+        self.expectOutcome(result=RETRY, state_string='Unable to confirm if test failures are introduced by patch, retrying build (retry)')
+        return self.runStep()
+
+    def test_clean_tree_has_some_failures(self):
+        self.configureStep()
+        self.setProperty('first_results_exceed_failure_limit', True)
+        self.setProperty('first_run_failures', ['test{}'.format(i) for i in range(0, 30)])
+        self.setProperty('second_results_exceed_failure_limit', True)
+        self.setProperty('second_run_failures', ['test{}'.format(i) for i in range(0, 30)])
+        self.setProperty('clean_tree_run_failures', ['test{}'.format(i) for i in range(0, 10)])
+        self.expectOutcome(result=FAILURE, state_string='Found 30 new Test failures: test1, test0, test3, test2, test5, test4, test7, test6, test9, test8, test24, test25, test26, test27, test20, test21, test22, test23, test28, test29, test19, test18, test11, test10, test13, test12, test15, test14, test17, test16 (failure)')
+        return self.runStep()
+
</ins><span class="cx"> class TestCheckOutSpecificRevision(BuildStepMixinAdditions, unittest.TestCase):
</span><span class="cx">     def setUp(self):
</span><span class="cx">         self.longMessage = True
</span></span></pre></div>
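The unit tests above drive the step entirely through build properties. The same pattern — seed a property dict, run the step, assert on the outcome — works with plain `unittest`; the sketch below uses a hypothetical `FakeStep` with a much-simplified `start()` (it skips the failure-limit cases), not the real Buildbot test harness:

```python
import unittest

# Stand-ins for Buildbot's result codes (illustrative values only).
FAILURE, SUCCESS, RETRY = 'failure', 'success', 'retry'

class FakeStep:
    """Property-driven step: reads failure lists and picks an outcome."""

    def __init__(self, properties):
        self._properties = dict(properties)

    def getProperty(self, name, default=None):
        return self._properties.get(name, default)

    def start(self):
        first = set(self.getProperty('first_run_failures', []))
        second = set(self.getProperty('second_run_failures', []))
        clean = set(self.getProperty('clean_tree_run_failures', []))
        if first != second:
            return RETRY  # the two runs disagree: defer judgement
        return FAILURE if first - clean else SUCCESS

class FakeStepTest(unittest.TestCase):
    def test_failure_introduced_by_patch(self):
        step = FakeStep({'first_run_failures': ['jquery/offset.html'],
                         'second_run_failures': ['jquery/offset.html']})
        self.assertEqual(step.start(), FAILURE)

    def test_failure_on_clean_tree(self):
        step = FakeStep({'first_run_failures': ['jquery/offset.html'],
                         'second_run_failures': ['jquery/offset.html'],
                         'clean_tree_run_failures': ['jquery/offset.html']})
        self.assertEqual(step.start(), SUCCESS)
```

Run with `python -m unittest` to exercise both cases; the real `BuildStepMixinAdditions` harness adds the property plumbing and `expectOutcome` assertions on top of this same idea.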
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (247568 => 247569)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog    2019-07-18 18:57:35 UTC (rev 247568)
+++ trunk/Tools/ChangeLog       2019-07-18 19:03:17 UTC (rev 247569)
</span><span class="lines">@@ -1,3 +1,23 @@
</span><ins>+2019-07-18  Aakash Jain  <aakash_jain@apple.com>
+
+        [ews-build] Add build step to AnalyzeLayoutTestsResults
+        https://bugs.webkit.org/show_bug.cgi?id=199877
+
+        Reviewed by Jonathan Bedard.
+
+        Logic is ported from webkitpy/tool/bot/patchanalysistask.py::_retry_layout_tests()
+
+        * BuildSlaveSupport/ews-build/steps.py:
+        (RunWebKitTestsWithoutPatch.evaluateCommand): Invoke the AnalyzeLayoutTestsResults step.
+        (AnalyzeLayoutTestsResults): Build step to analyze layout-test results.
+        (AnalyzeLayoutTestsResults.report_failure):
+        (AnalyzeLayoutTestsResults.report_pre_existing_failures):
+        (AnalyzeLayoutTestsResults.retry_build):
+        (AnalyzeLayoutTestsResults._results_failed_different_tests):
+        (AnalyzeLayoutTestsResults._report_flaky_tests):
+        (AnalyzeLayoutTestsResults.start):
+        * BuildSlaveSupport/ews-build/steps_unittest.py: Added unit tests.
+
</ins><span class="cx"> 2019-07-18  Alex Christensen  <achristensen@webkit.org>
</span><span class="cx"> 
</span><span class="cx">         Move NetworkCache ownership from NetworkProcess to NetworkSession
</span></span></pre>
</div>
</div>

</body>
</html>