<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[176211] trunk/Tools</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/176211">176211</a></dd>
<dt>Author</dt> <dd>commit-queue@webkit.org</dd>
<dt>Date</dt> <dd>2014-11-17 11:06:38 -0800 (Mon, 17 Nov 2014)</dd>
</dl>

<h3>Log Message</h3>
<pre>Having 30+ flaky failures breaks EWS
https://bugs.webkit.org/show_bug.cgi?id=138743

Patch by Jake Nielsen &lt;jacob_nielsen@apple.com&gt; on 2014-11-17
Reviewed by Alexey Proskuryakov.

Adds tests to ensure that the problem has been solved.
* Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py:
(test_first_failure_limit):
(test_first_failure_limit_with_some_tree_redness):
(test_second_failure_limit):
(test_tree_failure_limit_with_patch_that_potentially_fixes_some_redness):
(test_first_and_second_failure_limit):
(test_first_and_clean_failure_limit):
(test_first_second_and_clean_failure_limit):
(test_very_red_tree_retry): Deleted.
This test was effectively renamed to test_first_second_and_clean_failure_limit.
* Scripts/webkitpy/tool/bot/patchanalysistask.py:
Makes the appropriate changes to PatchAnalysisTask to make sure that
even when the first test run hits the failure limit, it will still try
a second run.
(PatchAnalysisTask._results_failed_different_tests):
(PatchAnalysisTask._test_patch):
(PatchAnalysisTask._continue_testing_patch_that_exceeded_failure_limit_on_first_or_second_try): Deleted.</pre>
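
<h3>Sketch of the New Retry Flow</h3>
<p>The following is a minimal, self-contained sketch of the decision flow described above and implemented in the patchanalysistask.py diff below. It is not the actual webkitpy code: TestRun, analyze, and FLAKINESS_MARGIN are hypothetical names introduced only for illustration; the 5-test margin mirrors the comparison visible in the real diff.</p>
<pre>
# A minimal sketch (hypothetical names, not the webkitpy API) of the flow this
# patch gives PatchAnalysisTask._test_patch: a first run that exceeds the
# test-failure limit no longer short-circuits the task; a second run is always
# attempted, and the clean tree is only consulted afterwards.

FLAKINESS_MARGIN = 5  # mirrors the "&lt;= 5" comparison in the real diff below


class TestRun(object):
    def __init__(self, failing_tests, exceeded_limit):
        self.failing_tests = failing_tests
        self.exceeded_limit = exceeded_limit


def analyze(first, second, clean):
    if first.exceeded_limit and second.exceeded_limit:
        # Both runs with the patch hit the limit: compare against the clean tree.
        # If the clean tree is nearly as red, the gap may be flakiness, so defer.
        if len(first.failing_tests) - len(clean.failing_tests) &lt;= FLAKINESS_MARGIN:
            return "DEFER"
        return "FAIL"
    if first.exceeded_limit or second.exceeded_limit:
        # Only one of the two runs hit the limit: defer rather than reject.
        return "DEFER"
    # Neither run with the patch hit the limit: fall through to the pre-existing
    # comparison against the clean tree (not sketched here).
    return "CONTINUE"


# Example: the first run hits the limit but the second run is clean, so we defer.
lots_of_failures = ["test-%s.html" % num for num in range(100)]
first = TestRun(lots_of_failures, exceeded_limit=True)
second = TestRun([], exceeded_limit=False)
clean = TestRun([], exceeded_limit=False)
print(analyze(first, second, clean))  # prints: DEFER
</pre>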

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptswebkitpytoolbotcommitqueuetask_unittestpy">trunk/Tools/Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpytoolbotpatchanalysistaskpy">trunk/Tools/Scripts/webkitpy/tool/bot/patchanalysistask.py</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (176210 => 176211)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2014-11-17 18:58:58 UTC (rev 176210)
+++ trunk/Tools/ChangeLog        2014-11-17 19:06:38 UTC (rev 176211)
</span><span class="lines">@@ -1,3 +1,29 @@
</span><ins>+2014-11-17  Jake Nielsen  &lt;jacob_nielsen@apple.com&gt;
+
+        Having 30+ flaky failures breaks EWS
+        https://bugs.webkit.org/show_bug.cgi?id=138743
+
+        Reviewed by Alexey Proskuryakov.
+
+        Adds tests to ensure that the problem has been solved.
+        * Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py:
+        (test_first_failure_limit):
+        (test_first_failure_limit_with_some_tree_redness):
+        (test_second_failure_limit):
+        (test_tree_failure_limit_with_patch_that_potentially_fixes_some_redness):
+        (test_first_and_second_failure_limit):
+        (test_first_and_clean_failure_limit):
+        (test_first_second_and_clean_failure_limit):
+        (test_very_red_tree_retry): Deleted.
+        This test was effectively renamed to test_first_second_and_clean_failure_limit.
+        * Scripts/webkitpy/tool/bot/patchanalysistask.py:
+        Makes the appropriate changes to PatchAnalysisTask to make sure that
+        even when the first test run hits the failure limit, it will still try
+        a second run.
+        (PatchAnalysisTask._results_failed_different_tests):
+        (PatchAnalysisTask._test_patch):
+        (PatchAnalysisTask._continue_testing_patch_that_exceeded_failure_limit_on_first_or_second_try): Deleted.
+
</ins><span class="cx"> 2014-11-17  Ting-Wei Lan  &lt;lantw44@gmail.com&gt;
</span><span class="cx"> 
</span><span class="cx">         [GTK] Add library search paths from LDFLAGS before pkg-config --libs
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpytoolbotcommitqueuetask_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py (176210 => 176211)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py        2014-11-17 18:58:58 UTC (rev 176210)
+++ trunk/Tools/Scripts/webkitpy/tool/bot/commitqueuetask_unittest.py        2014-11-17 19:06:38 UTC (rev 176211)
</span><span class="lines">@@ -159,6 +159,9 @@
</span><span class="cx">     pass
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+_lots_of_failing_tests = map(lambda num: &quot;test-%s.html&quot; % num, range(0, 100))
+
+
</ins><span class="cx"> class CommitQueueTaskTest(unittest.TestCase):
</span><span class="cx">     def _run_and_expect_patch_analysis_result(self, commit_queue, expected_analysis_result, expected_reported_flaky_tests=[], expect_clean_tests_to_run=False, expected_failure_status_id=0):
</span><span class="cx">         tool = MockTool(log_executive=True)
</span><span class="lines">@@ -398,15 +401,66 @@
</span><span class="cx"> 
</span><span class="cx">         self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.PASS, expect_clean_tests_to_run=True)
</span><span class="cx"> 
</span><del>-    def test_very_red_tree_retry(self):
-        lots_of_failing_tests = map(lambda num: &quot;test-%s.html&quot; % num, range(0, 100))
</del><ins>+    def test_first_failure_limit(self):
</ins><span class="cx">         commit_queue = MockSimpleTestPlanCommitQueue(
</span><del>-            first_test_failures=lots_of_failing_tests,
-            second_test_failures=lots_of_failing_tests,
-            clean_test_failures=lots_of_failing_tests)
</del><ins>+            first_test_failures=_lots_of_failing_tests,
+            second_test_failures=[],
+            clean_test_failures=[])
</ins><span class="cx"> 
</span><ins>+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True, expected_failure_status_id=1)
+
+    def test_first_failure_limit_with_some_tree_redness(self):
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=_lots_of_failing_tests,
+            second_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;],
+            clean_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;])
+
+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True, expected_failure_status_id=1)
+
+    def test_second_failure_limit(self):
+        # There need to be some failures in the first set of tests, or it won't even make it to the second test.
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;],
+            second_test_failures=_lots_of_failing_tests,
+            clean_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;])
+
+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True, expected_failure_status_id=2)
+
+    def test_tree_failure_limit_with_patch_that_potentially_fixes_some_redness(self):
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;],
+            second_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;, &quot;Fail3&quot;],
+            clean_test_failures=_lots_of_failing_tests)
+
+        # Unfortunately there are cases where the clean build will randomly fail enough tests to hit the failure limit.
+        # With that in mind, we can't actually know that this patch is good or bad until we see a clean run that doesn't
+        # exceed the failure limit.
</ins><span class="cx">         self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True)
</span><span class="cx"> 
</span><ins>+    def test_first_and_second_failure_limit(self):
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=_lots_of_failing_tests,
+            second_test_failures=_lots_of_failing_tests,
+            clean_test_failures=[])
+
+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.FAIL, expect_clean_tests_to_run=True, expected_failure_status_id=1)
+
+    def test_first_and_clean_failure_limit(self):
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=_lots_of_failing_tests,
+            second_test_failures=[],
+            clean_test_failures=_lots_of_failing_tests)
+
+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True)
+
+    def test_first_second_and_clean_failure_limit(self):
+        commit_queue = MockSimpleTestPlanCommitQueue(
+            first_test_failures=_lots_of_failing_tests,
+            second_test_failures=_lots_of_failing_tests,
+            clean_test_failures=_lots_of_failing_tests)
+
+        self._run_and_expect_patch_analysis_result(commit_queue, PatchAnalysisResult.DEFER, expect_clean_tests_to_run=True)
+
</ins><span class="cx">     def test_red_tree_patch_rejection(self):
</span><span class="cx">         commit_queue = MockSimpleTestPlanCommitQueue(
</span><span class="cx">             first_test_failures=[&quot;Fail1&quot;, &quot;Fail2&quot;],
</span></span></pre></div>
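<p>For reference, the outcomes asserted by the unit tests added above can be summarized by which runs hit the test-failure limit. This is a hypothetical restatement for illustration only; the dictionary and helper below are not part of webkitpy.</p>
<pre>
# Hypothetical summary of the results asserted by the tests added above, keyed by
# which runs exceeded the failure limit: (first run with patch, second run with
# patch, clean-tree run). Names and structure are illustrative, not webkitpy API.
EXPECTED_RESULT = {
    (True,  False, False): "DEFER",  # test_first_failure_limit
    (False, True,  False): "DEFER",  # test_second_failure_limit
    (False, False, True):  "DEFER",  # test_tree_failure_limit_with_patch_that_potentially_fixes_some_redness
    (True,  True,  False): "FAIL",   # test_first_and_second_failure_limit
    (True,  False, True):  "DEFER",  # test_first_and_clean_failure_limit
    (True,  True,  True):  "DEFER",  # test_first_second_and_clean_failure_limit
}

def expected_result(first_hit, second_hit, clean_hit):
    return EXPECTED_RESULT.get((first_hit, second_hit, clean_hit))

assert expected_result(True, True, False) == "FAIL"
assert expected_result(True, True, True) == "DEFER"
</pre>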
<a id="trunkToolsScriptswebkitpytoolbotpatchanalysistaskpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/tool/bot/patchanalysistask.py (176210 => 176211)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/tool/bot/patchanalysistask.py        2014-11-17 18:58:58 UTC (rev 176210)
+++ trunk/Tools/Scripts/webkitpy/tool/bot/patchanalysistask.py        2014-11-17 19:06:38 UTC (rev 176211)
</span><span class="lines">@@ -182,18 +182,6 @@
</span><span class="cx">         second_failing_tests = [] if not second else second.failing_tests()
</span><span class="cx">         return first_failing_tests != second_failing_tests
</span><span class="cx"> 
</span><del>-    def _continue_testing_patch_that_exceeded_failure_limit_on_first_or_second_try(self, results, results_archive, script_error):
-        self._build_and_test_without_patch()
-
-        # If we've made it here, then many (500) tests are failing with the patch applied, but
-        # if the clean tree is also failing many tests, even if it's not quite as many (495),
-        # then we can't be certain that the discrepancy isn't due to flakiness, and hence we must
-        # defer judgement.
-        if (len(results.failing_tests()) - len(self._delegate.test_results().failing_tests())) &lt;= 5:
-            return False
-
-        return self.report_failure(results_archive, results, script_error)
-
</del><span class="cx">     def _should_defer_patch_or_throw(self, failures_with_patch, results_archive_for_failures_with_patch, script_error, failure_id):
</span><span class="cx">         self._build_and_test_without_patch()
</span><span class="cx">         clean_tree_results = self._delegate.test_results()
</span><span class="lines">@@ -223,10 +211,7 @@
</span><span class="cx">         first_script_error = self._script_error
</span><span class="cx">         first_failure_status_id = self.failure_status_id
</span><span class="cx"> 
</span><del>-        if first_results.did_exceed_test_failure_limit():
-            return self._continue_testing_patch_that_exceeded_failure_limit_on_first_or_second_try(first_results, first_results_archive, first_script_error)
-
-        if self._test():
</del><ins>+        if self._test() and not first_results.did_exceed_test_failure_limit():
</ins><span class="cx">             # Only report flaky tests if we were successful at parsing results.json and archiving results.
</span><span class="cx">             if first_results and first_results_archive:
</span><span class="cx">                 self._report_flaky_tests(first_results.failing_test_results(), first_results_archive)
</span><span class="lines">@@ -237,9 +222,25 @@
</span><span class="cx">         second_script_error = self._script_error
</span><span class="cx">         second_failure_status_id = self.failure_status_id
</span><span class="cx"> 
</span><ins>+        if second_results.did_exceed_test_failure_limit() and first_results.did_exceed_test_failure_limit():
+            self._build_and_test_without_patch()
+            clean_tree_results = self._delegate.test_results()
+
+            if (len(first_results.failing_tests()) - len(clean_tree_results.failing_tests())) &lt;= 5:
+                return False
+
+            self.failure_status_id = first_failure_status_id
+
+            return self.report_failure(first_results_archive, first_results, first_script_error)
+
</ins><span class="cx">         if second_results.did_exceed_test_failure_limit():
</span><del>-            return self._continue_testing_patch_that_exceeded_failure_limit_on_first_or_second_try(second_results, second_results_archive, second_script_error)
</del><ins>+            self._should_defer_patch_or_throw(first_results.failing_test_results(), first_results_archive, first_script_error, first_failure_status_id)
+            return False
</ins><span class="cx"> 
</span><ins>+        if first_results.did_exceed_test_failure_limit():
+            self._should_defer_patch_or_throw(second_results.failing_test_results(), second_results_archive, second_script_error, second_failure_status_id)
+            return False
+
</ins><span class="cx">         if self._results_failed_different_tests(first_results, second_results):
</span><span class="cx">             first_failing_results_set = frozenset(first_results.failing_test_results())
</span><span class="cx">             second_failing_results_set = frozenset(second_results.failing_test_results())
</span></span></pre>
</div>
</div>

</body>
</html>