<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[277781] trunk/Tools</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/277781">277781</a></dd>
<dt>Author</dt> <dd>gsnedders@apple.com</dd>
<dt>Date</dt> <dd>2021-05-20 06:48:42 -0700 (Thu, 20 May 2021)</dd>
</dl>

<h3>Log Message</h3>
<pre>Store whether a test is slow on TestInput
https://bugs.webkit.org/show_bug.cgi?id=224563

Reviewed by Jonathan Bedard.

Notably, this also makes a TestResult store a TestInput rather than a
test_name string. With that in place, we no longer need to punch through
multiple layers to find out whether a test is slow. Note that replacing the
test_name with a Test or TestInput is part of removing the 1:1 relationship
between files and tests.

With this done, we don't have to pass around a test_is_slow_fn, as we can directly
look at the result to determine whether or not it is slow.

* Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
(LayoutTestRunner.__init__): Remove test_is_slow_fn argument
(LayoutTestRunner._mark_interrupted_tests_as_skipped): Remove test_is_slow argument
(LayoutTestRunner._update_summary_with_result): Remove test_is_slow argument
(Worker._run_test_in_another_thread): Remove test_is_slow argument
* Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
(LayoutTestRunnerTests._runner): Remove test_is_slow_fn argument
(LayoutTestRunnerTests.test_update_summary_with_result): TestResult arg rename
* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager): Improve docstring
(Manager.__init__): Tidy up reading tests-options.json
(Manager._test_input_for_file): Set is_slow
(Manager.run): Remove test_is_slow_fn argument
(Manager._look_for_new_crash_logs): Remove test_is_slow_fn/test_is_slow argument
* Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
(SingleTestRunner.__init__): Store TestInput object
(SingleTestRunner._test_name): Replacement getter
(SingleTestRunner._should_run_pixel_test): Replacement getter
(SingleTestRunner._should_dump_jsconsolelog_in_stderr): Replacement getter
(SingleTestRunner._reference_files): Replacement getter
(SingleTestRunner._timeout): Replacement getter
(SingleTestRunner._compare_output): Pass TestInput to TestResult
(SingleTestRunner._run_reftest): Pass TestInput to TestResult
(SingleTestRunner._compare_output_with_reference): Pass TestInput to TestResult
* Scripts/webkitpy/layout_tests/models/test_input.py:
(TestInput): Add is_slow boolean
* Scripts/webkitpy/layout_tests/models/test_results.py:
(TestResult.__init__): Rename test_name -> test_input, construct TestInput if we must
(TestResult.test_name): Replacement getter
* Scripts/webkitpy/layout_tests/models/test_results_unittest.py:
(TestResultsTest.test_pickle_roundtrip): TestResult arg rename
* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(TestRunResults.add): Remove test_is_slow argument, look at TestResult
* Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
(summarized_results): Remove test_is_slow argument
* Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
(RunTest.test_tests_options): Add a test that tests-options.json works</pre>
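The refactor described in the log message can be sketched in miniature. These are hypothetical, heavily simplified stand-ins for the webkitpy classes, not the actual implementations: TestInput carries an is_slow flag, TestResult stores a whole TestInput (constructing one from a bare string if we must) and exposes test_name as a replacement getter, and TestRunResults.add reads slowness off the result instead of taking a test_is_slow argument.

```python
# Hypothetical sketch of the refactor: slowness lives on TestInput, and
# TestResult carries the TestInput, so no test_is_slow_fn is passed around.
class TestInput(object):
    def __init__(self, test_name, timeout=None, is_slow=False):
        self.test_name = test_name
        self.timeout = timeout
        self.is_slow = is_slow


class TestResult(object):
    def __init__(self, test_input, failures=None):
        # Accept a plain string for backwards compatibility, constructing
        # a TestInput if we must (mirroring TestResult.__init__ above).
        if isinstance(test_input, str):
            test_input = TestInput(test_input)
        self.test_input = test_input
        self.failures = failures or []

    @property
    def test_name(self):
        # Replacement getter: delegate to the stored TestInput.
        return self.test_input.test_name


class TestRunResults(object):
    def __init__(self):
        self.slow_tests = set()

    def add(self, result, expected=True):
        # No test_is_slow argument: look directly at the TestResult.
        if result.test_input.is_slow:
            self.slow_tests.add(result.test_name)
```

With this shape, code like LayoutTestRunner never needs a callback to decide slowness; it simply inspects the result it already holds.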

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testscontrollerslayout_test_runnerpy">trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testscontrollerslayout_test_runner_unittestpy">trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testscontrollersmanagerpy">trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testscontrollerssingle_test_runnerpy">trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_inputpy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_resultspy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_results_unittestpy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_run_resultspy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_run_results_unittestpy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsrun_webkit_tests_integrationtestpy">trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog    2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/ChangeLog       2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -1,3 +1,57 @@
</span><ins>+2021-05-20  Sam Sneddon  <gsnedders@apple.com>
+
+        Store whether a test is slow on TestInput
+        https://bugs.webkit.org/show_bug.cgi?id=224563
+
+        Reviewed by Jonathan Bedard.
+
+        Notably, this also makes a TestResult store a TestInput rather than a
+        test_name string. With that in place, we no longer need to punch through
+        multiple layers to find out whether a test is slow. Note that replacing the
+        test_name with a Test or TestInput is part of removing the 1:1 relationship
+        between files and tests.
+
+        With this done, we don't have to pass around a test_is_slow_fn, as we can directly
+        look at the result to determine whether or not it is slow.
+
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
+        (LayoutTestRunner.__init__): Remove test_is_slow_fn argument
+        (LayoutTestRunner._mark_interrupted_tests_as_skipped): Remove test_is_slow argument
+        (LayoutTestRunner._update_summary_with_result): Remove test_is_slow argument
+        (Worker._run_test_in_another_thread): Remove test_is_slow argument
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py:
+        (LayoutTestRunnerTests._runner): Remove test_is_slow_fn argument
+        (LayoutTestRunnerTests.test_update_summary_with_result): TestResult arg rename
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager): Improve docstring
+        (Manager.__init__): Tidy up reading tests-options.json
+        (Manager._test_input_for_file): Set is_slow
+        (Manager.run): Remove test_is_slow_fn argument
+        (Manager._look_for_new_crash_logs): Remove test_is_slow_fn/test_is_slow argument
+        * Scripts/webkitpy/layout_tests/controllers/single_test_runner.py:
+        (SingleTestRunner.__init__): Store TestInput object
+        (SingleTestRunner._test_name): Replacement getter
+        (SingleTestRunner._should_run_pixel_test): Replacement getter
+        (SingleTestRunner._should_dump_jsconsolelog_in_stderr): Replacement getter
+        (SingleTestRunner._reference_files): Replacement getter
+        (SingleTestRunner._timeout): Replacement getter
+        (SingleTestRunner._compare_output): Pass TestInput to TestResult
+        (SingleTestRunner._run_reftest): Pass TestInput to TestResult
+        (SingleTestRunner._compare_output_with_reference): Pass TestInput to TestResult
+        * Scripts/webkitpy/layout_tests/models/test_input.py:
+        (TestInput): Add is_slow boolean
+        * Scripts/webkitpy/layout_tests/models/test_results.py:
+        (TestResult.__init__): Rename test_name -> test_input, construct TestInput if we must
+        (TestResult.test_name): Replacement getter
+        * Scripts/webkitpy/layout_tests/models/test_results_unittest.py:
+        (TestResultsTest.test_pickle_roundtrip): TestResult arg rename
+        * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+        (TestRunResults.add): Remove test_is_slow argument, look at TestResult
+        * Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
+        (summarized_results): Remove test_is_slow argument
+        * Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
+        (RunTest.test_tests_options): Add a test that tests-options.json works
+
</ins><span class="cx"> 2021-05-19  Devin Rousso  <drousso@apple.com>
</span><span class="cx"> 
</span><span class="cx">         Add a way to create `"wheel"` events from gesture/touch events
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testscontrollerslayout_test_runnerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py      2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py 2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -63,12 +63,11 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class LayoutTestRunner(object):
</span><del>-    def __init__(self, options, port, printer, results_directory, test_is_slow_fn, needs_http=False, needs_websockets=False, needs_web_platform_test_server=False):
</del><ins>+    def __init__(self, options, port, printer, results_directory, needs_http=False, needs_websockets=False, needs_web_platform_test_server=False):
</ins><span class="cx">         self._options = options
</span><span class="cx">         self._port = port
</span><span class="cx">         self._printer = printer
</span><span class="cx">         self._results_directory = results_directory
</span><del>-        self._test_is_slow = test_is_slow_fn
</del><span class="cx">         self._needs_http = needs_http
</span><span class="cx">         self._needs_websockets = needs_websockets
</span><span class="cx">         self._needs_web_platform_test_server = needs_web_platform_test_server
</span><span class="lines">@@ -146,11 +145,11 @@
</span><span class="cx">     def _mark_interrupted_tests_as_skipped(self, run_results):
</span><span class="cx">         for test_input in self._test_inputs:
</span><span class="cx">             if test_input.test_name not in run_results.results_by_name:
</span><del>-                result = test_results.TestResult(test_input.test_name, [test_failures.FailureEarlyExit()])
</del><ins>+                result = test_results.TestResult(test_input, [test_failures.FailureEarlyExit()])
</ins><span class="cx">                 # FIXME: We probably need to loop here if there are multiple iterations.
</span><span class="cx">                 # FIXME: Also, these results are really neither expected nor unexpected. We probably
</span><span class="cx">                 # need a third type of result.
</span><del>-                run_results.add(result, expected=False, test_is_slow=self._test_is_slow(test_input.test_name))
</del><ins>+                run_results.add(result, expected=False)
</ins><span class="cx"> 
</span><span class="cx">     def _interrupt_if_at_failure_limits(self, run_results):
</span><span class="cx">         # Note: The messages in this method are constructed to match old-run-webkit-tests
</span><span class="lines">@@ -183,7 +182,7 @@
</span><span class="cx">             exp_str = self._expectations.model().expectations_to_string(expectations)
</span><span class="cx">             got_str = self._expectations.model().expectation_to_string(result.type)
</span><span class="cx"> 
</span><del>-        run_results.add(result, expected, self._test_is_slow(result.test_name))
</del><ins>+        run_results.add(result, expected)
</ins><span class="cx"> 
</span><span class="cx">         self._printer.print_finished_test(result, expected, exp_str, got_str)
</span><span class="cx"> 
</span><span class="lines">@@ -436,7 +435,7 @@
</span><span class="cx">         driver.stop()
</span><span class="cx"> 
</span><span class="cx">         if not result:
</span><del>-            result = test_results.TestResult(test_input.test_name, failures=failures, test_run_time=0)
</del><ins>+            result = test_results.TestResult(test_input, failures=failures, test_run_time=0)
</ins><span class="cx">         return result
</span><span class="cx"> 
</span><span class="cx">     def _run_test_in_this_thread(self, test_input, stop_when_done):
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testscontrollerslayout_test_runner_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py     2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner_unittest.py        2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -79,7 +79,7 @@
</span><span class="cx"> 
</span><span class="cx">         host = MockHost()
</span><span class="cx">         port = port or host.port_factory.get(options.platform, options=options)
</span><del>-        return LayoutTestRunner(options, port, FakePrinter(), port.results_directory(), lambda test_name: False)
</del><ins>+        return LayoutTestRunner(options, port, FakePrinter(), port.results_directory())
</ins><span class="cx"> 
</span><span class="cx">     def _run_tests(self, runner, tests):
</span><span class="cx">         test_inputs = [TestInput(Test(test), 6000) for test in tests]
</span><span class="lines">@@ -131,19 +131,19 @@
</span><span class="cx">         runner._expectations = expectations
</span><span class="cx"> 
</span><span class="cx">         run_results = TestRunResults(expectations, 1)
</span><del>-        result = TestResult(test_name=test, failures=[test_failures.FailureReftestMismatchDidNotOccur()], reftest_type=['!='])
</del><ins>+        result = TestResult(test, failures=[test_failures.FailureReftestMismatchDidNotOccur()], reftest_type=['!='])
</ins><span class="cx">         runner._update_summary_with_result(run_results, result)
</span><span class="cx">         self.assertEqual(1, run_results.expected)
</span><span class="cx">         self.assertEqual(0, run_results.unexpected)
</span><span class="cx"> 
</span><span class="cx">         run_results = TestRunResults(expectations, 1)
</span><del>-        result = TestResult(test_name=test, failures=[], reftest_type=['=='])
</del><ins>+        result = TestResult(test, failures=[], reftest_type=['=='])
</ins><span class="cx">         runner._update_summary_with_result(run_results, result)
</span><span class="cx">         self.assertEqual(0, run_results.expected)
</span><span class="cx">         self.assertEqual(1, run_results.unexpected)
</span><span class="cx"> 
</span><span class="cx">         run_results = TestRunResults(expectations, 1)
</span><del>-        result = TestResult(test_name=leak_test, failures=[])
</del><ins>+        result = TestResult(leak_test, failures=[])
</ins><span class="cx">         runner._update_summary_with_result(run_results, result)
</span><span class="cx">         self.assertEqual(1, run_results.expected)
</span><span class="cx">         self.assertEqual(0, run_results.unexpected)
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testscontrollersmanagerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py 2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py    2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -65,9 +65,14 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class Manager(object):
</span><del>-    """A class for managing running a series of tests on a series of layout
-    test files."""
</del><ins>+    """Test execution manager
</ins><span class="cx"> 
</span><ins>+    This class has the main entry points for run-webkit-tests; the ..run_webkit_tests module almost
+    exclusively just handles CLI options. It orchestrates collecting the tests (through
+    LayoutTestFinder), running them (LayoutTestRunner), and then displaying the results
+    (TestResultWriter/Printer).
+    """
+
</ins><span class="cx">     def __init__(self, port, options, printer):
</span><span class="cx">         """Initialize test runner data structures.
</span><span class="cx"> 
</span><span class="lines">@@ -77,17 +82,23 @@
</span><span class="cx">           printer: a Printer object to record updates to.
</span><span class="cx">         """
</span><span class="cx">         self._port = port
</span><del>-        self._filesystem = port.host.filesystem
</del><ins>+        fs = port.host.filesystem
+        self._filesystem = fs
</ins><span class="cx">         self._options = options
</span><span class="cx">         self._printer = printer
</span><span class="cx">         self._expectations = OrderedDict()
</span><del>-        self.LAYOUT_TESTS_DIRECTORY = 'LayoutTests'
</del><span class="cx">         self._results_directory = self._port.results_directory()
</span><span class="cx">         self._finder = LayoutTestFinder(self._port, self._options)
</span><span class="cx">         self._runner = None
</span><span class="cx"> 
</span><del>-        test_options_json_path = self._port.path_from_webkit_base(self.LAYOUT_TESTS_DIRECTORY, "tests-options.json")
-        self._tests_options = json.loads(self._filesystem.read_text_file(test_options_json_path)) if self._filesystem.exists(test_options_json_path) else {}
</del><ins>+        self._tests_options = {}
+        test_options_json_path = fs.join(self._port.layout_tests_dir(), "tests-options.json")
+        if fs.exists(test_options_json_path):
+            with fs.open_binary_file_for_reading(test_options_json_path) as fd:
+                try:
+                    self._tests_options = json.load(fd)
+                except (ValueError, IOError):
+                    pass
</ins><span class="cx"> 
</span><span class="cx">     def _collect_tests(self,
</span><span class="cx">                        paths,  # type: List[str]
</span><span class="lines">@@ -214,12 +225,13 @@
</span><span class="cx">         return tests_to_run
</span><span class="cx"> 
</span><span class="cx">     def _test_input_for_file(self, test_file, device_type):
</span><ins>+        test_is_slow = self._test_is_slow(test_file.test_path, device_type=device_type)
</ins><span class="cx">         reference_files = self._port.reference_files(
</span><span class="cx">             test_file.test_path, device_type=device_type
</span><span class="cx">         )
</span><span class="cx">         timeout = (
</span><span class="cx">             self._options.slow_time_out_ms
</span><del>-            if self._test_is_slow(test_file.test_path, device_type=device_type)
</del><ins>+            if test_is_slow
</ins><span class="cx">             else self._options.time_out_ms
</span><span class="cx">         )
</span><span class="cx">         should_dump_jsconsolelog_in_stderr = (
</span><span class="lines">@@ -243,6 +255,7 @@
</span><span class="cx">         return TestInput(
</span><span class="cx">             test_file,
</span><span class="cx">             timeout=timeout,
</span><ins>+            is_slow=test_is_slow,
</ins><span class="cx">             needs_servers=test_file.needs_any_server,
</span><span class="cx">             should_dump_jsconsolelog_in_stderr=should_dump_jsconsolelog_in_stderr,
</span><span class="cx">             reference_files=reference_files,
</span><span class="lines">@@ -353,7 +366,7 @@
</span><span class="cx">         needs_http = any(test.needs_http_server for tests in itervalues(tests_to_run_by_device) for test in tests)
</span><span class="cx">         needs_web_platform_test_server = any(test.needs_wpt_server for tests in itervalues(tests_to_run_by_device) for test in tests)
</span><span class="cx">         needs_websockets = any(test.needs_websocket_server for tests in itervalues(tests_to_run_by_device) for test in tests)
</span><del>-        self._runner = LayoutTestRunner(self._options, self._port, self._printer, self._results_directory, self._test_is_slow,
</del><ins>+        self._runner = LayoutTestRunner(self._options, self._port, self._printer, self._results_directory,
</ins><span class="cx">                                         needs_http=needs_http, needs_web_platform_test_server=needs_web_platform_test_server, needs_websockets=needs_websockets)
</span><span class="cx"> 
</span><span class="cx">         initial_results = None
</span><span class="lines">@@ -365,7 +378,6 @@
</span><span class="cx">         uploads = []
</span><span class="cx"> 
</span><span class="cx">         for device_type in device_type_list:
</span><del>-            self._runner._test_is_slow = lambda test_file: self._test_is_slow(test_file, device_type=device_type)
</del><span class="cx">             self._options.child_processes = min(self._port.max_child_processes(device_type=device_type), int(child_processes_option_value or self._port.default_child_processes(device_type=device_type)))
</span><span class="cx"> 
</span><span class="cx">             _log.info('')
</span><span class="lines">@@ -399,7 +411,7 @@
</span><span class="cx">             for skipped_test in set(aggregate_tests_to_skip):
</span><span class="cx">                 skipped_result = test_results.TestResult(skipped_test.test_path)
</span><span class="cx">                 skipped_result.type = test_expectations.SKIP
</span><del>-                skipped_results.add(skipped_result, expected=True, test_is_slow=self._test_is_slow(skipped_test.test_path, device_type=device_type))
</del><ins>+                skipped_results.add(skipped_result, expected=True)
</ins><span class="cx">             temp_initial_results = temp_initial_results.merge(skipped_results)
</span><span class="cx"> 
</span><span class="cx">             if self._options.report_urls:
</span><span class="lines">@@ -601,7 +613,7 @@
</span><span class="cx">                     result = test_results.TestResult(test)
</span><span class="cx">                     result.type = test_expectations.CRASH
</span><span class="cx">                     result.is_other_crash = True
</span><del>-                    run_results.add(result, expected=False, test_is_slow=False)
</del><ins>+                    run_results.add(result, expected=False)
</ins><span class="cx">                     _log.debug("Adding results for other crash: " + str(test))
</span><span class="cx"> 
</span><span class="cx">     def _clobber_old_results(self):
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testscontrollerssingle_test_runnerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py      2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/single_test_runner.py 2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -56,12 +56,8 @@
</span><span class="cx">         self._results_directory = results_directory
</span><span class="cx">         self._driver = driver
</span><span class="cx">         self._worker_name = worker_name
</span><del>-        self._test_name = test_input.test_name
-        self._should_run_pixel_test = test_input.should_run_pixel_test
-        self._should_dump_jsconsolelog_in_stderr = test_input.should_dump_jsconsolelog_in_stderr
-        self._reference_files = test_input.reference_files
</del><ins>+        self._test_input = test_input
</ins><span class="cx">         self._stop_when_done = stop_when_done
</span><del>-        self._timeout = test_input.timeout
</del><span class="cx"> 
</span><span class="cx">         if self._reference_files:
</span><span class="cx">             # Detect and report a test which has a wrong combination of expectation files.
</span><span class="lines">@@ -73,6 +69,26 @@
</span><span class="cx">                 if self._filesystem.exists(expected_filename):
</span><span class="cx">                     _log.error('%s is a reftest, but has an unused expectation file. Please remove %s.', self._test_name, expected_filename)
</span><span class="cx"> 
</span><ins>+    @property
+    def _test_name(self):
+        return self._test_input.test_name
+
+    @property
+    def _should_run_pixel_test(self):
+        return self._test_input.should_run_pixel_test
+
+    @property
+    def _should_dump_jsconsolelog_in_stderr(self):
+        return self._test_input.should_dump_jsconsolelog_in_stderr
+
+    @property
+    def _reference_files(self):
+        return self._test_input.reference_files
+
+    @property
+    def _timeout(self):
+        return self._test_input.timeout
+
</ins><span class="cx">     def _expected_driver_output(self):
</span><span class="cx">         return DriverOutput(self._port.expected_text(self._test_name, device_type=self._driver.host.device_type),
</span><span class="cx">                                  self._port.expected_image(self._test_name, device_type=self._driver.host.device_type),
</span><span class="lines">@@ -96,7 +112,7 @@
</span><span class="cx">         if self._reference_files:
</span><span class="cx">             if self._port.get_option('no_ref_tests') or self._options.reset_results:
</span><span class="cx">                 reftest_type = set([reference_file[0] for reference_file in self._reference_files])
</span><del>-                result = TestResult(self._test_name, reftest_type=reftest_type)
</del><ins>+                result = TestResult(self._test_input, reftest_type=reftest_type)
</ins><span class="cx">                 result.type = test_expectations.SKIP
</span><span class="cx">                 return result
</span><span class="cx">             return self._run_reftest()
</span><span class="lines">@@ -131,7 +147,7 @@
</span><span class="cx">         # FIXME: It the test crashed or timed out, it might be better to avoid
</span><span class="cx">         # to write new baselines.
</span><span class="cx">         self._overwrite_baselines(driver_output)
</span><del>-        return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</del><ins>+        return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</ins><span class="cx"> 
</span><span class="cx">     _render_tree_dump_pattern = re.compile(r"^layer at \(\d+,\d+\) size \d+x\d+\n")
</span><span class="cx"> 
</span><span class="lines">@@ -223,13 +239,13 @@
</span><span class="cx">         if driver_output.crash:
</span><span class="cx">             # Don't continue any more if we already have a crash.
</span><span class="cx">             # In case of timeouts, we continue since we still want to see the text and image output.
</span><del>-            return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</del><ins>+            return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</ins><span class="cx"> 
</span><span class="cx">         failures.extend(self._compare_text(expected_driver_output.text, driver_output.text))
</span><span class="cx">         failures.extend(self._compare_audio(expected_driver_output.audio, driver_output.audio))
</span><span class="cx">         if self._should_run_pixel_test:
</span><span class="cx">             failures.extend(self._compare_image(expected_driver_output, driver_output))
</span><del>-        return TestResult(self._test_name, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</del><ins>+        return TestResult(self._test_input, failures, driver_output.test_time, driver_output.has_stderr(), pid=driver_output.pid)
</ins><span class="cx"> 
</span><span class="cx">     def _compare_text(self, expected_text, actual_text):
</span><span class="cx">         failures = []
</span><span class="lines">@@ -317,7 +333,7 @@
</span><span class="cx">         assert(reference_output)
</span><span class="cx">         test_result_writer.write_test_result(self._filesystem, self._port, self._results_directory, self._test_name, test_output, reference_output, test_result.failures)
</span><span class="cx">         reftest_type = set([reference_file[0] for reference_file in self._reference_files])
</span><del>-        return TestResult(self._test_name, test_result.failures, total_test_time + test_result.test_run_time, test_result.has_stderr, reftest_type=reftest_type, pid=test_result.pid, references=reference_test_names)
</del><ins>+        return TestResult(self._test_input, test_result.failures, total_test_time + test_result.test_run_time, test_result.has_stderr, reftest_type=reftest_type, pid=test_result.pid, references=reference_test_names)
</ins><span class="cx"> 
</span><span class="cx">     def _compare_output_with_reference(self, reference_driver_output, actual_driver_output, reference_filename, mismatch):
</span><span class="cx">         total_test_time = reference_driver_output.test_time + actual_driver_output.test_time
</span><span class="lines">@@ -326,10 +342,10 @@
</span><span class="cx">         failures.extend(self._handle_error(actual_driver_output))
</span><span class="cx">         if failures:
</span><span class="cx">             # Don't continue any more if we already have crash or timeout.
</span><del>-            return TestResult(self._test_name, failures, total_test_time, has_stderr)
</del><ins>+            return TestResult(self._test_input, failures, total_test_time, has_stderr)
</ins><span class="cx">         failures.extend(self._handle_error(reference_driver_output, reference_filename=reference_filename))
</span><span class="cx">         if failures:
</span><del>-            return TestResult(self._test_name, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
</del><ins>+            return TestResult(self._test_input, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
</ins><span class="cx"> 
</span><span class="cx">         if not reference_driver_output.image_hash and not actual_driver_output.image_hash:
</span><span class="cx">             failures.append(test_failures.FailureReftestNoImagesGenerated(reference_filename))
</span><span class="lines">@@ -348,4 +364,4 @@
</span><span class="cx">             elif diff_result[0]:
</span><span class="cx">                 failures.append(test_failures.FailureReftestMismatch(reference_filename))
</span><span class="cx"> 
</span><del>-        return TestResult(self._test_name, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
</del><ins>+        return TestResult(self._test_input, failures, total_test_time, has_stderr, pid=actual_driver_output.pid)
</ins></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_inputpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py   2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_input.py      2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -43,6 +43,7 @@
</span><span class="cx">     """
</span><span class="cx">     test = attr.ib(type=Test)
</span><span class="cx">     timeout = attr.ib(default=None)  # type: Union[None, int, str]
</span><ins>+    is_slow = attr.ib(default=None)  # type: Optional[bool]
</ins><span class="cx">     needs_servers = attr.ib(default=None)  # type: Optional[bool]
</span><span class="cx">     should_dump_jsconsolelog_in_stderr = attr.ib(default=None)  # type: Optional[bool]
</span><span class="cx">     reference_files = attr.ib(default=None)  # type: Optional[List[Tuple[str, str]]]
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_resultspy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py 2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results.py    2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -27,13 +27,21 @@
</span><span class="cx"> # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</span><span class="cx"> 
</span><span class="cx"> from webkitpy.layout_tests.models import test_failures
</span><ins>+from webkitpy.layout_tests.models.test import Test
+from webkitpy.layout_tests.models.test_input import TestInput
</ins><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class TestResult(object):
</span><span class="cx">     """Data object containing the results of a single test."""
</span><span class="cx"> 
</span><del>-    def __init__(self, test_name, failures=None, test_run_time=None, has_stderr=False, reftest_type=None, pid=None, references=None):
-        self.test_name = test_name
</del><ins>+    def __init__(self, test_input, failures=None, test_run_time=None, has_stderr=False, reftest_type=None, pid=None, references=None):
+        # this takes a TestInput, and not a Test, as running the same Test with
+        # different input options can result in differing results
+        if not isinstance(test_input, TestInput):
+            # FIXME: figure out something better
+            # Changing all callers will be hard but probably worth it?
+            test_input = TestInput(Test(test_input))
+        self.test_input = test_input
</ins><span class="cx">         self.failures = failures or []
</span><span class="cx">         self.test_run_time = test_run_time or 0  # The time taken to execute the test itself.
</span><span class="cx">         self.has_stderr = has_stderr
</span><span class="lines">@@ -51,6 +59,10 @@
</span><span class="cx">         self.test_number = None
</span><span class="cx">         self.is_other_crash = False
</span><span class="cx"> 
</span><ins>+    @property
+    def test_name(self):
+        return self.test_input.test_name
+
</ins><span class="cx">     def __eq__(self, other):
</span><span class="cx">         return (self.test_name == other.test_name and
</span><span class="cx">                 self.failures == other.failures and
</span></span></pre></div>
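The constructor change above can be sketched in isolation. This is a simplified, standalone model (the class names mirror the diff, but these stripped-down classes are illustrative, not the real webkitpy modules):

```python
class Test(object):
    """Minimal stand-in for webkitpy's Test model object."""
    def __init__(self, test_name):
        self.test_name = test_name


class TestInput(object):
    """Minimal stand-in: a Test plus per-run options such as is_slow."""
    def __init__(self, test, is_slow=None):
        self.test = test
        self.is_slow = is_slow

    @property
    def test_name(self):
        return self.test.test_name


class TestResult(object):
    def __init__(self, test_input, failures=None):
        # Accept a bare test name for backward compatibility and wrap it
        # in a TestInput, as the patched constructor does.
        if not isinstance(test_input, TestInput):
            test_input = TestInput(Test(test_input))
        self.test_input = test_input
        self.failures = failures or []

    @property
    def test_name(self):
        # Callers that read result.test_name keep working unchanged.
        return self.test_input.test_name


# Both call styles yield a result with a usable test_name.
assert TestResult('foo').test_name == 'foo'
assert TestResult(TestInput(Test('foo'), is_slow=True)).test_name == 'foo'
```

The shim is what lets the unit-test change above drop the `test_name=` keyword while still passing a plain string.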
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_results_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py        2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_results_unittest.py   2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -44,7 +44,7 @@
</span><span class="cx">         self.assertEqual(result.test_run_time, 0)
</span><span class="cx"> 
</span><span class="cx">     def test_pickle_roundtrip(self):
</span><del>-        result = TestResult(test_name='foo', failures=[], test_run_time=1.1)
</del><ins>+        result = TestResult('foo', failures=[], test_run_time=1.1)
</ins><span class="cx">         s = pickle.dumps(result)  # multiprocessing uses the default protocol version
</span><span class="cx">         new_result = pickle.loads(s)
</span><span class="cx">         self.assertIsInstance(new_result, TestResult)
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_run_resultspy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py     2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py        2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -67,7 +67,7 @@
</span><span class="cx">         self.interrupted = False
</span><span class="cx">         self.keyboard_interrupted = False
</span><span class="cx"> 
</span><del>-    def add(self, test_result, expected, test_is_slow):
</del><ins>+    def add(self, test_result, expected):
</ins><span class="cx">         self.tests_by_expectation[test_result.type].add(test_result.test_name)
</span><span class="cx">         self.results_by_name[test_result.test_name] = test_result
</span><span class="cx">         if test_result.is_other_crash:
</span><span class="lines">@@ -91,7 +91,7 @@
</span><span class="cx">                 self.unexpected_crashes += 1
</span><span class="cx">             elif test_result.type == test_expectations.TIMEOUT:
</span><span class="cx">                 self.unexpected_timeouts += 1
</span><del>-        if test_is_slow:
</del><ins>+        if test_result.test_input.is_slow:
</ins><span class="cx">             self.slow_tests.add(test_result.test_name)
</span><span class="cx"> 
</span><span class="cx">     def change_result_to_failure(self, existing_result, new_result, existing_expected, new_expected):
</span></span></pre></div>
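With `is_slow` carried on the TestInput, the aggregator no longer needs a separate flag threaded through every `add()` call site. A hedged sketch of the new shape (only the slow-test bookkeeping is modeled; the real RunResults tracks far more state):

```python
from types import SimpleNamespace


class RunResults(object):
    """Toy aggregator showing only the slow-test bookkeeping."""
    def __init__(self):
        self.slow_tests = set()

    def add(self, test_result, expected):
        # The slowness flag now travels with the result's TestInput
        # instead of being passed as a third argument.
        if test_result.test_input.is_slow:
            self.slow_tests.add(test_result.test_name)


# Stand-in results: only the attributes add() reads are modeled.
slow = SimpleNamespace(test_name='failures/unexpected/timeout.html',
                       test_input=SimpleNamespace(is_slow=True))
fast = SimpleNamespace(test_name='passes/text.html',
                       test_input=SimpleNamespace(is_slow=None))

results = RunResults()
results.add(slow, expected=False)
results.add(fast, expected=True)
assert results.slow_tests == {'failures/unexpected/timeout.html'}
```

This is why the unit-test file below can delete the `test_is_slow` local and the trailing argument from every `add()` call.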
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_run_results_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py    2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py       2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -60,58 +60,56 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> def summarized_results(port, expected, passing, flaky, include_passes=False):
</span><del>-    test_is_slow = False
-
</del><span class="cx">     initial_results = run_results(port)
</span><span class="cx">     if expected:
</span><del>-        initial_results.add(get_result('passes/text.html', test_expectations.PASS), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html', test_expectations.CRASH), expected, test_is_slow)
</del><ins>+        initial_results.add(get_result('passes/text.html', test_expectations.PASS), expected)
+        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected)
+        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/crash.html', test_expectations.CRASH), expected)
</ins><span class="cx"> 
</span><span class="cx">         if port._options.pixel_tests:
</span><del>-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected)
</ins><span class="cx">         else:
</span><del>-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.PASS), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.PASS), expected)
</ins><span class="cx"> 
</span><span class="cx">         if port._options.world_leaks:
</span><del>-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.LEAK), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.LEAK), expected)
</ins><span class="cx">         else:
</span><del>-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected)
</ins><span class="cx"> 
</span><span class="cx">     elif passing:
</span><del>-        initial_results.add(get_result('passes/text.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html'), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html'), expected, test_is_slow)
</del><ins>+        initial_results.add(get_result('passes/text.html'), expected)
+        initial_results.add(get_result('failures/expected/audio.html'), expected)
+        initial_results.add(get_result('failures/expected/timeout.html'), expected)
+        initial_results.add(get_result('failures/expected/crash.html'), expected)
</ins><span class="cx"> 
</span><span class="cx">         if port._options.pixel_tests:
</span><del>-            initial_results.add(get_result('failures/expected/pixel-fail.html'), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/pixel-fail.html'), expected)
</ins><span class="cx">         else:
</span><del>-            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.IMAGE), expected)
</ins><span class="cx"> 
</span><span class="cx">         if port._options.world_leaks:
</span><del>-            initial_results.add(get_result('failures/expected/leak.html'), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/leak.html'), expected)
</ins><span class="cx">         else:
</span><del>-            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected, test_is_slow)
</del><ins>+            initial_results.add(get_result('failures/expected/leak.html', test_expectations.PASS), expected)
</ins><span class="cx">     else:
</span><del>-        initial_results.add(get_result('passes/text.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.CRASH), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.TIMEOUT), expected, test_is_slow)
-        initial_results.add(get_result('failures/expected/leak.html', test_expectations.CRASH), expected, test_is_slow)
</del><ins>+        initial_results.add(get_result('passes/text.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected)
+        initial_results.add(get_result('failures/expected/timeout.html', test_expectations.CRASH), expected)
+        initial_results.add(get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/pixel-fail.html', test_expectations.TIMEOUT), expected)
+        initial_results.add(get_result('failures/expected/leak.html', test_expectations.CRASH), expected)
</ins><span class="cx"> 
</span><span class="cx">         # we only list hang.html here, since normally this is WontFix
</span><del>-        initial_results.add(get_result('failures/expected/hang.html', test_expectations.TIMEOUT), expected, test_is_slow)
</del><ins>+        initial_results.add(get_result('failures/expected/hang.html', test_expectations.TIMEOUT), expected)
</ins><span class="cx"> 
</span><span class="cx">     if flaky:
</span><span class="cx">         retry_results = run_results(port)
</span><del>-        retry_results.add(get_result('passes/text.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/timeout.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/crash.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/pixel-fail.html'), True, test_is_slow)
-        retry_results.add(get_result('failures/expected/leak.html'), True, test_is_slow)
</del><ins>+        retry_results.add(get_result('passes/text.html'), True)
+        retry_results.add(get_result('failures/expected/timeout.html'), True)
+        retry_results.add(get_result('failures/expected/crash.html'), True)
+        retry_results.add(get_result('failures/expected/pixel-fail.html'), True)
+        retry_results.add(get_result('failures/expected/leak.html'), True)
</ins><span class="cx">     else:
</span><span class="cx">         retry_results = None
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testsrun_webkit_tests_integrationtestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py (277780 => 277781)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py    2021-05-20 13:31:05 UTC (rev 277780)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py       2021-05-20 13:48:42 UTC (rev 277781)
</span><span class="lines">@@ -819,6 +819,21 @@
</span><span class="cx">         self.assertTrue(passing_run(['--additional-expectations', '/tmp/overrides.txt', 'failures/unexpected/mismatch.html'],
</span><span class="cx">                                     tests_included=True, host=host))
</span><span class="cx"> 
</span><ins>+    def test_tests_options(self):
+        host = MockHost()
+        host.filesystem.write_text_file(
+            '/test.checkout/LayoutTests/tests-options.json',
+            '{"failures/unexpected/timeout.html":["slow"]}'
+        )
+
+        details, _, _ = logging_run(['failures/expected/timeout.html',
+                                     'failures/unexpected/timeout.html'],
+                                    host=host)
+        self.assertEquals(details.initial_results.slow_tests,
+                          {'failures/unexpected/timeout.html'})
+        self.assertEquals(details.retry_results.slow_tests,
+                          {'failures/unexpected/timeout.html'})
+
</ins><span class="cx">     def test_no_http_and_force(self):
</span><span class="cx">         # See test_run_force, using --force raises an exception.
</span><span class="cx">         # FIXME: We would like to check the warnings generated.
</span></span></pre>
</div>
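The new integration test depends on the `tests-options.json` side file. A minimal sketch of how such a file can be read to flag slow tests (the real lookup lives in webkitpy's port/manager code; this standalone parse is illustrative):

```python
import json

# Format used by the test above: test path -> list of option strings.
tests_options_json = '{"failures/unexpected/timeout.html": ["slow"]}'


def is_slow_test(test_name, options_text):
    """Return True if the tests-options.json text marks test_name "slow"."""
    options = json.loads(options_text)
    return 'slow' in options.get(test_name, [])


assert is_slow_test('failures/unexpected/timeout.html', tests_options_json)
assert not is_slow_test('failures/expected/timeout.html', tests_options_json)
```

Tests absent from the file simply get no options, which matches the test's expectation that only `failures/unexpected/timeout.html` lands in `slow_tests`.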
</div>

</body>
</html>