<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[205080] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/205080">205080</a></dd>
<dt>Author</dt> <dd>simon.fraser@apple.com</dd>
<dt>Date</dt> <dd>2016-08-27 11:07:52 -0700 (Sat, 27 Aug 2016)</dd>
</dl>

<h3>Log Message</h3>
<pre>Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
https://bugs.webkit.org/show_bug.cgi?id=161217

Reviewed by Ryosuke Niwa.

Tools:

&quot;run-webkit-tests --print-expectations&quot; runs the same test-collection logic as an
actual test run, but instead dumps out the lists of tests that would be run and skipped
and, for each, the entry in TestExpectations that determines the expected outcome of the test.

This is an improved version of webkit-patch print-expectations.

See bug for sample output.
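
A hypothetical invocation (the test path here is illustrative):

    run-webkit-tests --print-expectations fast/viewport

For each listed test this prints the test name, its expected behavior, and the
TestExpectations file and line number that produced that expectation.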

* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager._print_expectations_for_subset): Print out the list of tests and expected
outcome for some subset of tests.
(Manager.print_expectations): Do the same splitting by device class that running tests
does, and for each subset of tests, call _print_expectations_for_subset.
* Scripts/webkitpy/layout_tests/models/test_expectations.py:
(TestExpectationParser.expectation_for_skipped_test): Set the flag
expectation_line.not_applicable_to_current_platform.
(TestExpectationLine.__init__): Initialize not_applicable_to_current_platform to False.
(TestExpectationLine.expected_behavior): line.expectations is ['PASS'] by default,
even for skipped tests. This property returns a list suitable for display, taking the
skipped modifier into account.
(TestExpectationLine.create_passing_expectation): expectations is normally a list, not a set.
(TestExpectations.readable_filename_and_line_number): Return something printable for
lines with and without filenames.
* Scripts/webkitpy/layout_tests/run_webkit_tests.py:
(main): Handle options.print_expectations
(parse_args): Add support for --print-expectations
(_print_expectations):
* Scripts/webkitpy/port/ios.py:
(IOSSimulatorPort.default_child_processes): Make this a debug log.

LayoutTests:

Explicitly skip fast/viewport

* platform/mac/TestExpectations:</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkLayoutTestsChangeLog">trunk/LayoutTests/ChangeLog</a></li>
<li><a href="#trunkLayoutTestsplatformmacTestExpectations">trunk/LayoutTests/platform/mac/TestExpectations</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testscontrollersmanagerpy">trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsmodelstest_expectationspy">trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py</a></li>
<li><a href="#trunkToolsScriptswebkitpylayout_testsrun_webkit_testspy">trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyportiospy">trunk/Tools/Scripts/webkitpy/port/ios.py</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkLayoutTestsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/LayoutTests/ChangeLog (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/LayoutTests/ChangeLog        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/LayoutTests/ChangeLog        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -1,3 +1,14 @@
</span><ins>+2016-08-27  Simon Fraser  &lt;simon.fraser@apple.com&gt;
+
+        Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
+        https://bugs.webkit.org/show_bug.cgi?id=161217
+
+        Reviewed by Ryosuke Niwa.
+        
+        Explicitly skip fast/viewport
+
+        * platform/mac/TestExpectations:
+
</ins><span class="cx"> 2016-08-27  Youenn Fablet  &lt;youenn@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         html/dom/interfaces.html is flaky due to WebSocket test
</span></span></pre></div>
<a id="trunkLayoutTestsplatformmacTestExpectations"></a>
<div class="modfile"><h4>Modified: trunk/LayoutTests/platform/mac/TestExpectations (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/LayoutTests/platform/mac/TestExpectations        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/LayoutTests/platform/mac/TestExpectations        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -223,7 +223,7 @@
</span><span class="cx"> webkit.org/b/43960 scrollbars/custom-scrollbar-with-incomplete-style.html
</span><span class="cx"> 
</span><span class="cx"> # viewport meta tag support
</span><del>-fast/viewport
</del><ins>+fast/viewport [ Skip ]
</ins><span class="cx"> 
</span><span class="cx"> webkit.org/b/116640 plugins/plugin-initiate-popup-window.html
</span><span class="cx"> 
</span></span></pre></div>
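<p>The hunk above replaces a bare directory entry with an explicit <code>[ Skip ]</code> modifier. For context, a sketch of common TestExpectations line forms; the bug number and the second test path below are illustrative, not taken from this change:</p>
<pre>
# Skip everything under a directory, stating the intent explicitly.
fast/viewport [ Skip ]

# Tie a single test to a tracking bug and an expected (here: flaky) outcome.
webkit.org/b/12345 fast/example/flaky-test.html [ Pass Failure ]
</pre>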
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/ChangeLog        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -1,3 +1,40 @@
</span><ins>+2016-08-27  Simon Fraser  &lt;simon.fraser@apple.com&gt;
+
+        Add run-webkit-tests --print-expectations to show expectations for all or a subset of tests
+        https://bugs.webkit.org/show_bug.cgi?id=161217
+
+        Reviewed by Ryosuke Niwa.
+
+        &quot;run-webkit-tests --print-expectations&quot; runs the same logic as running the tests, but
+        dumps out the lists of tests that would be run and skipped, and, for each, the entry
+        in TestExpectations that determines the expected outcome of the test.
+
+        This is an improved version of webkit-patch print-expectations.
+
+        See bug for sample output.
+
+        * Scripts/webkitpy/layout_tests/controllers/manager.py:
+        (Manager._print_expectations_for_subset): Print out the list of tests and expected
+        outcome for some subset of tests.
+        (Manager.print_expectations): Do the same splitting by device class that running tests
+        does, and for each subset of tests, call _print_expectations_for_subset.
+        * Scripts/webkitpy/layout_tests/models/test_expectations.py:
+        (TestExpectationParser.expectation_for_skipped_test): Set the flag
+        expectation_line.not_applicable_to_current_platform
+        (TestExpectationLine.__init__): Init not_applicable_to_current_platform to False
+        (TestExpectationLine.expected_behavior): line.expectation is ['PASS'] by default,
+        even for skipped tests. This function returns a list relevant for display, taking the skipped
+        modifier into account.
+        (TestExpectationLine.create_passing_expectation): expectations is normally a list, not a set.
+        (TestExpectations.readable_filename_and_line_number): Return something printable for 
+        lines with and without filenames
+        * Scripts/webkitpy/layout_tests/run_webkit_tests.py:
+        (main): Handle options.print_expectations
+        (parse_args): Add support for --print-expectations
+        (_print_expectations):
+        * Scripts/webkitpy/port/ios.py:
+        (IOSSimulatorPort.default_child_processes): Make this a debug log.
+
</ins><span class="cx"> 2016-08-26  Dan Bernstein  &lt;mitz@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         Keep trying to fix the build after r205057.
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpylayout_testscontrollersmanagerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/controllers/manager.py        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -524,3 +524,60 @@
</span><span class="cx">         for name, value in stats.iteritems():
</span><span class="cx">             json_results_generator.add_path_to_trie(name, value, stats_trie)
</span><span class="cx">         return stats_trie
</span><ins>+
+    def _print_expectation_line_for_test(self, format_string, test):
+        line = self._expectations.model().get_expectation_line(test)
+        print format_string.format(test, line.expected_behavior, self._expectations.readable_filename_and_line_number(line), line.original_string or '')
+    
+    def _print_expectations_for_subset(self, device_class, test_col_width, tests_to_run, tests_to_skip={}):
+        format_string = '{{:{width}}} {{}} {{}} {{}}'.format(width=test_col_width)
+        if tests_to_skip:
+            print ''
+            print 'Tests to skip ({})'.format(len(tests_to_skip))
+            for test in sorted(tests_to_skip):
+                self._print_expectation_line_for_test(format_string, test)
+
+        print ''
+        print 'Tests to run{} ({})'.format(' for ' + device_class if device_class else '', len(tests_to_run))
+        for test in sorted(tests_to_run):
+            self._print_expectation_line_for_test(format_string, test)
+
+    def print_expectations(self, args):
+        self._printer.write_update(&quot;Collecting tests ...&quot;)
+        try:
+            paths, test_names = self._collect_tests(args)
+        except IOError:
+            # This is raised if --test-list doesn't exist
+            return -1
+
+        self._printer.write_update(&quot;Parsing expectations ...&quot;)
+        self._expectations = test_expectations.TestExpectations(self._port, test_names, force_expectations_pass=self._options.force)
+        self._expectations.parse_all_expectations()
+
+        tests_to_run, tests_to_skip = self._prepare_lists(paths, test_names)
+        self._printer.print_found(len(test_names), len(tests_to_run), self._options.repeat_each, self._options.iterations)
+
+        test_col_width = len(max(tests_to_run + list(tests_to_skip), key=len)) + 1
+
+        default_device_tests = []
+
+        # Look for tests with custom device requirements.
+        custom_device_tests = defaultdict(list)
+        for test_file in tests_to_run:
+            custom_device = self._custom_device_for_test(test_file)
+            if custom_device:
+                custom_device_tests[custom_device].append(test_file)
+            else:
+                default_device_tests.append(test_file)
+
+        if custom_device_tests:
+            for device_class in custom_device_tests:
+                _log.debug('{} tests use device {}'.format(len(custom_device_tests[device_class]), device_class))
+
+        self._print_expectations_for_subset(None, test_col_width, tests_to_run, tests_to_skip)
+
+        for device_class in custom_device_tests:
+            device_tests = custom_device_tests[device_class]
+            self._print_expectations_for_subset(device_class, test_col_width, device_tests)
+
+        return 0
</ins></span></pre></div>
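<p>A standalone sketch of the two-stage <code>str.format</code> idiom used by <code>_print_expectations_for_subset</code> above: the doubled braces survive the first substitution, so only the column width is baked in on the first pass. The test name and expectation values here are illustrative:</p>
<pre>
test_col_width = 40
format_string = '{{:{width}}} {{}} {{}} {{}}'.format(width=test_col_width)
# After the first .format() call, format_string is '{:40} {} {} {}'.
line = format_string.format('fast/example/test.html', ['PASS'],
                            'platform/mac/TestExpectations:12', '')
print(line)
</pre>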
<a id="trunkToolsScriptswebkitpylayout_testsmodelstest_expectationspy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -108,6 +108,7 @@
</span><span class="cx">         expectation_line.filename = '&lt;Skipped file&gt;'
</span><span class="cx">         expectation_line.line_number = 0
</span><span class="cx">         expectation_line.expectations = [TestExpectationParser.PASS_EXPECTATION]
</span><ins>+        expectation_line.not_applicable_to_current_platform = True
</ins><span class="cx">         self._parse_line(expectation_line)
</span><span class="cx">         return expectation_line
</span><span class="cx"> 
</span><span class="lines">@@ -380,6 +381,7 @@
</span><span class="cx">         self.comment = None
</span><span class="cx">         self.matching_tests = []
</span><span class="cx">         self.warnings = []
</span><ins>+        self.not_applicable_to_current_platform = False
</ins><span class="cx"> 
</span><span class="cx">     def is_invalid(self):
</span><span class="cx">         return self.warnings and self.warnings != [TestExpectationParser.MISSING_BUG_WARNING]
</span><span class="lines">@@ -387,6 +389,21 @@
</span><span class="cx">     def is_flaky(self):
</span><span class="cx">         return len(self.parsed_expectations) &gt; 1
</span><span class="cx"> 
</span><ins>+    @property
+    def expected_behavior(self):
+        expectations = self.expectations
+        if &quot;SLOW&quot; in self.modifiers:
+            expectations += [&quot;SLOW&quot;]
+
+        if &quot;SKIP&quot; in self.modifiers:
+            expectations = [&quot;SKIP&quot;]
+        elif &quot;WONTFIX&quot; in self.modifiers:
+            expectations = [&quot;WONTFIX&quot;]
+        elif &quot;CRASH&quot; in self.modifiers:
+            expectations += [&quot;CRASH&quot;]
+
+        return expectations
+
</ins><span class="cx">     @staticmethod
</span><span class="cx">     def create_passing_expectation(test):
</span><span class="cx">         expectation_line = TestExpectationLine()
</span><span class="lines">@@ -393,7 +410,7 @@
</span><span class="cx">         expectation_line.name = test
</span><span class="cx">         expectation_line.path = test
</span><span class="cx">         expectation_line.parsed_expectations = set([PASS])
</span><del>-        expectation_line.expectations = set(['PASS'])
</del><ins>+        expectation_line.expectations = ['PASS']
</ins><span class="cx">         expectation_line.matching_tests = [test]
</span><span class="cx">         return expectation_line
</span><span class="cx"> 
</span><span class="lines">@@ -844,6 +861,15 @@
</span><span class="cx">         self._include_overrides = include_overrides
</span><span class="cx">         self._expectations_to_lint = expectations_to_lint
</span><span class="cx"> 
</span><ins>+    def readable_filename_and_line_number(self, line):
+        if line.not_applicable_to_current_platform:
+            return &quot;(skipped for this platform)&quot;
+        if not line.filename:
+            return ''
+        if line.filename.startswith(self._port.path_from_webkit_base()):
+            return '{}:{}'.format(self._port.host.filesystem.relpath(line.filename, self._port.path_from_webkit_base()), line.line_number)
+        return '{}:{}'.format(line.filename, line.line_number)
+
</ins><span class="cx">     def parse_generic_expectations(self):
</span><span class="cx">         if self._port.path_to_generic_test_expectations_file() in self._expectations_dict:
</span><span class="cx">             if self._include_generic:
</span></span></pre></div>
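<p>A self-contained sketch of the precedence implemented by the <code>expected_behavior</code> property above. The function below is a stand-in for illustration, not the real <code>TestExpectationLine</code> API; modifier names are taken from the diff:</p>
<pre>
def expected_behavior(expectations, modifiers):
    # Mirrors the property above: SLOW is additive, SKIP and WONTFIX
    # replace the list outright, and CRASH is appended otherwise.
    # Copying first keeps the caller's list unmodified.
    result = list(expectations)
    if 'SLOW' in modifiers:
        result += ['SLOW']
    if 'SKIP' in modifiers:
        result = ['SKIP']
    elif 'WONTFIX' in modifiers:
        result = ['WONTFIX']
    elif 'CRASH' in modifiers:
        result += ['CRASH']
    return result

print(expected_behavior(['PASS'], ['SKIP']))  # ['SKIP']
print(expected_behavior(['PASS'], ['SLOW']))  # ['PASS', 'SLOW']
</pre>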
<a id="trunkToolsScriptswebkitpylayout_testsrun_webkit_testspy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -73,6 +73,9 @@
</span><span class="cx">         print &gt;&gt; stderr, str(e)
</span><span class="cx">         return EXCEPTIONAL_EXIT_STATUS
</span><span class="cx"> 
</span><ins>+    if options.print_expectations:
+        return _print_expectations(port, options, args, stderr)
+
</ins><span class="cx">     try:
</span><span class="cx">         # Force all tests to use a smaller stack so that stack overflow tests can run faster.
</span><span class="cx">         stackSizeInBytes = int(1.5 * 1024 * 1024)
</span><span class="lines">@@ -299,6 +302,9 @@
</span><span class="cx">         optparse.make_option(&quot;--lint-test-files&quot;, action=&quot;store_true&quot;,
</span><span class="cx">         default=False, help=(&quot;Makes sure the test files parse for all &quot;
</span><span class="cx">                             &quot;configurations. Does not run any tests.&quot;)),
</span><ins>+        optparse.make_option(&quot;--print-expectations&quot;, action=&quot;store_true&quot;,
+        default=False, help=(&quot;Print the expected outcome for the given test, or all tests listed in TestExpectations. &quot;
+                            &quot;Does not run any tests.&quot;)),
</ins><span class="cx">     ]))
</span><span class="cx"> 
</span><span class="cx">     option_group_definitions.append((&quot;Web Platform Test Server Options&quot;, [
</span><span class="lines">@@ -338,6 +344,24 @@
</span><span class="cx">     return option_parser.parse_args(args)
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+def _print_expectations(port, options, args, logging_stream):
+    logger = logging.getLogger()
+    logger.setLevel(logging.DEBUG if options.debug_rwt_logging else logging.INFO)
+    try:
+        printer = printing.Printer(port, options, logging_stream, logger=logger)
+
+        _set_up_derived_options(port, options)
+        manager = Manager(port, options, printer)
+
+        exit_code = manager.print_expectations(args)
+        _log.debug(&quot;Printing expectations completed, Exit status: %d&quot;, exit_code)
+        return exit_code
+    except Exception as error:
+        _log.error('Error printing expectations: {}'.format(error))
+    finally:
+        printer.cleanup()
+        return -1
+
</ins><span class="cx"> def _set_up_derived_options(port, options):
</span><span class="cx">     &quot;&quot;&quot;Sets the options values that depend on other options values.&quot;&quot;&quot;
</span><span class="cx">     if not options.child_processes:
</span></span></pre></div>
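<p>One Python detail relevant when reading <code>_print_expectations</code> above: a <code>return</code> in a <code>finally</code> clause supersedes any <code>return</code> already in flight from the <code>try</code> body. A minimal demonstration:</p>
<pre>
def demo():
    try:
        return 0
    finally:
        return -1  # runs last, so this value replaces the pending 0

print(demo())  # -1
</pre>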
<a id="trunkToolsScriptswebkitpyportiospy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/port/ios.py (205079 => 205080)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/port/ios.py        2016-08-27 17:45:12 UTC (rev 205079)
+++ trunk/Tools/Scripts/webkitpy/port/ios.py        2016-08-27 18:07:52 UTC (rev 205080)
</span><span class="lines">@@ -159,7 +159,7 @@
</span><span class="cx">         best_child_process_count_for_cpu = self._executive.cpu_count() / 2
</span><span class="cx">         system_process_count_limit = int(subprocess.check_output([&quot;ulimit&quot;, &quot;-u&quot;]).strip())
</span><span class="cx">         current_process_count = len(subprocess.check_output([&quot;ps&quot;, &quot;aux&quot;]).strip().split('\n'))
</span><del>-        _log.info('Process limit: %d, current #processes: %d' % (system_process_count_limit, current_process_count))
</del><ins>+        _log.debug('Process limit: %d, current #processes: %d' % (system_process_count_limit, current_process_count))
</ins><span class="cx">         maximum_simulator_count_on_this_system = (system_process_count_limit - current_process_count) // self.PROCESS_COUNT_ESTIMATE_PER_SIMULATOR_INSTANCE
</span><span class="cx">         # FIXME: We should also take into account the available RAM.
</span><span class="cx"> 
</span></span></pre>
</div>
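<p>For reference, a sketch of the capacity estimate surrounding the log statement changed above. The concrete numbers here are illustrative stand-ins for the values that <code>ulimit -u</code> and <code>ps aux</code> would report, and the per-simulator constant is assumed:</p>
<pre>
PROCESS_COUNT_ESTIMATE_PER_SIMULATOR_INSTANCE = 100  # assumed value for illustration
system_process_count_limit = 2048  # would come from `ulimit -u`
current_process_count = 400        # would come from counting `ps aux` lines

# Remaining process headroom divided by the per-simulator estimate.
headroom = system_process_count_limit - current_process_count
maximum_simulator_count = headroom // PROCESS_COUNT_ESTIMATE_PER_SIMULATOR_INSTANCE
print(maximum_simulator_count)  # 16 with these illustrative numbers
</pre>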
</div>

</body>
</html>