<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[162183] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/162183">162183</a></dd>
<dt>Author</dt> <dd>rniwa@webkit.org</dd>
<dt>Date</dt> <dd>2014-01-16 22:06:36 -0800 (Thu, 16 Jan 2014)</dd>
</dl>

<h3>Log Message</h3>
<pre>Automate DoYouEvenBench
https://bugs.webkit.org/show_bug.cgi?id=124497

Reviewed by Geoffrey Garen.

PerformanceTests:

Enable DoYouEvenBench/Full.html on perf bots by default.

Put a space between the time value and ms in the logged output, and fixed a typo in runner.js ('aggregation' should have been 'aggregator') so that the aggregator name will be reported.

* DoYouEvenBench/Full.html:
* Skipped:
* resources/runner.js:

Tools:

* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTestMetric.__init__): Added the aggregator name as an argument.
(PerfTestMetric.aggregator): Added.
(PerfTest._metrics_regex): Made the subtest name match non-greedy so that metric names
won't be eagerly parsed as part of the subtest name; e.g. &quot;Time&quot; and &quot;Total&quot; in &quot;a:Time:Total&quot;
should be parsed as the metric and the aggregator respectively (see the parsing example below).
(PerfTest._run_with_driver): Pass in the aggregator name.
(PerfTest._ensure_metrics): Ditto. Also split the subtest name on '/', as required by DoYouEvenBench,
which generates subtests of subtests within a single test file (a path-splitting sketch follows the perftest.py diff below).

* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(test_parse_output_with_subtests_and_total): Added.

* Scripts/webkitpy/performance_tests/perftestsrunner.py:
(_generate_results_dict): Add the aggregator name to the JSON when one is available (the resulting
shape is sketched after the perftestsrunner.py diff below).

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
(TestWithSubtestsData): Added a subtest with an aggregator and a sub-subtest.</pre>
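
<h3>Parsing Example</h3>
<p>A standalone, hypothetical sketch (not part of the patch) of the non-greedy fix to
PerfTest._metrics_regex. With the previous greedy '.+', the subtest group would have swallowed
&quot;:Time&quot;, so &quot;Total&quot; would have been misparsed as the metric with no aggregator.
The pattern and the sample line below mirror the patched regex and the new unit test.</p>
<pre>
import re

# Mirrors PerfTest._metrics_regex after this change; note the non-greedy '.+?'.
_metrics_regex = re.compile(
    r'^(?P&lt;subtest&gt;[A-Za-z0-9\(\[].+?)?'
    r':(?P&lt;metric&gt;[A-Z][A-Za-z]+)'
    r'(:(?P&lt;aggregator&gt;[A-Z][A-Za-z]+))?'
    r' -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)?$')

match = _metrics_regex.match('EmberJS-TodoMVC:Time:Total -&gt; [1462, 1473, 1490, 1465, 1458] ms')
assert match.group('subtest') == 'EmberJS-TodoMVC'
assert match.group('metric') == 'Time'       # the metric, no longer part of the subtest name
assert match.group('aggregator') == 'Total'  # the aggregator
</pre>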

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkPerformanceTestsChangeLog">trunk/PerformanceTests/ChangeLog</a></li>
<li><a href="#trunkPerformanceTestsDoYouEvenBenchFullhtml">trunk/PerformanceTests/DoYouEvenBench/Full.html</a></li>
<li><a href="#trunkPerformanceTestsSkipped">trunk/PerformanceTests/Skipped</a></li>
<li><a href="#trunkPerformanceTestsresourcesrunnerjs">trunk/PerformanceTests/resources/runner.js</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftest_unittestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestsrunnerpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestsrunner_integrationtestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkPerformanceTestsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/ChangeLog (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/ChangeLog        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/PerformanceTests/ChangeLog        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -1,3 +1,18 @@
</span><ins>+2014-01-16  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
+
+        Automate DoYouEvenBench
+        https://bugs.webkit.org/show_bug.cgi?id=124497
+
+        Reviewed by Geoffrey Garen.
+
+        Enable DoYouEvenBench/Full.html on perf bots by default.
+
+        Put a space between the time value and ms in the logged output, and fixed a typo in runner.js ('aggregation' should have been 'aggregator') so that the aggregator name will be reported.
+
+        * DoYouEvenBench/Full.html:
+        * Skipped:
+        * resources/runner.js:
+
</ins><span class="cx"> 2014-01-15  Manuel Rego Casasnovas  &lt;rego@igalia.com&gt;
</span><span class="cx"> 
</span><span class="cx">         [CSS Regions] Add performance tests for selection with mixed content
</span></span></pre></div>
<a id="trunkPerformanceTestsDoYouEvenBenchFullhtml"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/DoYouEvenBench/Full.html (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/DoYouEvenBench/Full.html        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/PerformanceTests/DoYouEvenBench/Full.html        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -17,13 +17,13 @@
</span><span class="cx">             }
</span><span class="cx">             values.push(measuredValues.total);
</span><span class="cx">             iterationNumber++;
</span><del>-            pre.appendChild(document.createTextNode('Iteration ' + iterationNumber + ': ' + measuredValues.total + 'ms\n'));
</del><ins>+            pre.appendChild(document.createTextNode('Iteration ' + iterationNumber + ': ' + measuredValues.total + ' ms\n'));
</ins><span class="cx">         },
</span><span class="cx">         didFinishLastIteration: function () {
</span><span class="cx">             var sum = 0;
</span><span class="cx">             for (var i = 0; i &lt; values.length; i++)
</span><span class="cx">                 sum += values[i];
</span><del>-            pre.appendChild(document.createTextNode('Average: ' + (sum / iterationNumber)  + 'ms\n'));
</del><ins>+            pre.appendChild(document.createTextNode('Average: ' + (sum / iterationNumber)  + ' ms\n'));
</ins><span class="cx">             pre.style.paddingTop = 0;
</span><span class="cx">         }
</span><span class="cx">     }
</span></span></pre></div>
<a id="trunkPerformanceTestsSkipped"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/Skipped (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Skipped        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/PerformanceTests/Skipped        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -89,5 +89,5 @@
</span><span class="cx"> # https://bugs.webkit.org/show_bug.cgi?id=113811#c2
</span><span class="cx"> Layout/LineLayoutJapanese.html
</span><span class="cx"> 
</span><del>-# New DOM benchmark is not ready for the prime time yet.
-DoYouEvenBench
</del><ins>+# Don't run the interactive runner. We run Full.html
+DoYouEvenBench/benchmark.html
</ins></span></pre></div>
<a id="trunkPerformanceTestsresourcesrunnerjs"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/resources/runner.js (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/resources/runner.js        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/PerformanceTests/resources/runner.js        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -221,7 +221,7 @@
</span><span class="cx">             if (currentTest.description)
</span><span class="cx">                 PerfTestRunner.log(&quot;Description: &quot; + currentTest.description);
</span><span class="cx">             metric = {'fps': 'FrameRate', 'runs/s': 'Runs', 'ms': 'Time'}[PerfTestRunner.unit];
</span><del>-            var suffix = currentTest.aggregation ? ':' + currentTest.aggregation : '';
</del><ins>+            var suffix = currentTest.aggregator ? ':' + currentTest.aggregator : '';
</ins><span class="cx">             PerfTestRunner.logStatistics(results, PerfTestRunner.unit, prefix + &quot;:&quot; + metric + suffix);
</span><span class="cx">             if (jsHeapResults.length) {
</span><span class="cx">                 PerfTestRunner.logStatistics(jsHeapResults, &quot;bytes&quot;, prefix + &quot;:JSHeap&quot;);
</span></span></pre></div>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/Tools/ChangeLog        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -1,3 +1,29 @@
</span><ins>+2014-01-16  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
+
+        Automate DoYouEvenBench
+        https://bugs.webkit.org/show_bug.cgi?id=124497
+
+        Reviewed by Geoffrey Garen.
+
+        * Scripts/webkitpy/performance_tests/perftest.py:
+        (PerfTestMetric.__init__): Added the aggregator name as an argument.
+        (PerfTestMetric.aggregator): Added.
+        (PerfTest._metrics_regex): Made the subtest name match non-greedy so that metric names
+        won't be eagerly parsed as part of the subtest name; e.g. &quot;Time&quot; and &quot;Total&quot; in &quot;a:Time:Total&quot;
+        should be parsed as the metric and the aggregator respectively.
+        (PerfTest._run_with_driver): Pass in the aggregator name.
+        (PerfTest._ensure_metrics): Ditto. Also split the subtest name on '/', as required by DoYouEvenBench,
+        which generates subtests of subtests within a single test file.
+
+        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
+        (test_parse_output_with_subtests_and_total): Added.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
+        (_generate_results_dict): Add the aggregator name to the JSON when one is available.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
+        (TestWithSubtestsData): Added a subtest with an aggregator and a sub-subtest.
+
</ins><span class="cx"> 2014-01-16  Chris Fleizach  &lt;cfleizach@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         platform/mac/accessibility/aria-multiline.html sometimes asserts in AccessibilityController::removeNotificationListener
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpyperformance_testsperftestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -50,10 +50,11 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class PerfTestMetric(object):
</span><del>-    def __init__(self, path, test_file_name, metric, unit=None, iterations=None):
</del><ins>+    def __init__(self, path, test_file_name, metric, unit=None, aggregator=None, iterations=None):
</ins><span class="cx">         # FIXME: Fix runner.js to report correct metric names
</span><span class="cx">         self._iterations = iterations or []
</span><span class="cx">         self._unit = unit or self.metric_to_unit(metric)
</span><ins>+        self._aggregator = aggregator
</ins><span class="cx">         self._metric = self.time_unit_to_metric(self._unit) if metric == 'Time' else metric
</span><span class="cx">         self._path = path
</span><span class="cx">         self._test_file_name = test_file_name
</span><span class="lines">@@ -61,6 +62,9 @@
</span><span class="cx">     def name(self):
</span><span class="cx">         return self._metric
</span><span class="cx"> 
</span><ins>+    def aggregator(self):
+        return self._aggregator
+
</ins><span class="cx">     def path(self):
</span><span class="cx">         return self._path
</span><span class="cx"> 
</span><span class="lines">@@ -168,7 +172,7 @@
</span><span class="cx">             (median, unit, stdev, unit, sorted_values[0], unit, sorted_values[-1], unit))
</span><span class="cx"> 
</span><span class="cx">     _description_regex = re.compile(r'^Description: (?P&lt;description&gt;.*)$', re.IGNORECASE)
</span><del>-    _metrics_regex = re.compile(r'^(?P&lt;subtest&gt;[A-Za-z0-9\(\[].+)?:(?P&lt;metric&gt;[A-Z][A-Za-z]+)(:(?P&lt;aggregator&gt;[A-Z][A-Za-z]+))? -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)?$')
</del><ins>+    _metrics_regex = re.compile(r'^(?P&lt;subtest&gt;[A-Za-z0-9\(\[].+?)?:(?P&lt;metric&gt;[A-Z][A-Za-z]+)(:(?P&lt;aggregator&gt;[A-Z][A-Za-z]+))? -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)?$')
</ins><span class="cx"> 
</span><span class="cx">     def _run_with_driver(self, driver, time_out_ms):
</span><span class="cx">         output = self.run_single(driver, self.test_path(), time_out_ms)
</span><span class="lines">@@ -188,12 +192,12 @@
</span><span class="cx">                 _log.error('ERROR: ' + line)
</span><span class="cx">                 return False
</span><span class="cx"> 
</span><del>-            metric = self._ensure_metrics(metric_match.group('metric'), metric_match.group('subtest'), metric_match.group('unit'))
</del><ins>+            metric = self._ensure_metrics(metric_match.group('metric'), metric_match.group('subtest'), metric_match.group('unit'), metric_match.group('aggregator'))
</ins><span class="cx">             metric.append_group(map(lambda value: float(value), metric_match.group('values').split(', ')))
</span><span class="cx"> 
</span><span class="cx">         return True
</span><span class="cx"> 
</span><del>-    def _ensure_metrics(self, metric_name, subtest_name='', unit=None):
</del><ins>+    def _ensure_metrics(self, metric_name, subtest_name='', unit=None, aggregator=None):
</ins><span class="cx">         try:
</span><span class="cx">             subtest = next(subtest for subtest in self._metrics if subtest['name'] == subtest_name)
</span><span class="cx">         except StopIteration:
</span><span class="lines">@@ -205,8 +209,8 @@
</span><span class="cx">         except StopIteration:
</span><span class="cx">             path = self.test_name_without_file_extension().split('/')
</span><span class="cx">             if subtest_name:
</span><del>-                path += [subtest_name]
-            metric = PerfTestMetric(path, self._test_name, metric_name, unit)
</del><ins>+                path += subtest_name.split('/')
+            metric = PerfTestMetric(path, self._test_name, metric_name, unit, aggregator)
</ins><span class="cx">             subtest['metrics'].append(metric)
</span><span class="cx">             return metric
</span><span class="cx"> 
</span></span></pre></div>
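<p>A minimal, hypothetical sketch (not the webkitpy code itself) of the subtest-path splitting that
PerfTest._ensure_metrics performs after this change; the test and subtest names are taken from the
new unit test below.</p>
<pre>
# 'some-dir/some-test' stands in for test_name_without_file_extension();
# 'EmberJS-TodoMVC/a' is a sub-subtest name as produced by DoYouEvenBench.
path = 'some-dir/some-test'.split('/')
subtest_name = 'EmberJS-TodoMVC/a'
if subtest_name:
    path += subtest_name.split('/')  # previously: path += [subtest_name]
assert path == ['some-dir', 'some-test', 'EmberJS-TodoMVC', 'a']
</pre>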
<a id="trunkToolsScriptswebkitpyperformance_testsperftest_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -211,7 +211,56 @@
</span><span class="cx"> median= 1101.0 ms, stdev= 13.3140211016 ms, min= 1080.0 ms, max= 1120.0 ms
</span><span class="cx"> &quot;&quot;&quot;)
</span><span class="cx"> 
</span><ins>+    def test_parse_output_with_subtests_and_total(self):
+        output = DriverOutput(&quot;&quot;&quot;
+:Time:Total -&gt; [2324, 2328, 2345, 2314, 2312] ms
+EmberJS-TodoMVC:Time:Total -&gt; [1462, 1473, 1490, 1465, 1458] ms
+EmberJS-TodoMVC/a:Time -&gt; [1, 2, 3, 4, 5] ms
+BackboneJS-TodoMVC:Time -&gt; [862, 855, 855, 849, 854] ms
+&quot;&quot;&quot;, image=None, image_hash=None, audio=None)
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+        try:
+            test = PerfTest(MockPort(), 'some-dir/some-test', '/path/some-dir/some-test')
+            test.run_single = lambda driver, path, time_out_ms: output
+            self.assertTrue(test.run(10))
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
</ins><span class="cx"> 
</span><ins>+        subtests = test._metrics
+        self.assertEqual(map(lambda test: test['name'], subtests), [None, 'EmberJS-TodoMVC', 'EmberJS-TodoMVC/a', 'BackboneJS-TodoMVC'])
+
+        main_metrics = subtests[0]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), main_metrics), ['Time'])
+        self.assertEqual(main_metrics[0].aggregator(), 'Total')
+        self.assertEqual(main_metrics[0].path(), ['some-dir', 'some-test'])
+        self.assertEqual(main_metrics[0].flattened_iteration_values(), [2324, 2328, 2345, 2314, 2312] * 4)
+
+        some_test_metrics = subtests[1]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), some_test_metrics), ['Time'])
+        self.assertEqual(some_test_metrics[0].aggregator(), 'Total')
+        self.assertEqual(some_test_metrics[0].path(), ['some-dir', 'some-test', 'EmberJS-TodoMVC'])
+        self.assertEqual(some_test_metrics[0].flattened_iteration_values(), [1462, 1473, 1490, 1465, 1458] * 4)
+
+        some_test_metrics = subtests[2]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), some_test_metrics), ['Time'])
+        self.assertEqual(some_test_metrics[0].aggregator(), None)
+        self.assertEqual(some_test_metrics[0].path(), ['some-dir', 'some-test', 'EmberJS-TodoMVC', 'a'])
+        self.assertEqual(some_test_metrics[0].flattened_iteration_values(), [1, 2, 3, 4, 5] * 4)
+
+        some_test_metrics = subtests[3]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), some_test_metrics), ['Time'])
+        self.assertEqual(some_test_metrics[0].aggregator(), None)
+        self.assertEqual(some_test_metrics[0].path(), ['some-dir', 'some-test', 'BackboneJS-TodoMVC'])
+        self.assertEqual(some_test_metrics[0].flattened_iteration_values(), [862, 855, 855, 849, 854] * 4)
+
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, &quot;&quot;&quot;RESULT some-dir: some-test: Time= 2324.6 ms
+median= 2324.0 ms, stdev= 12.1326007105 ms, min= 2312.0 ms, max= 2345.0 ms
+&quot;&quot;&quot;)
+
+
</ins><span class="cx"> class TestSingleProcessPerfTest(unittest.TestCase):
</span><span class="cx">     def test_use_only_one_process(self):
</span><span class="cx">         called = [0]
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpyperformance_testsperftestsrunnerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -269,7 +269,10 @@
</span><span class="cx">                     current_test['url'] = view_source_url('PerformanceTests/' + metric.test_file_name())
</span><span class="cx">                     current_test.setdefault('metrics', {})
</span><span class="cx">                     assert metric.name() not in current_test['metrics']
</span><del>-                    current_test['metrics'][metric.name()] = {'current': metric.grouped_iteration_values()}
</del><ins>+                    test_results = {'current': metric.grouped_iteration_values()}
+                    if metric.aggregator():
+                        test_results['aggregators'] = [metric.aggregator()]
+                    current_test['metrics'][metric.name()] = test_results
</ins><span class="cx">                 else:
</span><span class="cx">                     current_test.setdefault('tests', {})
</span><span class="cx">                     tests = current_test['tests']
</span></span></pre></div>
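<p>A sketch of the metrics entry that _generate_results_dict now emits; the values are illustrative
and mirror the integration test below.</p>
<pre>
# Shape of current_test['metrics'] for a 'Time' metric:
without_aggregator = {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4}}
# When the metric has an aggregator, an 'aggregators' list is added:
with_aggregator = {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4,
                            'aggregators': ['Total']}}
</pre>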
<a id="trunkToolsScriptswebkitpyperformance_testsperftestsrunner_integrationtestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py (162182 => 162183)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py        2014-01-17 05:27:13 UTC (rev 162182)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py        2014-01-17 06:06:36 UTC (rev 162183)
</span><span class="lines">@@ -98,6 +98,8 @@
</span><span class="cx"> 
</span><span class="cx"> class TestWithSubtestsData:
</span><span class="cx">     text = &quot;&quot;&quot;subtest:Time -&gt; [1, 2, 3, 4, 5] ms
</span><ins>+total-test:Time:Total -&gt; [1, 2, 3, 4, 5] ms
+total-test/subsubtest:Time -&gt; [1, 2, 3, 4, 5] ms
</ins><span class="cx"> :Time -&gt; [1080, 1120, 1095, 1101, 1104] ms
</span><span class="cx"> &quot;&quot;&quot;
</span><span class="cx"> 
</span><span class="lines">@@ -113,7 +115,14 @@
</span><span class="cx">         'tests': {
</span><span class="cx">             'subtest': {
</span><span class="cx">                 'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Parser/test-with-subtests.html',
</span><del>-                'metrics': {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4}}}}}
</del><ins>+                'metrics': {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4}}},
+            'total-test': {
+                'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Parser/test-with-subtests.html',
+                'metrics': {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4, &quot;aggregators&quot;: [&quot;Total&quot;]}},
+                'tests': {
+                    'subsubtest':
+                        {'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Parser/test-with-subtests.html',
+                        'metrics': {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4}}}}}}}
</ins><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class TestDriver:
</span></span></pre>
</div>
</div>

</body>
</html>