<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[159805] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/159805">159805</a></dd>
<dt>Author</dt> <dd>rniwa@webkit.org</dd>
<dt>Date</dt> <dd>2013-11-26 21:33:17 -0800 (Tue, 26 Nov 2013)</dd>
</dl>

<h3>Log Message</h3>
<pre>Record subtest values in Dromaeo tests
https://bugs.webkit.org/show_bug.cgi?id=124498

Reviewed by Andreas Kling.

PerformanceTests: 

Made Dromaeo's test runner report values in DRT.progress via newly added PerfTestRunner.reportValues.

* Dromaeo/resources/dromaeorunner.js:
(.): Moved the definition of ITERATION_COUNT out of DRT.setup.
(DRT.setup): Ditto.
(DRT.testObject): Extracted from DRT.setup. Set the subtest name and continueTesting.
continueTesting is set to true for subtests, i.e. when a name is specified.
(DRT.progress): Call PerfTestRunner.reportValues to report subtest results.
(DRT.teardown): Call PerfTestRunner.reportValues instead of measureValueAsync.

* resources/runner.js: Made various changes for newly added PerfTestRunner.reportValues.
(.): Moved the initialization of completedIterations, results, jsHeapResults, and mallocHeapResults into
start since they need to be initialized before running each subtest. Initialize logLines here since we
need to use the same logger for all subtests.
(.start): Initialize the variables mentioned above here. Also respect doNotLogStart used by reportValues.
(ignoreWarmUpAndLog): Added doNotLogProgress. Used by reportValues since it reports all values at once.
(finish): Compute the metric name, such as FrameRate and Runs, from the unit. Also don't log or notify done
when continueTesting is set on the test object.
(PerfTestRunner.reportValues): Added. Reports all values for the main/sub test.

Tools: 

Supported parsing subtest results.

* Scripts/webkitpy/performance_tests/perftest.py: Replaced _metrics with an ordered list of subtests, each of
which is a dictionary containing the subtest's name and an ordered list of its metrics.
(PerfTest.__init__): Initialize _metrics as a list.
(PerfTest.run): Go through each subtest and its metrics to create a list of TestMetrics.
(PerfTest._run_with_driver):
(PerfTest._ensure_metrics): Look for a subtest then a metric in _metrics.

* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(TestPerfTest._assert_results_are_correct): Updated the assertions per changes to _metrics.
(TestPerfTest.test_parse_output): Ditto.
(TestPerfTest.test_parse_output_with_subtests): Added the metric and the unit on each subtest result as well as
assertions to ensure subtest results are parsed properly.
(TestReplayPerfTest.test_run_with_driver_accumulates_results): Updated the assertions per changes to _metrics.
(TestReplayPerfTest.test_run_with_driver_accumulates_memory_results): Ditto.

* Scripts/webkitpy/performance_tests/perftestsrunner.py:
(_generate_results_dict): When the metric for a subtest is processed before that of the main test, the url is
incorrectly suffixed with '/'. This is fixed up later by re-computing the url with TestPerfMetric.test_file_name when
adding new results.

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
(TestWithSubtestsData): Added.
(TestDriver.run_test):
(MainTest.test_run_test_with_subtests): Added.

LayoutTests: 

Rebaselined the test.

* fast/harness/perftests/runs-per-second-log-expected.txt:</pre>
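<p>Illustratively, a test now emits one line per subtest of the form &quot;subtest name:Metric -&gt; [values] unit&quot;,
followed by an unnamed line for the whole test; the sample lines below are taken from the test data and the
rebaselined expectation in this revision.</p>
<pre>Concat String:Time -&gt; [15163, 15304, 15386, 15608, 15622] ms
Dojo - div:only-child:Time -&gt; [7825, 7910, 7950, 7958, 7970] ms
:Runs -&gt; [2, 4, 5, 8, 10] runs/s</pre>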

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkLayoutTestsChangeLog">trunk/LayoutTests/ChangeLog</a></li>
<li><a href="#trunkLayoutTestsfastharnessperftestsrunspersecondlogexpectedtxt">trunk/LayoutTests/fast/harness/perftests/runs-per-second-log-expected.txt</a></li>
<li><a href="#trunkPerformanceTestsChangeLog">trunk/PerformanceTests/ChangeLog</a></li>
<li><a href="#trunkPerformanceTestsDromaeoresourcesdromaeorunnerjs">trunk/PerformanceTests/Dromaeo/resources/dromaeorunner.js</a></li>
<li><a href="#trunkPerformanceTestsresourcesrunnerjs">trunk/PerformanceTests/resources/runner.js</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftest_unittestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestsrunnerpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py</a></li>
<li><a href="#trunkToolsScriptswebkitpyperformance_testsperftestsrunner_integrationtestpy">trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkLayoutTestsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/LayoutTests/ChangeLog (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/LayoutTests/ChangeLog        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/LayoutTests/ChangeLog        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -1,3 +1,14 @@
</span><ins>+2013-11-26  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
+
+        Record subtest values in Dromaeo tests
+        https://bugs.webkit.org/show_bug.cgi?id=124498
+
+        Reviewed by Andreas Kling.
+
+        Rebaselined the test.
+
+        * fast/harness/perftests/runs-per-second-log-expected.txt:
+
</ins><span class="cx"> 2013-11-26  Nick Diego Yamane  &lt;nick.yamane@openbossa.org&gt;
</span><span class="cx"> 
</span><span class="cx">         [MediaStream API] HTMLMediaElement should be able to use MediaStream as source
</span></span></pre></div>
<a id="trunkLayoutTestsfastharnessperftestsrunspersecondlogexpectedtxt"></a>
<div class="modfile"><h4>Modified: trunk/LayoutTests/fast/harness/perftests/runs-per-second-log-expected.txt (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/LayoutTests/fast/harness/perftests/runs-per-second-log-expected.txt        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/LayoutTests/fast/harness/perftests/runs-per-second-log-expected.txt        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -1,5 +1,5 @@
</span><span class="cx"> This test verifies PerfTestRunner.runPerSecond() outputs log as expected.
</span><span class="cx"> 
</span><span class="cx"> 
</span><del>-:Time -&gt; [2, 4, 5, 8, 10] runs/s
</del><ins>+:Runs -&gt; [2, 4, 5, 8, 10] runs/s
</ins><span class="cx"> 
</span></span></pre></div>
<a id="trunkPerformanceTestsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/ChangeLog (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/ChangeLog        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/PerformanceTests/ChangeLog        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -1,5 +1,32 @@
</span><span class="cx"> 2013-11-26  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
</span><span class="cx"> 
</span><ins>+        Record subtest values in Dromaeo tests
+        https://bugs.webkit.org/show_bug.cgi?id=124498
+
+        Reviewed by Andreas Kling.
+
+        Made Dromaeo's test runner report values in DRT.progress via newly added PerfTestRunner.reportValues.
+
+        * Dromaeo/resources/dromaeorunner.js:
+        (.): Moved the definition of ITERATION_COUNT out of DRT.setup.
+        (DRT.setup): Ditto.
+        (DRT.testObject): Extracted from DRT.setup. Set the subtest name and continueTesting.
+        continueTesting is set to true for subtests, i.e. when a name is specified.
+        (DRT.progress): Call PerfTestRunner.reportValues to report subtest results.
+        (DRT.teardown): Call PerfTestRunner.reportValues instead of measureValueAsync.
+
+        * resources/runner.js: Made various changes for newly added PerfTestRunner.reportValues.
+        (.): Moved the initialization of completedIterations, results, jsHeapResults, and mallocHeapResults into
+        start since they need to be initialized before running each subtest. Initialize logLines here since we
+        need to use the same logger for all subtests.
+        (.start): Initialize the variables mentioned above here. Also respect doNotLogStart used by reportValues.
+        (ignoreWarmUpAndLog): Added doNotLogProgress. Used by reportValues since it reports all values at once.
+        (finish): Compute the metric name, such as FrameRate and Runs, from the unit. Also don't log or notify done
+        when continueTesting is set on the test object.
+        (PerfTestRunner.reportValues): Added. Reports all values for the main/sub test.
+
+2013-11-26  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
+
</ins><span class="cx">         Remove replay performance tests as it's not actively maintained
</span><span class="cx">         https://bugs.webkit.org/show_bug.cgi?id=124764
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkPerformanceTestsDromaeoresourcesdromaeorunnerjs"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/Dromaeo/resources/dromaeorunner.js (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Dromaeo/resources/dromaeorunner.js        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/PerformanceTests/Dromaeo/resources/dromaeorunner.js        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -1,11 +1,9 @@
</span><span class="cx"> (function(){
</span><ins>+     var ITERATION_COUNT = 5;
</ins><span class="cx">      var DRT  = {
</span><span class="cx">          baseURL: &quot;./resources/dromaeo/web/index.html&quot;,
</span><span class="cx"> 
</span><span class="cx">          setup: function(testName) {
</span><del>-             var ITERATION_COUNT = 5;
-             PerfTestRunner.prepareToMeasureValuesAsync({dromaeoIterationCount: ITERATION_COUNT, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s'});
-
</del><span class="cx">              var iframe = document.createElement(&quot;iframe&quot;);
</span><span class="cx">              var url = DRT.baseURL + &quot;?&quot; + testName + '&amp;numTests=' + ITERATION_COUNT;
</span><span class="cx">              iframe.setAttribute(&quot;src&quot;, url);
</span><span class="lines">@@ -33,6 +31,11 @@
</span><span class="cx">                  });
</span><span class="cx">          },
</span><span class="cx"> 
</span><ins>+         testObject: function(name) {
+             return {dromaeoIterationCount: ITERATION_COUNT, doNotMeasureMemoryUsage: true, doNotIgnoreInitialRun: true, unit: 'runs/s',
+                name: name, continueTesting: !!name};
+         },
+
</ins><span class="cx">          start: function() {
</span><span class="cx">              DRT.targetWindow.postMessage({ name: &quot;dromaeo:start&quot; } , &quot;*&quot;);
</span><span class="cx">          },
</span><span class="lines">@@ -40,7 +43,7 @@
</span><span class="cx">          progress: function(message) {
</span><span class="cx">             var score = message.status.score;
</span><span class="cx">             if (score)
</span><del>-                DRT.log(score.name + ' -&gt; [' + score.times.join(', ') + ']');
</del><ins>+                PerfTestRunner.reportValues(this.testObject(score.name), score.times);
</ins><span class="cx">          },
</span><span class="cx"> 
</span><span class="cx">          teardown: function(data) {
</span><span class="lines">@@ -55,8 +58,7 @@
</span><span class="cx">                  }
</span><span class="cx">              }
</span><span class="cx"> 
</span><del>-             for (var i = 0; i &lt; times.length; ++i)
-                 PerfTestRunner.measureValueAsync(1 / times[i]);
</del><ins>+             PerfTestRunner.reportValues(this.testObject(), times.map(function (time) { return 1 / time; }));
</ins><span class="cx">          },
</span><span class="cx"> 
</span><span class="cx">          targetDelegateOf: function(functionName) {
</span></span></pre></div>
<a id="trunkPerformanceTestsresourcesrunnerjs"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/resources/runner.js (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/resources/runner.js        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/PerformanceTests/resources/runner.js        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -6,14 +6,14 @@
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> (function () {
</span><del>-    var logLines = null;
</del><ins>+    var logLines = window.testRunner ? [] : null;
</ins><span class="cx">     var verboseLogging = false;
</span><del>-    var completedIterations = -1;
</del><ins>+    var completedIterations;
</ins><span class="cx">     var callsPerIteration = 1;
</span><span class="cx">     var currentTest = null;
</span><del>-    var results = [];
-    var jsHeapResults = [];
-    var mallocHeapResults = [];
</del><ins>+    var results;
+    var jsHeapResults;
+    var mallocHeapResults;
</ins><span class="cx">     var iterationCount = undefined;
</span><span class="cx"> 
</span><span class="cx">     var PerfTestRunner = {};
</span><span class="lines">@@ -145,7 +145,7 @@
</span><span class="cx">         finish();
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    function start(test, runner) {
</del><ins>+    function start(test, runner, doNotLogStart) {
</ins><span class="cx">         if (!test) {
</span><span class="cx">             logFatalError(&quot;Got a bad test object.&quot;);
</span><span class="cx">             return;
</span><span class="lines">@@ -154,9 +154,15 @@
</span><span class="cx">         // FIXME: We should be using multiple instances of test runner on Dromaeo as well but it's too slow now.
</span><span class="cx">         // FIXME: Don't hard code the number of in-process iterations to use inside a test runner.
</span><span class="cx">         iterationCount = test.dromaeoIterationCount || (window.testRunner ? 5 : 20);
</span><del>-        logLines = window.testRunner ? [] : null;
</del><ins>+        completedIterations = -1;
+        results = [];
+        jsHeapResults = [];
+        mallocHeapResults = [];
</ins><span class="cx">         verboseLogging = !window.testRunner;
</span><del>-        PerfTestRunner.logInfo(&quot;Running &quot; + iterationCount + &quot; times&quot;);
</del><ins>+        if (!doNotLogStart) {
+            PerfTestRunner.logInfo('');
+            PerfTestRunner.logInfo(&quot;Running &quot; + iterationCount + &quot; times&quot;);
+        }
</ins><span class="cx">         if (test.doNotIgnoreInitialRun)
</span><span class="cx">             completedIterations++;
</span><span class="cx">         if (runner)
</span><span class="lines">@@ -192,39 +198,50 @@
</span><span class="cx">         }, 0);
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    function ignoreWarmUpAndLog(measuredValue) {
</del><ins>+    function ignoreWarmUpAndLog(measuredValue, doNotLogProgress) {
</ins><span class="cx">         var labeledResult = measuredValue + &quot; &quot; + PerfTestRunner.unit;
</span><del>-        if (completedIterations &lt;= 0)
-            PerfTestRunner.logDetail(completedIterations, labeledResult + &quot; (Ignored warm-up run)&quot;);
-        else {
-            results.push(measuredValue);
-            if (window.internals &amp;&amp; !currentTest.doNotMeasureMemoryUsage) {
-                jsHeapResults.push(getUsedJSHeap());
-                mallocHeapResults.push(getUsedMallocHeap());
-            }
</del><ins>+        if (completedIterations &lt;= 0) {
+            if (!doNotLogProgress)
+                PerfTestRunner.logDetail(completedIterations, labeledResult + &quot; (Ignored warm-up run)&quot;);
+            return;
+        }
+
+        results.push(measuredValue);
+        if (window.internals &amp;&amp; !currentTest.doNotMeasureMemoryUsage) {
+            jsHeapResults.push(getUsedJSHeap());
+            mallocHeapResults.push(getUsedMallocHeap());
+        }
+        if (!doNotLogProgress)
</ins><span class="cx">             PerfTestRunner.logDetail(completedIterations, labeledResult);
</span><del>-        }
</del><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     function finish() {
</span><span class="cx">         try {
</span><ins>+            var prefix = currentTest.name || '';
</ins><span class="cx">             if (currentTest.description)
</span><span class="cx">                 PerfTestRunner.log(&quot;Description: &quot; + currentTest.description);
</span><del>-            PerfTestRunner.logStatistics(results, PerfTestRunner.unit, &quot;:Time&quot;);
</del><ins>+            metric = {'fps': 'FrameRate', 'runs/s': 'Runs', 'ms': 'Time'}[PerfTestRunner.unit]
+            PerfTestRunner.logStatistics(results, PerfTestRunner.unit, prefix + &quot;:&quot; + metric);
</ins><span class="cx">             if (jsHeapResults.length) {
</span><del>-                PerfTestRunner.logStatistics(jsHeapResults, &quot;bytes&quot;, &quot;:JSHeap&quot;);
-                PerfTestRunner.logStatistics(mallocHeapResults, &quot;bytes&quot;, &quot;:Malloc&quot;);
</del><ins>+                PerfTestRunner.logStatistics(jsHeapResults, &quot;bytes&quot;, prefix + &quot;:JSHeap&quot;);
+                PerfTestRunner.logStatistics(mallocHeapResults, &quot;bytes&quot;, prefix + &quot;:Malloc&quot;);
</ins><span class="cx">             }
</span><del>-            if (logLines)
-                logLines.forEach(logInDocument);
</del><span class="cx">             if (currentTest.done)
</span><span class="cx">                 currentTest.done();
</span><ins>+
+            if (logLines &amp;&amp; !currentTest.continueTesting)
+                logLines.forEach(logInDocument);
</ins><span class="cx">         } catch (exception) {
</span><span class="cx">             logInDocument(&quot;Got an exception while finalizing the test with name=&quot; + exception.name + &quot;, message=&quot; + exception.message);
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-        if (window.testRunner)
-            testRunner.notifyDone();
</del><ins>+        if (!currentTest.continueTesting) {
+            if (window.testRunner)
+                testRunner.notifyDone();
+            return;
+        }
+
+        currentTest = null;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     PerfTestRunner.prepareToMeasureValuesAsync = function (test) {
</span><span class="lines">@@ -250,6 +267,16 @@
</span><span class="cx">         return true;
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    PerfTestRunner.reportValues = function (test, values) {
+        PerfTestRunner.unit = test.unit;
+        start(test, null, true);
+        for (var i = 0; i &lt; values.length; i++) {
+            completedIterations++;
+            ignoreWarmUpAndLog(values[i], true);
+        }
+        finish();
+    }
+
</ins><span class="cx">     PerfTestRunner.measureTime = function (test) {
</span><span class="cx">         PerfTestRunner.unit = &quot;ms&quot;;
</span><span class="cx">         start(test, measureTimeOnce);
</span></span></pre></div>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/Tools/ChangeLog        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -1,5 +1,39 @@
</span><span class="cx"> 2013-11-26  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
</span><span class="cx"> 
</span><ins>+        Record subtest values in Dromaeo tests
+        https://bugs.webkit.org/show_bug.cgi?id=124498
+
+        Reviewed by Andreas Kling.
+
+        Supported parsing subtest results.
+
+        * Scripts/webkitpy/performance_tests/perftest.py: Replaced _metrics with an ordered list of subtests, each of
+        which is a dictionary containing the subtest's name and an ordered list of its metrics.
+        (PerfTest.__init__): Initialize _metrics as a list.
+        (PerfTest.run): Go through each subtest and its metrics to create a list of TestMetrics.
+        (PerfTest._run_with_driver):
+        (PerfTest._ensure_metrics): Look for a subtest then a metric in _metrics.
+
+        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
+        (TestPerfTest._assert_results_are_correct): Updated the assertions per changes to _metrics.
+        (TestPerfTest.test_parse_output): Ditto.
+        (TestPerfTest.test_parse_output_with_subtests): Added the metric and the unit on each subtest result as well as
+        assertions to ensure subtest results are parsed properly.
+        (TestReplayPerfTest.test_run_with_driver_accumulates_results): Updated the assertions per changes to _metrics.
+        (TestReplayPerfTest.test_run_with_driver_accumulates_memory_results): Ditto.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
+        (_generate_results_dict): When the metric for a subtest is processed before that of the main test, the url is
+        incorrectly suffixed with '/'. This is fixed up later by re-computing the url with TestPerfMetric.test_file_name when
+        adding new results.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
+        (TestWithSubtestsData): Added.
+        (TestDriver.run_test):
+        (MainTest.test_run_test_with_subtests): Added.
+
+2013-11-26  Ryosuke Niwa  &lt;rniwa@webkit.org&gt;
+
</ins><span class="cx">         Enable HTML template element on Windows ports
</span><span class="cx">         https://bugs.webkit.org/show_bug.cgi?id=124758
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpyperformance_testsperftestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -100,8 +100,7 @@
</span><span class="cx">         self._test_name = test_name
</span><span class="cx">         self._test_path = test_path
</span><span class="cx">         self._description = None
</span><del>-        self._metrics = {}
-        self._ordered_metrics_name = []
</del><ins>+        self._metrics = []
</ins><span class="cx">         self._test_runner_count = test_runner_count
</span><span class="cx"> 
</span><span class="cx">     def test_name(self):
</span><span class="lines">@@ -136,13 +135,13 @@
</span><span class="cx">             _log.info('DESCRIPTION: %s' % self._description)
</span><span class="cx"> 
</span><span class="cx">         results = []
</span><del>-        for metric_name in self._ordered_metrics_name:
-            metric = self._metrics[metric_name]
-            results.append(metric)
-            if should_log:
-                legacy_chromium_bot_compatible_name = self.test_name_without_file_extension().replace('/', ': ')
-                self.log_statistics(legacy_chromium_bot_compatible_name + ': ' + metric.name(),
-                    metric.flattened_iteration_values(), metric.unit())
</del><ins>+        for subtest in self._metrics:
+            for metric in subtest['metrics']:
+                results.append(metric)
+                if should_log and not subtest['name']:
+                    legacy_chromium_bot_compatible_name = self.test_name_without_file_extension().replace('/', ': ')
+                    self.log_statistics(legacy_chromium_bot_compatible_name + ': ' + metric.name(),
+                        metric.flattened_iteration_values(), metric.unit())
</ins><span class="cx"> 
</span><span class="cx">         return results
</span><span class="cx"> 
</span><span class="lines">@@ -169,7 +168,7 @@
</span><span class="cx">             (median, unit, stdev, unit, sorted_values[0], unit, sorted_values[-1], unit))
</span><span class="cx"> 
</span><span class="cx">     _description_regex = re.compile(r'^Description: (?P&lt;description&gt;.*)$', re.IGNORECASE)
</span><del>-    _metrics_regex = re.compile(r'^:(?P&lt;metric&gt;Time|Malloc|JSHeap) -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)')
</del><ins>+    _metrics_regex = re.compile(r'^(?P&lt;subtest&gt;[A-Za-z0-9\(\[].+)?:(?P&lt;metric&gt;[A-Z][A-Za-z]+) -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)?$')
</ins><span class="cx"> 
</span><span class="cx">     def _run_with_driver(self, driver, time_out_ms):
</span><span class="cx">         output = self.run_single(driver, self.test_path(), time_out_ms)
</span><span class="lines">@@ -189,17 +188,28 @@
</span><span class="cx">                 _log.error('ERROR: ' + line)
</span><span class="cx">                 return False
</span><span class="cx"> 
</span><del>-            metric = self._ensure_metrics(metric_match.group('metric'), metric_match.group('unit'))
</del><ins>+            metric = self._ensure_metrics(metric_match.group('metric'), metric_match.group('subtest'), metric_match.group('unit'))
</ins><span class="cx">             metric.append_group(map(lambda value: float(value), metric_match.group('values').split(', ')))
</span><span class="cx"> 
</span><span class="cx">         return True
</span><span class="cx"> 
</span><del>-    def _ensure_metrics(self, metric_name, unit=None):
-        if metric_name not in self._metrics:
-            self._metrics[metric_name] = PerfTestMetric(self.test_name_without_file_extension().split('/'), self._test_name, metric_name, unit)
-            self._ordered_metrics_name.append(metric_name)
-        return self._metrics[metric_name]
</del><ins>+    def _ensure_metrics(self, metric_name, subtest_name='', unit=None):
+        try:
+            subtest = next(subtest for subtest in self._metrics if subtest['name'] == subtest_name)
+        except StopIteration:
+            subtest = {'name': subtest_name, 'metrics': []}
+            self._metrics.append(subtest)
</ins><span class="cx"> 
</span><ins>+        try:
+            return next(metric for metric in subtest['metrics'] if metric.name() == metric_name)
+        except StopIteration:
+            path = self.test_name_without_file_extension().split('/')
+            if subtest_name:
+                path += [subtest_name]
+            metric = PerfTestMetric(path, self._test_name, metric_name, unit)
+            subtest['metrics'].append(metric)
+            return metric
+
</ins><span class="cx">     def run_single(self, driver, test_path, time_out_ms, should_run_pixel_test=False):
</span><span class="cx">         return driver.run_test(DriverInput(test_path, time_out_ms, image_hash=None, should_run_pixel_test=should_run_pixel_test), stop_when_done=False)
</span><span class="cx"> 
</span><span class="lines">@@ -236,9 +246,6 @@
</span><span class="cx">         re.compile(re.escape(&quot;&quot;&quot;Blocked access to external URL http://www.whatwg.org/specs/web-apps/current-work/&quot;&quot;&quot;)),
</span><span class="cx">         re.compile(r&quot;CONSOLE MESSAGE: (line \d+: )?Blocked script execution in '[A-Za-z0-9\-\.:]+' because the document's frame is sandboxed and the 'allow-scripts' permission is not set.&quot;),
</span><span class="cx">         re.compile(r&quot;CONSOLE MESSAGE: (line \d+: )?Not allowed to load local resource&quot;),
</span><del>-        # Dromaeo reports values for subtests. Ignore them for now.
-        # FIXME: Remove once subtests are supported
-        re.compile(r'^[A-Za-z0-9\(\[].+( -&gt; )(\[?[0-9\., ]+\])( [a-z/]+)?$'),
</del><span class="cx">     ]
</span><span class="cx"> 
</span><span class="cx">     def _filter_output(self, output):
</span></span></pre></div>
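<p>A minimal standalone sketch (not from the patch itself) of how the _metrics_regex above splits such lines:
the greedy subtest group keeps colons that belong to the subtest name, and is absent on the whole-test line.</p>
<pre>import re

# Copied from PerfTest._metrics_regex in this revision.
_metrics_regex = re.compile(r'^(?P&lt;subtest&gt;[A-Za-z0-9\(\[].+)?:(?P&lt;metric&gt;[A-Z][A-Za-z]+) -&gt; \[(?P&lt;values&gt;(\d+(\.\d+)?)(, \d+(\.\d+)?)+)\] (?P&lt;unit&gt;[a-z/]+)?$')

for line in ['Dojo - div:only-child:Time -&gt; [7825, 7910, 7950, 7958, 7970] ms',
             ':Time -&gt; [1080, 1120, 1095, 1101, 1104] ms']:
    match = _metrics_regex.match(line)
    # group('subtest') is 'Dojo - div:only-child' for the first line and None for the whole-test line.
    print(match.group('subtest'), match.group('metric'),
          match.group('values').split(', '), match.group('unit'))</pre>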
<a id="trunkToolsScriptswebkitpyperformance_testsperftest_unittestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -91,9 +91,12 @@
</span><span class="cx"> class TestPerfTest(unittest.TestCase):
</span><span class="cx">     def _assert_results_are_correct(self, test, output):
</span><span class="cx">         test.run_single = lambda driver, path, time_out_ms: output
</span><del>-        self.assertTrue(test._run_with_driver(None, None))
-        self.assertEqual(test._metrics.keys(), ['Time'])
-        self.assertEqual(test._metrics['Time'].flattened_iteration_values(), [1080, 1120, 1095, 1101, 1104])
</del><ins>+        self.assertTrue(test.run(10))
+        subtests = test._metrics
+        self.assertEqual(map(lambda test: test['name'], subtests), [None])
+        metrics = subtests[0]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), metrics), ['Time'])
+        self.assertEqual(metrics[0].flattened_iteration_values(), [1080, 1120, 1095, 1101, 1104] * 4)
</ins><span class="cx"> 
</span><span class="cx">     def test_parse_output(self):
</span><span class="cx">         output = DriverOutput(&quot;&quot;&quot;
</span><span class="lines">@@ -108,7 +111,9 @@
</span><span class="cx">             actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
</span><span class="cx">         self.assertEqual(actual_stdout, '')
</span><span class="cx">         self.assertEqual(actual_stderr, '')
</span><del>-        self.assertEqual(actual_logs, '')
</del><ins>+        self.assertEqual(actual_logs, &quot;&quot;&quot;RESULT some-test: Time= 1100.0 ms
+median= 1101.0 ms, stdev= 13.3140211016 ms, min= 1080.0 ms, max= 1120.0 ms
+&quot;&quot;&quot;)
</ins><span class="cx"> 
</span><span class="cx">     def _assert_failed_on_line(self, output_text, expected_log):
</span><span class="cx">         output = DriverOutput(output_text, image=None, image_hash=None, audio=None)
</span><span class="lines">@@ -155,28 +160,56 @@
</span><span class="cx">     def test_parse_output_with_subtests(self):
</span><span class="cx">         output = DriverOutput(&quot;&quot;&quot;
</span><span class="cx"> Description: this is a test description.
</span><del>-some test -&gt; [1, 2, 3, 4, 5]
-some other test = else -&gt; [6, 7, 8, 9, 10]
-Array Construction, [] -&gt; [11, 12, 13, 14, 15]
-Concat String -&gt; [15163, 15304, 15386, 15608, 15622]
-jQuery - addClass -&gt; [2785, 2815, 2826, 2841, 2861]
-Dojo - div:only-child -&gt; [7825, 7910, 7950, 7958, 7970]
-Dojo - div:nth-child(2n+1) -&gt; [3620, 3623, 3633, 3641, 3658]
-Dojo - div &gt; div -&gt; [10158, 10172, 10180, 10183, 10231]
-Dojo - div ~ div -&gt; [6673, 6675, 6714, 6848, 6902]
</del><ins>+some test:Time -&gt; [1, 2, 3, 4, 5] ms
+some other test = else:Time -&gt; [6, 7, 8, 9, 10] ms
+some other test = else:Malloc -&gt; [11, 12, 13, 14, 15] bytes
+Array Construction, []:Time -&gt; [11, 12, 13, 14, 15] ms
+Concat String:Time -&gt; [15163, 15304, 15386, 15608, 15622] ms
+jQuery - addClass:Time -&gt; [2785, 2815, 2826, 2841, 2861] ms
+Dojo - div:only-child:Time -&gt; [7825, 7910, 7950, 7958, 7970] ms
+Dojo - div:nth-child(2n+1):Time -&gt; [3620, 3623, 3633, 3641, 3658] ms
+Dojo - div &gt; div:Time -&gt; [10158, 10172, 10180, 10183, 10231] ms
+Dojo - div ~ div:Time -&gt; [6673, 6675, 6714, 6848, 6902] ms
</ins><span class="cx"> 
</span><span class="cx"> :Time -&gt; [1080, 1120, 1095, 1101, 1104] ms
</span><span class="cx"> &quot;&quot;&quot;, image=None, image_hash=None, audio=None)
</span><span class="cx">         output_capture = OutputCapture()
</span><span class="cx">         output_capture.capture_output()
</span><span class="cx">         try:
</span><del>-            test = PerfTest(MockPort(), 'some-test', '/path/some-dir/some-test')
-            self._assert_results_are_correct(test, output)
</del><ins>+            test = PerfTest(MockPort(), 'some-dir/some-test', '/path/some-dir/some-test')
+            test.run_single = lambda driver, path, time_out_ms: output
+            self.assertTrue(test.run(10))
</ins><span class="cx">         finally:
</span><span class="cx">             actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
</span><ins>+
+        subtests = test._metrics
+        self.assertEqual(map(lambda test: test['name'], subtests), ['some test', 'some other test = else',
+            'Array Construction, []', 'Concat String', 'jQuery - addClass', 'Dojo - div:only-child',
+            'Dojo - div:nth-child(2n+1)', 'Dojo - div &gt; div', 'Dojo - div ~ div', None])
+
+        some_test_metrics = subtests[0]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), some_test_metrics), ['Time'])
+        self.assertEqual(some_test_metrics[0].path(), ['some-dir', 'some-test', 'some test'])
+        self.assertEqual(some_test_metrics[0].flattened_iteration_values(), [1, 2, 3, 4, 5] * 4)
+
+        some_other_test_metrics = subtests[1]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), some_other_test_metrics), ['Time', 'Malloc'])
+        self.assertEqual(some_other_test_metrics[0].path(), ['some-dir', 'some-test', 'some other test = else'])
+        self.assertEqual(some_other_test_metrics[0].flattened_iteration_values(), [6, 7, 8, 9, 10] * 4)
+        self.assertEqual(some_other_test_metrics[1].path(), ['some-dir', 'some-test', 'some other test = else'])
+        self.assertEqual(some_other_test_metrics[1].flattened_iteration_values(), [11, 12, 13, 14, 15] * 4)
+
+        main_metrics = subtests[len(subtests) - 1]['metrics']
+        self.assertEqual(map(lambda metric: metric.name(), main_metrics), ['Time'])
+        self.assertEqual(main_metrics[0].path(), ['some-dir', 'some-test'])
+        self.assertEqual(main_metrics[0].flattened_iteration_values(), [1080, 1120, 1095, 1101, 1104] * 4)
+
</ins><span class="cx">         self.assertEqual(actual_stdout, '')
</span><span class="cx">         self.assertEqual(actual_stderr, '')
</span><del>-        self.assertEqual(actual_logs, '')
</del><ins>+        self.assertEqual(actual_logs, &quot;&quot;&quot;DESCRIPTION: this is a test description.
+RESULT some-dir: some-test: Time= 1100.0 ms
+median= 1101.0 ms, stdev= 13.3140211016 ms, min= 1080.0 ms, max= 1120.0 ms
+&quot;&quot;&quot;)
</ins><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> class TestSingleProcessPerfTest(unittest.TestCase):
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpyperformance_testsperftestsrunnerpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -262,10 +262,11 @@
</span><span class="cx">             path = metric.path()
</span><span class="cx">             for i in range(0, len(path)):
</span><span class="cx">                 is_last_token = i + 1 == len(path)
</span><del>-                url = view_source_url('PerformanceTests/' + (metric.test_file_name() if is_last_token else '/'.join(path[0:i + 1])))
</del><ins>+                url = view_source_url('PerformanceTests/' + '/'.join(path[0:i + 1]))
</ins><span class="cx">                 tests.setdefault(path[i], {'url': url})
</span><span class="cx">                 current_test = tests[path[i]]
</span><span class="cx">                 if is_last_token:
</span><ins>+                    current_test['url'] = view_source_url('PerformanceTests/' + metric.test_file_name())
</ins><span class="cx">                     current_test.setdefault('metrics', {})
</span><span class="cx">                     assert metric.name() not in current_test['metrics']
</span><span class="cx">                     current_test['metrics'][metric.name()] = {'current': metric.grouped_iteration_values()}
</span></span></pre></div>
<a id="trunkToolsScriptswebkitpyperformance_testsperftestsrunner_integrationtestpy"></a>
<div class="modfile"><h4>Modified: trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py (159804 => 159805)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py        2013-11-27 05:13:51 UTC (rev 159804)
+++ trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py        2013-11-27 05:33:17 UTC (rev 159805)
</span><span class="lines">@@ -96,6 +96,26 @@
</span><span class="cx">     malloc_results = {'current': [[529000, 511000, 548000, 536000, 521000]] * 4}
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class TestWithSubtestsData:
+    text = &quot;&quot;&quot;subtest:Time -&gt; [1, 2, 3, 4, 5] ms
+:Time -&gt; [1080, 1120, 1095, 1101, 1104] ms
+&quot;&quot;&quot;
+
+    output = &quot;&quot;&quot;Running 1 tests
+Running Parser/test-with-subtests.html (1 of 1)
+RESULT Parser: test-with-subtests: Time= 1100.0 ms
+median= 1101.0 ms, stdev= 13.31402 ms, min= 1080.0 ms, max= 1120.0 ms
+Finished: 0.1 s
+&quot;&quot;&quot;
+
+    results = {'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Parser/test-with-subtests.html',
+        'metrics': {'Time': {'current': [[1080.0, 1120.0, 1095.0, 1101.0, 1104.0]] * 4}},
+        'tests': {
+            'subtest': {
+                'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Parser/test-with-subtests.html',
+                'metrics': {'Time': {'current': [[1.0, 2.0, 3.0, 4.0, 5.0]] * 4}}}}}
+
+
</ins><span class="cx"> class TestDriver:
</span><span class="cx">     def run_test(self, driver_input, stop_when_done):
</span><span class="cx">         text = ''
</span><span class="lines">@@ -117,6 +137,8 @@
</span><span class="cx">             text = SomeParserTestData.text
</span><span class="cx">         elif driver_input.test_name.endswith('memory-test.html'):
</span><span class="cx">             text = MemoryTestData.text
</span><ins>+        elif driver_input.test_name.endswith('test-with-subtests.html'):
+            text = TestWithSubtestsData.text
</ins><span class="cx">         return DriverOutput(text, '', '', '', crash=crash, timeout=timeout)
</span><span class="cx"> 
</span><span class="cx">     def start(self):
</span><span class="lines">@@ -223,6 +245,24 @@
</span><span class="cx">         self.assertEqual(parser_tests['memory-test']['metrics']['JSHeap'], MemoryTestData.js_heap_results)
</span><span class="cx">         self.assertEqual(parser_tests['memory-test']['metrics']['Malloc'], MemoryTestData.malloc_results)
</span><span class="cx"> 
</span><ins>+    def test_run_test_with_subtests(self):
+        runner, port = self.create_runner_and_setup_results_template()
+        runner._timestamp = 123456789
+        port.host.filesystem.write_text_file(runner._base_path + '/Parser/test-with-subtests.html', 'some content')
+
+        output = OutputCapture()
+        output.capture_output()
+        try:
+            unexpected_result_count = runner.run()
+        finally:
+            stdout, stderr, log = output.restore_output()
+
+        self.assertEqual(unexpected_result_count, 0)
+        self.assertEqual(self._normalize_output(log), TestWithSubtestsData.output + '\nMOCK: user.open_url: file://...\n')
+        parser_tests = self._load_output_json(runner)[0]['tests']['Parser']['tests']
+        self.maxDiff = None
+        self.assertEqual(parser_tests['test-with-subtests'], TestWithSubtestsData.results)
+
</ins><span class="cx">     def _test_run_with_json_output(self, runner, filesystem, upload_succeeds=False, results_shown=True, expected_exit_code=0, repeat=1, compare_logs=True):
</span><span class="cx">         filesystem.write_text_file(runner._base_path + '/Parser/some-parser.html', 'some content')
</span><span class="cx">         filesystem.write_text_file(runner._base_path + '/Bindings/event-target-wrapper.html', 'some content')
</span></span></pre>
</div>
</div>

</body>
</html>