<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[174123] trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/174123">174123</a></dd>
<dt>Author</dt> <dd>fpizlo@apple.com</dd>
<dt>Date</dt> <dd>2014-09-30 14:11:57 -0700 (Tue, 30 Sep 2014)</dd>
</dl>

<h3>Log Message</h3>
<pre>It should be fun and easy to run every possible JavaScript benchmark from the command line
https://bugs.webkit.org/show_bug.cgi?id=137245

Reviewed by Oliver Hunt.
        
PerformanceTests:

This adds the scaffolding for running Octane version 2 inside run-jsc-benchmarks.
In the future we should just land Octane2 in this directory, and run-jsc-benchmarks
should be changed to point directly at this directory instead of requiring the
Octane path to be specified in the configuration file.
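
Each wrapper exposes the same three hooks, which run-jsc-benchmarks is expected to call in
order (set up once, time jscRun, then tear down). A minimal sketch of the contract, with
illustrative function names (the concrete setup/run/teardown calls vary per benchmark):

    // Shape shared by every Octane/wrappers/jsc-*.js file in this patch.
    function jscSetUp() {
        setUpBenchmark();    // per-benchmark setup, e.g. setupBox2D()
    }

    function jscTearDown() {
        tearDownBenchmark(); // or drop benchmark globals, e.g. world = null
    }

    function jscRun() {
        runBenchmark();      // the measured work, e.g. runBox2D()
    }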

* Octane: Added.
* Octane/wrappers: Added.
* Octane/wrappers/jsc-box2d.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-boyer.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-closure.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-decrypt.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-deltablue.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-earley.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-encrypt.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-gbemu.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-jquery.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-mandreel.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-navier-stokes.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-pdfjs.js: Added.
(jscSetUp.PdfJS_window.console.log):
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-raytrace.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-regexp.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-richards.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-splay.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-typescript.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-zlib.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):

Tools:

We previously had Tools/Scripts/bencher.  Then we stopped adding things to it because we
weren't sure about the licensing of things like Kraken and Octane.  Various people ended up
having their own private scripts for doing benchmark runs, and didn't share them in the open
source community, because of fears about the shady licensing of the benchmark suites that
they were running. The dominant version of this was &quot;run-jsc-benchmarks&quot;, which has a lot of
excellent power - it can run benchmarks through either jsc, DumpRenderTree, or
WebKitTestRunner; it can run tests on any number of remote machines; and it has inside
knowledge about how to run *a lot* of test suites. Many of those test suites are not public,
but some of them are. The non-public tests are exclusively those that were not made by any
WebKit contributor, but which JSC/WebKit devs found useful for testing.
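
To give a flavor of how such a script drives a jsc run: the old bencher script (deleted
below) emitted a small JavaScript harness that timed warm-up and measured iterations with
preciseTime(), and run-jsc-benchmarks presumably emits something similar. A sketch along
those lines, reusing bencher's timer and its default of 1 warm-up and 3 measured iterations
(the jscSetUp/jscRun/jscTearDown hooks and the label are illustrative, not the script's
actual output):

    function __bencher_curTimeMS() {
        // jsc exposes preciseTime() (in seconds); fall back to Date.now() elsewhere.
        if (typeof preciseTime == 'function')
            return preciseTime() * 1000;
        return Date.now();
    }

    var warmup = 1;  // discarded warm-up iterations
    var inner = 3;   // reported iterations
    jscSetUp();
    for (var i = 0; i &lt; warmup + inner; ++i) {
        var before = __bencher_curTimeMS();
        jscRun();
        var after = __bencher_curTimeMS();
        if (i &gt;= warmup)
            print('some-benchmark: ' + (i - warmup) + ': Time: ' + (after - before));
    }
    jscTearDown();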

This patch fixes this weirdness by releasing run-jsc-benchmarks. Test suites whose licenses
are incompatible with WebKit's (to the extent that they cannot safely be checked into WebKit
svn at all) can still be run by passing their paths via a configuration file. The default
configuration file is ~/.run-jsc-benchmarks. The most important benchmark
suites are Octane version 2 and Kraken version 1.1. We should probably check Octane 2 into
WebKit eventually because it seems that the license is fine. Kraken, on the other hand, will
probably never be checked in because there is no license text anywhere in that benchmark.
A valid ~/.run-jsc-benchmarks file will just be something like:
        
    {
        &quot;OctanePath&quot;: &quot;/path/to/Octane2&quot;,
        &quot;KrakenPath&quot;: &quot;/path/to/Kraken-1.1/tests/kraken-1.1&quot;
    }
        
If your ~/.run-jsc-benchmarks file omits the directory for any particular test suite, then
run-jsc-benchmarks will just gracefully avoid running that test suite.
        
Finally, a word about policy: it is understood that different organizations that do
development on JSC may find themselves having internal benchmarks that they cannot share
because of weird licensing. It happens - usually because the organization doing JSC
development found some test in the wild that is owned by someone else and therefore cannot
be shared. So, we should consider it acceptable to write patches against run-jsc-benchmarks
that add support for some new kind of benchmark suite even if the suite is not made public
as part of the same patch - so long as the patch isn't too invasive. An example of
non-invasiveness is the DSPJS suite, which is implemented using some new classes (like
DSPJSAmmoJSRegularBenchmark) and some calls to otherwise reusable functions (like
emitSelfContainedBenchRunCode). It is obviously super helpful if a benchmark suite can be
completely open-sourced and committed to the WebKit repo - but the reality is that this
can't always be done safely.

* Scripts/bencher: Removed.
* Scripts/run-jsc-benchmarks: Added.</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkPerformanceTestsChangeLog">trunk/PerformanceTests/ChangeLog</a></li>
<li><a href="#trunkToolsChangeLog">trunk/Tools/ChangeLog</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li>trunk/PerformanceTests/Octane/</li>
<li>trunk/PerformanceTests/Octane/wrappers/</li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscbox2djs">trunk/PerformanceTests/Octane/wrappers/jsc-box2d.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscboyerjs">trunk/PerformanceTests/Octane/wrappers/jsc-boyer.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscclosurejs">trunk/PerformanceTests/Octane/wrappers/jsc-closure.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscdecryptjs">trunk/PerformanceTests/Octane/wrappers/jsc-decrypt.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscdeltabluejs">trunk/PerformanceTests/Octane/wrappers/jsc-deltablue.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscearleyjs">trunk/PerformanceTests/Octane/wrappers/jsc-earley.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscencryptjs">trunk/PerformanceTests/Octane/wrappers/jsc-encrypt.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscgbemujs">trunk/PerformanceTests/Octane/wrappers/jsc-gbemu.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscjqueryjs">trunk/PerformanceTests/Octane/wrappers/jsc-jquery.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscmandreeljs">trunk/PerformanceTests/Octane/wrappers/jsc-mandreel.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscnavierstokesjs">trunk/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscpdfjsjs">trunk/PerformanceTests/Octane/wrappers/jsc-pdfjs.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscraytracejs">trunk/PerformanceTests/Octane/wrappers/jsc-raytrace.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscregexpjs">trunk/PerformanceTests/Octane/wrappers/jsc-regexp.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscrichardsjs">trunk/PerformanceTests/Octane/wrappers/jsc-richards.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjscsplayjs">trunk/PerformanceTests/Octane/wrappers/jsc-splay.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjsctypescriptjs">trunk/PerformanceTests/Octane/wrappers/jsc-typescript.js</a></li>
<li><a href="#trunkPerformanceTestsOctanewrappersjsczlibjs">trunk/PerformanceTests/Octane/wrappers/jsc-zlib.js</a></li>
<li><a href="#trunkToolsScriptsrunjscbenchmarks">trunk/Tools/Scripts/run-jsc-benchmarks</a></li>
</ul>

<h3>Removed Paths</h3>
<ul>
<li><a href="#trunkToolsScriptsbencher">trunk/Tools/Scripts/bencher</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkPerformanceTestsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/PerformanceTests/ChangeLog (174122 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/ChangeLog        2014-09-30 21:05:08 UTC (rev 174122)
+++ trunk/PerformanceTests/ChangeLog        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -1,3 +1,91 @@
</span><ins>+2014-09-29  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        It should be fun and easy to run every possible JavaScript benchmark from the command line
+        https://bugs.webkit.org/show_bug.cgi?id=137245
+
+        Reviewed by Oliver Hunt.
+        
+        This adds the scaffolding for running Octane version 2 inside run-jsc-benchmarks.
+        In the future we should just land Octane2 in this directory, and run-jsc-benchmarks
+        should be changed to point directly at this directory instead of requiring the
+        Octane path to be specified in the configuration file.
+
+        * Octane: Added.
+        * Octane/wrappers: Added.
+        * Octane/wrappers/jsc-box2d.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-boyer.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-closure.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-decrypt.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-deltablue.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-earley.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-encrypt.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-gbemu.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-jquery.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-mandreel.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-navier-stokes.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-pdfjs.js: Added.
+        (jscSetUp.PdfJS_window.console.log):
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-raytrace.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-regexp.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-richards.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-splay.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-typescript.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-zlib.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+
</ins><span class="cx"> 2014-09-28  Sungmann Cho  &lt;sungmann.cho@navercorp.com&gt;
</span><span class="cx"> 
</span><span class="cx">         Fix some minor typos: psuedo -&gt; pseudo
</span></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscbox2djs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-box2d.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-box2d.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-box2d.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupBox2D();
+}
+
+function jscTearDown() {
+    world = null;
+}
+
+function jscRun() {
+    runBox2D();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscboyerjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-boyer.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-boyer.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-boyer.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,6 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    BgL_nboyerzd2benchmarkzd2();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscclosurejs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-closure.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-closure.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-closure.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupCodeLoad()
+}
+
+function jscTearDown() {
+    tearDownCodeLoad();
+}
+
+function jscRun() {
+    runCodeLoadClosure();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscdecryptjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-decrypt.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-decrypt.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-decrypt.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,9 @@
</span><ins>+var encrypted = &quot;0599c0900f04d914aa11176b75f1ff8040b99a4cd008dd526d41fff88acc006a0eab4a6ec8d390b300c169c6dc2c23aa7767ba83f336b7f8eaade253c22eb7c78c98881d5e89b2827592f73baea3e32f10b2b1fba83dda854c9c6a96467fca0d1e2f5aa4d595b62f65b2eb258aaef9e73a407511c24085df025de6bbbfb32764&quot;;
+
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    decrypt();
+}
+
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscdeltabluejs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-deltablue.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-deltablue.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-deltablue.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,6 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    deltaBlue();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscearleyjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-earley.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-earley.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-earley.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,6 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    BgL_earleyzd2benchmarkzd2();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscencryptjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-encrypt.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-encrypt.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-encrypt.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,7 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    encrypt();
+}
+
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscgbemujs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-gbemu.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-gbemu.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-gbemu.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupGameboy();
+}
+
+function jscTearDown() {
+  decoded_gameboy_rom = null;
+}
+
+function jscRun() {
+    runGameboy();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscjqueryjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-jquery.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-jquery.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-jquery.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupCodeLoad()
+}
+
+function jscTearDown() {
+    tearDownCodeLoad();
+}
+
+function jscRun() {
+    runCodeLoadJQuery();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscmandreeljs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-mandreel.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-mandreel.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-mandreel.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupMandreel();
+}
+
+function jscTearDown() {
+    tearDownMandreel();
+}
+
+function jscRun() {
+    runMandreel();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscnavierstokesjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,14 @@
</span><ins>+function jscSetUp()
+{
+    setupNavierStokes();
+}
+
+function jscTearDown()
+{
+    tearDownNavierStokes();
+}
+
+function jscRun()
+{
+    runNavierStokes();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscpdfjsjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-pdfjs.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-pdfjs.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-pdfjs.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,26 @@
</span><ins>+function jscSetUp() {
+    canvas_logs = [];
+    PdfJS_window.console = {log:function(){}}
+    PdfJS_window.__timeouts__ = [];
+    PdfJS_window.__resources__ = {};
+    setupPdfJS();
+}
+
+function jscTearDown() {
+  for (var i = 0; i &lt; canvas_logs.length; ++i) {
+    var log_length = canvas_logs[i].length;
+    var log_hash = hash(canvas_logs[i].join(&quot; &quot;));
+    var expected_length = 36788;
+    var expected_hash = 939524096;
+    if (log_length !== expected_length || log_hash !== expected_hash) {
+      var message = &quot;PdfJS produced incorrect output: &quot; +
+          &quot;expected &quot; + expected_length + &quot; &quot; + expected_hash + &quot;, &quot; +
+          &quot;got &quot; + log_length + &quot; &quot; + log_hash;
+      throw message;
+    }
+  }
+}
+
+function jscRun() {
+    runPdfJS();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscraytracejs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-raytrace.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-raytrace.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-raytrace.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,6 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    renderScene();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscregexpjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-regexp.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-regexp.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-regexp.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,13 @@
</span><ins>+function jscSetUp() {
+    BenchmarkSuite.ResetRNG();
+    RegExpSetup();
+}
+
+function jscTearDown() {
+    RegExpTearDown();
+}
+
+function jscRun() {
+    RegExpRun();
+}
+
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscrichardsjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-richards.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-richards.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-richards.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,7 @@
</span><ins>+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    runRichards();
+}
+
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjscsplayjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-splay.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-splay.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-splay.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,12 @@
</span><ins>+function jscSetUp() {
+    SplaySetup();
+}
+
+function jscTearDown() {
+    SplayTearDown();
+}
+
+function jscRun() {
+    SplayRun();
+}
+
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjsctypescriptjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-typescript.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-typescript.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-typescript.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,11 @@
</span><ins>+function jscSetUp() {
+    setupTypescript();
+}
+
+function jscTearDown() {
+    tearDownTypescript();
+}
+
+function jscRun() {
+    runTypescript();
+}
</ins></span></pre></div>
<a id="trunkPerformanceTestsOctanewrappersjsczlibjs"></a>
<div class="addfile"><h4>Added: trunk/PerformanceTests/Octane/wrappers/jsc-zlib.js (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/PerformanceTests/Octane/wrappers/jsc-zlib.js                                (rev 0)
+++ trunk/PerformanceTests/Octane/wrappers/jsc-zlib.js        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,12 @@
</span><ins>+var read;
+
+function jscSetUp() {
+}
+
+function jscTearDown() {
+    tearDownZlib();
+}
+
+function jscRun() {
+    runZlib();
+}
</ins></span></pre></div>
<a id="trunkToolsChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Tools/ChangeLog (174122 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/ChangeLog        2014-09-30 21:05:08 UTC (rev 174122)
+++ trunk/Tools/ChangeLog        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -1,3 +1,54 @@
</span><ins>+2014-09-29  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        It should be fun and easy to run every possible JavaScript benchmark from the command line
+        https://bugs.webkit.org/show_bug.cgi?id=137245
+
+        Reviewed by Oliver Hunt.
+        
+        We previously had Tools/Scripts/bencher.  Then we stopped adding things to it because we
+        weren't sure about the licensing of things like Kraken and Octane.  Various people ended up
+        having their own private scripts for doing benchmark runs, and didn't share them in the open
+        source community, because of fears about the shady licensing of the benchmark suites that
+        they were running. The dominant version of this was &quot;run-jsc-benchmarks&quot;, which has a lot of
+        excellent power - it can run benchmarks through either jsc, DumpRenderTree, or
+        WebKitTestRunner; it can run tests on any number of remote machines; and it has inside
+        knowledge about how to run *a lot* of test suites. Many of those test suites are not public,
+        but some of them are. The non-public tests are exclusively those that were not made by any
+        WebKit contributor, but which JSC/WebKit devs found useful for testing.
+
+        This patch fixes this weirdness by releasing run-jsc-benchmarks. Test suites whose licenses
+        are incompatible with WebKit's (to the extent that they cannot safely be checked into WebKit
+        svn at all) can still be run by passing their paths via a configuration file. The default
+        configuration file is ~/.run-jsc-benchmarks. The most important benchmark
+        suites are Octane version 2 and Kraken version 1.1. We should probably check Octane 2 into
+        WebKit eventually because it seems that the license is fine. Kraken, on the other hand, will
+        probably never be checked in because there is no license text anywhere in that benchmark.
+        A valid ~/.run-jsc-benchmarks file will just be something like:
+        
+            {
+                &quot;OctanePath&quot;: &quot;/path/to/Octane2&quot;,
+                &quot;KrakenPath&quot;: &quot;/path/to/Kraken-1.1/tests/kraken-1.1&quot;
+            }
+        
+        If your ~/.run-jsc-benchmarks file omits the directory for any particular test suite, then
+        run-jsc-benchmarks will just gracefully avoid running that test suite.
+        
+        Finally, a word about policy: it is understood that different organizations that do
+        development on JSC may find themselves having internal benchmarks that they cannot share
+        because of weird licensing. It happens - usually because the organization doing JSC
+        development found some test in the wild that is owned by someone else and therefore cannot
+        be shared. So, we should consider it acceptable to write patches against run-jsc-benchmarks
+        that add support for some new kind of benchmark suite even if the suite is not made public
+        as part of the same patch - so long as the patch isn't too invasive. An example of
+        non-invasiveness is the DSPJS suite, which is implemented using some new classes (like
+        DSPJSAmmoJSRegularBenchmark) and some calls to otherwise reusable functions (like
+        emitSelfContainedBenchRunCode). It is obviously super helpful if a benchmark suite can be
+        completely open-sourced and committed to the WebKit repo - but the reality is that this
+        can't always be done safely.
+
+        * Scripts/bencher: Removed.
+        * Scripts/run-jsc-benchmarks: Added.
+
</ins><span class="cx"> 2014-09-30  Roger Fong  &lt;roger_fong@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         [Windows] Back to 2 child processes for NRWT on Windows.
</span></span></pre></div>
<a id="trunkToolsScriptsbencher"></a>
<div class="delfile"><h4>Deleted: trunk/Tools/Scripts/bencher (174122 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/bencher        2014-09-30 21:05:08 UTC (rev 174122)
+++ trunk/Tools/Scripts/bencher        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -1,2101 +0,0 @@
</span><del>-#!/usr/bin/env ruby
-
-# Copyright (C) 2011 Apple Inc. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-# 1. Redistributions of source code must retain the above copyright
-#    notice, this list of conditions and the following disclaimer.
-# 2. Redistributions in binary form must reproduce the above copyright
-#    notice, this list of conditions and the following disclaimer in the
-#    documentation and/or other materials provided with the distribution.
-#
-# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
-# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
-# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
-# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
-# THE POSSIBILITY OF SUCH DAMAGE.
-
-require 'rubygems'
-
-require 'getoptlong'
-require 'pathname'
-require 'tempfile'
-require 'socket'
-
-begin
-  require 'json'
-rescue LoadError =&gt; e
-  $stderr.puts &quot;It does not appear that you have the 'json' package installed.  Try running 'sudo gem install json'.&quot;
-  exit 1
-end
-
-# Configuration
-
-CONFIGURATION_FLNM = ENV[&quot;HOME&quot;]+&quot;/.bencher&quot;
-
-unless FileTest.exist? CONFIGURATION_FLNM
-  $stderr.puts &quot;Error: no configuration file at ~/.bencher.&quot;
-  $stderr.puts &quot;This file should contain paths to SunSpider, V8, and Kraken, as well as a&quot;
-  $stderr.puts &quot;temporary directory that bencher can use for its remote mode. It should be&quot;
-  $stderr.puts &quot;formatted in JSON.  For example:&quot;
-  $stderr.puts &quot;{&quot;
-  $stderr.puts &quot;    \&quot;sunSpiderPath\&quot;: \&quot;/Volumes/Data/pizlo/OpenSource/PerformanceTests/SunSpider/tests/sunspider-1.0\&quot;,&quot;
-  $stderr.puts &quot;    \&quot;v8Path\&quot;: \&quot;/Volumes/Data/pizlo/OpenSource/PerformanceTests/SunSpider/tests/v8-v6\&quot;,&quot;
-  $stderr.puts &quot;    \&quot;krakenPath\&quot;: \&quot;/Volumes/Data/pizlo/kraken/kraken-e119421cb325/tests/kraken-1.1\&quot;,&quot;
-  $stderr.puts &quot;    \&quot;tempPath\&quot;: \&quot;/Volumes/Data/pizlo/bencher/temp\&quot;&quot;
-  $stderr.puts &quot;}&quot;
-  exit 1
-end
-
-CONFIGURATION = JSON.parse(File::read(CONFIGURATION_FLNM))
-
-SUNSPIDER_PATH = CONFIGURATION[&quot;sunSpiderPath&quot;]
-V8_PATH = CONFIGURATION[&quot;v8Path&quot;]
-KRAKEN_PATH = CONFIGURATION[&quot;krakenPath&quot;]
-TEMP_PATH = CONFIGURATION[&quot;tempPath&quot;]
-BENCH_DATA_PATH = TEMP_PATH + &quot;/benchdata&quot;
-
-IBR_LOOKUP=[0.00615583, 0.0975, 0.22852, 0.341628, 0.430741, 0.500526, 0.555933, 
-            0.600706, 0.637513, 0.668244, 0.694254, 0.716537, 0.735827, 0.752684, 
-            0.767535, 0.780716, 0.792492, 0.803074, 0.812634, 0.821313, 0.829227, 
-            0.836472, 0.843129, 0.849267, 0.854943, 0.860209, 0.865107, 0.869674, 
-            0.873942, 0.877941, 0.881693, 0.885223, 0.888548, 0.891686, 0.894652, 
-            0.897461, 0.900124, 0.902652, 0.905056, 0.907343, 0.909524, 0.911604, 
-            0.91359, 0.91549, 0.917308, 0.919049, 0.920718, 0.92232, 0.923859, 0.925338, 
-            0.926761, 0.92813, 0.929449, 0.930721, 0.931948, 0.933132, 0.934275, 0.93538, 
-            0.936449, 0.937483, 0.938483, 0.939452, 0.940392, 0.941302, 0.942185, 
-            0.943042, 0.943874, 0.944682, 0.945467, 0.94623, 0.946972, 0.947694, 
-            0.948396, 0.94908, 0.949746, 0.950395, 0.951027, 0.951643, 0.952244, 
-            0.952831, 0.953403, 0.953961, 0.954506, 0.955039, 0.955559, 0.956067, 
-            0.956563, 0.957049, 0.957524, 0.957988, 0.958443, 0.958887, 0.959323, 
-            0.959749, 0.960166, 0.960575, 0.960975, 0.961368, 0.961752, 0.962129, 
-            0.962499, 0.962861, 0.963217, 0.963566, 0.963908, 0.964244, 0.964574, 
-            0.964897, 0.965215, 0.965527, 0.965834, 0.966135, 0.966431, 0.966722, 
-            0.967007, 0.967288, 0.967564, 0.967836, 0.968103, 0.968366, 0.968624, 
-            0.968878, 0.969128, 0.969374, 0.969617, 0.969855, 0.97009, 0.970321, 
-            0.970548, 0.970772, 0.970993, 0.97121, 0.971425, 0.971636, 0.971843, 
-            0.972048, 0.97225, 0.972449, 0.972645, 0.972839, 0.973029, 0.973217, 
-            0.973403, 0.973586, 0.973766, 0.973944, 0.97412, 0.974293, 0.974464, 
-            0.974632, 0.974799, 0.974963, 0.975125, 0.975285, 0.975443, 0.975599, 
-            0.975753, 0.975905, 0.976055, 0.976204, 0.97635, 0.976495, 0.976638, 
-            0.976779, 0.976918, 0.977056, 0.977193, 0.977327, 0.97746, 0.977592, 
-            0.977722, 0.97785, 0.977977, 0.978103, 0.978227, 0.978349, 0.978471, 
-            0.978591, 0.978709, 0.978827, 0.978943, 0.979058, 0.979171, 0.979283, 
-            0.979395, 0.979504, 0.979613, 0.979721, 0.979827, 0.979933, 0.980037, 
-            0.98014, 0.980242, 0.980343, 0.980443, 0.980543, 0.980641, 0.980738, 
-            0.980834, 0.980929, 0.981023, 0.981116, 0.981209, 0.9813, 0.981391, 0.981481, 
-            0.981569, 0.981657, 0.981745, 0.981831, 0.981916, 0.982001, 0.982085, 
-            0.982168, 0.982251, 0.982332, 0.982413, 0.982493, 0.982573, 0.982651, 
-            0.982729, 0.982807, 0.982883, 0.982959, 0.983034, 0.983109, 0.983183, 
-            0.983256, 0.983329, 0.983401, 0.983472, 0.983543, 0.983613, 0.983683, 
-            0.983752, 0.98382, 0.983888, 0.983956, 0.984022, 0.984089, 0.984154, 
-            0.984219, 0.984284, 0.984348, 0.984411, 0.984474, 0.984537, 0.984599, 
-            0.98466, 0.984721, 0.984782, 0.984842, 0.984902, 0.984961, 0.985019, 
-            0.985077, 0.985135, 0.985193, 0.985249, 0.985306, 0.985362, 0.985417, 
-            0.985472, 0.985527, 0.985582, 0.985635, 0.985689, 0.985742, 0.985795, 
-            0.985847, 0.985899, 0.985951, 0.986002, 0.986053, 0.986103, 0.986153, 
-            0.986203, 0.986252, 0.986301, 0.98635, 0.986398, 0.986446, 0.986494, 
-            0.986541, 0.986588, 0.986635, 0.986681, 0.986727, 0.986773, 0.986818, 
-            0.986863, 0.986908, 0.986953, 0.986997, 0.987041, 0.987084, 0.987128, 
-            0.987171, 0.987213, 0.987256, 0.987298, 0.98734, 0.987381, 0.987423, 
-            0.987464, 0.987504, 0.987545, 0.987585, 0.987625, 0.987665, 0.987704, 
-            0.987744, 0.987783, 0.987821, 0.98786, 0.987898, 0.987936, 0.987974, 
-            0.988011, 0.988049, 0.988086, 0.988123, 0.988159, 0.988196, 0.988232, 
-            0.988268, 0.988303, 0.988339, 0.988374, 0.988409, 0.988444, 0.988479, 
-            0.988513, 0.988547, 0.988582, 0.988615, 0.988649, 0.988682, 0.988716, 
-            0.988749, 0.988782, 0.988814, 0.988847, 0.988879, 0.988911, 0.988943, 
-            0.988975, 0.989006, 0.989038, 0.989069, 0.9891, 0.989131, 0.989161, 0.989192, 
-            0.989222, 0.989252, 0.989282, 0.989312, 0.989342, 0.989371, 0.989401, 
-            0.98943, 0.989459, 0.989488, 0.989516, 0.989545, 0.989573, 0.989602, 0.98963, 
-            0.989658, 0.989685, 0.989713, 0.98974, 0.989768, 0.989795, 0.989822, 
-            0.989849, 0.989876, 0.989902, 0.989929, 0.989955, 0.989981, 0.990007, 
-            0.990033, 0.990059, 0.990085, 0.99011, 0.990136, 0.990161, 0.990186, 
-            0.990211, 0.990236, 0.990261, 0.990285, 0.99031, 0.990334, 0.990358, 
-            0.990383, 0.990407, 0.99043, 0.990454, 0.990478, 0.990501, 0.990525, 
-            0.990548, 0.990571, 0.990594, 0.990617, 0.99064, 0.990663, 0.990686, 
-            0.990708, 0.990731, 0.990753, 0.990775, 0.990797, 0.990819, 0.990841, 
-            0.990863, 0.990885, 0.990906, 0.990928, 0.990949, 0.99097, 0.990991, 
-            0.991013, 0.991034, 0.991054, 0.991075, 0.991096, 0.991116, 0.991137, 
-            0.991157, 0.991178, 0.991198, 0.991218, 0.991238, 0.991258, 0.991278, 
-            0.991298, 0.991317, 0.991337, 0.991356, 0.991376, 0.991395, 0.991414, 
-            0.991433, 0.991452, 0.991471, 0.99149, 0.991509, 0.991528, 0.991547, 
-            0.991565, 0.991584, 0.991602, 0.99162, 0.991639, 0.991657, 0.991675, 
-            0.991693, 0.991711, 0.991729, 0.991746, 0.991764, 0.991782, 0.991799, 
-            0.991817, 0.991834, 0.991851, 0.991869, 0.991886, 0.991903, 0.99192, 
-            0.991937, 0.991954, 0.991971, 0.991987, 0.992004, 0.992021, 0.992037, 
-            0.992054, 0.99207, 0.992086, 0.992103, 0.992119, 0.992135, 0.992151, 
-            0.992167, 0.992183, 0.992199, 0.992215, 0.99223, 0.992246, 0.992262, 
-            0.992277, 0.992293, 0.992308, 0.992324, 0.992339, 0.992354, 0.992369, 
-            0.992384, 0.9924, 0.992415, 0.992429, 0.992444, 0.992459, 0.992474, 0.992489, 
-            0.992503, 0.992518, 0.992533, 0.992547, 0.992561, 0.992576, 0.99259, 
-            0.992604, 0.992619, 0.992633, 0.992647, 0.992661, 0.992675, 0.992689, 
-            0.992703, 0.992717, 0.99273, 0.992744, 0.992758, 0.992771, 0.992785, 
-            0.992798, 0.992812, 0.992825, 0.992839, 0.992852, 0.992865, 0.992879, 
-            0.992892, 0.992905, 0.992918, 0.992931, 0.992944, 0.992957, 0.99297, 
-            0.992983, 0.992995, 0.993008, 0.993021, 0.993034, 0.993046, 0.993059, 
-            0.993071, 0.993084, 0.993096, 0.993109, 0.993121, 0.993133, 0.993145, 
-            0.993158, 0.99317, 0.993182, 0.993194, 0.993206, 0.993218, 0.99323, 0.993242, 
-            0.993254, 0.993266, 0.993277, 0.993289, 0.993301, 0.993312, 0.993324, 
-            0.993336, 0.993347, 0.993359, 0.99337, 0.993382, 0.993393, 0.993404, 
-            0.993416, 0.993427, 0.993438, 0.993449, 0.99346, 0.993472, 0.993483, 
-            0.993494, 0.993505, 0.993516, 0.993527, 0.993538, 0.993548, 0.993559, 
-            0.99357, 0.993581, 0.993591, 0.993602, 0.993613, 0.993623, 0.993634, 
-            0.993644, 0.993655, 0.993665, 0.993676, 0.993686, 0.993697, 0.993707, 
-            0.993717, 0.993727, 0.993738, 0.993748, 0.993758, 0.993768, 0.993778, 
-            0.993788, 0.993798, 0.993808, 0.993818, 0.993828, 0.993838, 0.993848, 
-            0.993858, 0.993868, 0.993877, 0.993887, 0.993897, 0.993907, 0.993916, 
-            0.993926, 0.993935, 0.993945, 0.993954, 0.993964, 0.993973, 0.993983, 
-            0.993992, 0.994002, 0.994011, 0.99402, 0.99403, 0.994039, 0.994048, 0.994057, 
-            0.994067, 0.994076, 0.994085, 0.994094, 0.994103, 0.994112, 0.994121, 
-            0.99413, 0.994139, 0.994148, 0.994157, 0.994166, 0.994175, 0.994183, 
-            0.994192, 0.994201, 0.99421, 0.994218, 0.994227, 0.994236, 0.994244, 
-            0.994253, 0.994262, 0.99427, 0.994279, 0.994287, 0.994296, 0.994304, 
-            0.994313, 0.994321, 0.994329, 0.994338, 0.994346, 0.994354, 0.994363, 
-            0.994371, 0.994379, 0.994387, 0.994395, 0.994404, 0.994412, 0.99442, 
-            0.994428, 0.994436, 0.994444, 0.994452, 0.99446, 0.994468, 0.994476, 
-            0.994484, 0.994492, 0.9945, 0.994508, 0.994516, 0.994523, 0.994531, 0.994539, 
-            0.994547, 0.994554, 0.994562, 0.99457, 0.994577, 0.994585, 0.994593, 0.9946, 
-            0.994608, 0.994615, 0.994623, 0.994631, 0.994638, 0.994645, 0.994653, 
-            0.99466, 0.994668, 0.994675, 0.994683, 0.99469, 0.994697, 0.994705, 0.994712, 
-            0.994719, 0.994726, 0.994734, 0.994741, 0.994748, 0.994755, 0.994762, 
-            0.994769, 0.994777, 0.994784, 0.994791, 0.994798, 0.994805, 0.994812, 
-            0.994819, 0.994826, 0.994833, 0.99484, 0.994847, 0.994854, 0.99486, 0.994867, 
-            0.994874, 0.994881, 0.994888, 0.994895, 0.994901, 0.994908, 0.994915, 
-            0.994922, 0.994928, 0.994935, 0.994942, 0.994948, 0.994955, 0.994962, 
-            0.994968, 0.994975, 0.994981, 0.994988, 0.994994, 0.995001, 0.995007, 
-            0.995014, 0.99502, 0.995027, 0.995033, 0.99504, 0.995046, 0.995052, 0.995059, 
-            0.995065, 0.995071, 0.995078, 0.995084, 0.99509, 0.995097, 0.995103, 
-            0.995109, 0.995115, 0.995121, 0.995128, 0.995134, 0.99514, 0.995146, 
-            0.995152, 0.995158, 0.995164, 0.995171, 0.995177, 0.995183, 0.995189, 
-            0.995195, 0.995201, 0.995207, 0.995213, 0.995219, 0.995225, 0.995231, 
-            0.995236, 0.995242, 0.995248, 0.995254, 0.99526, 0.995266, 0.995272, 
-            0.995277, 0.995283, 0.995289, 0.995295, 0.995301, 0.995306, 0.995312, 
-            0.995318, 0.995323, 0.995329, 0.995335, 0.99534, 0.995346, 0.995352, 
-            0.995357, 0.995363, 0.995369, 0.995374, 0.99538, 0.995385, 0.995391, 
-            0.995396, 0.995402, 0.995407, 0.995413, 0.995418, 0.995424, 0.995429, 
-            0.995435, 0.99544, 0.995445, 0.995451, 0.995456, 0.995462, 0.995467, 
-            0.995472, 0.995478, 0.995483, 0.995488, 0.995493, 0.995499, 0.995504, 
-            0.995509, 0.995515, 0.99552, 0.995525, 0.99553, 0.995535, 0.995541, 0.995546, 
-            0.995551, 0.995556, 0.995561, 0.995566, 0.995571, 0.995577, 0.995582, 
-            0.995587, 0.995592, 0.995597, 0.995602, 0.995607, 0.995612, 0.995617, 
-            0.995622, 0.995627, 0.995632, 0.995637, 0.995642, 0.995647, 0.995652, 
-            0.995657, 0.995661, 0.995666, 0.995671, 0.995676, 0.995681, 0.995686, 
-            0.995691, 0.995695, 0.9957, 0.995705, 0.99571, 0.995715, 0.995719, 0.995724, 
-            0.995729, 0.995734, 0.995738, 0.995743, 0.995748, 0.995753, 0.995757, 
-            0.995762, 0.995767, 0.995771, 0.995776, 0.995781, 0.995785, 0.99579, 
-            0.995794, 0.995799, 0.995804, 0.995808, 0.995813, 0.995817, 0.995822, 
-            0.995826, 0.995831, 0.995835, 0.99584, 0.995844, 0.995849, 0.995853, 
-            0.995858, 0.995862, 0.995867, 0.995871, 0.995876, 0.99588, 0.995885, 
-            0.995889, 0.995893, 0.995898, 0.995902, 0.995906, 0.995911, 0.995915, 
-            0.99592, 0.995924, 0.995928, 0.995932, 0.995937, 0.995941, 0.995945, 0.99595, 
-            0.995954, 0.995958, 0.995962, 0.995967, 0.995971, 0.995975, 0.995979, 
-            0.995984, 0.995988, 0.995992, 0.995996, 0.996, 0.996004, 0.996009, 0.996013, 
-            0.996017, 0.996021, 0.996025, 0.996029, 0.996033, 0.996037, 0.996041, 
-            0.996046, 0.99605, 0.996054, 0.996058, 0.996062, 0.996066, 0.99607, 0.996074, 
-            0.996078, 0.996082, 0.996086, 0.99609, 0.996094, 0.996098, 0.996102, 
-            0.996106, 0.99611, 0.996114, 0.996117, 0.996121, 0.996125, 0.996129, 
-            0.996133, 0.996137, 0.996141, 0.996145, 0.996149, 0.996152, 0.996156, 
-            0.99616, 0.996164]
-
-# Run-time configuration parameters (can be set with command-line options)
-
-$rerun=1
-$inner=3
-$warmup=1
-$outer=4
-$includeSunSpider=true
-$includeV8=true
-$includeKraken=true
-$measureGC=false
-$benchmarkPattern=nil
-$verbosity=0
-$timeMode=:preciseTime
-$forceVMKind=nil
-$brief=false
-$silent=false
-$remoteHosts=[]
-$alsoLocal=false
-$sshOptions=[]
-$vms = []
-$needToCopyVMs = false
-$dontCopyVMs = false
-
-$prepare = true
-$run = true
-$analyze = []
-
-# Helpful functions and classes
-
-def smallUsage
-  puts &quot;Use the --help option to get basic usage information.&quot;
-  exit 1
-end
-
-def usage
-  puts &quot;bencher [options] &lt;vm1&gt; [&lt;vm2&gt; ...]&quot;
-  puts
-  puts &quot;Runs one or more JavaScript runtimes against SunSpider, V8, and/or Kraken&quot;
-  puts &quot;benchmarks, and reports detailed statistics.  What makes bencher special is&quot;
-  puts &quot;that each benchmark/VM configuration is run in a single VM invocation, and&quot;
-  puts &quot;the invocations are run in random order.  This minimizes systematics due to&quot;
-  puts &quot;one benchmark polluting the running time of another.  The fine-grained&quot;
-  puts &quot;interleaving of VM invocations further minimizes systematics due to changes in&quot;
-  puts &quot;the performance or behavior of your machine.&quot;
-  puts 
-  puts &quot;Bencher is highly configurable.  You can compare as many VMs as you like.  You&quot;
-  puts &quot;can change the amount of warm-up iterations, number of iterations executed per&quot;
-  puts &quot;VM invocation, and the number of VM invocations per benchmark.  By default,&quot;
-  puts &quot;SunSpider, V8, and Kraken are all run; but you can run any combination of these&quot;
-  puts &quot;suites.&quot;
-  puts
-  puts &quot;The &lt;vm&gt; should be either a path to a JavaScript runtime executable (such as&quot;
-  puts &quot;jsc), or a string of the form &lt;name&gt;:&lt;path&gt;, where the &lt;path&gt; is the path to&quot;
-  puts &quot;the executable and &lt;name&gt; is the name that you would like to give the&quot;
-  puts &quot;configuration for the purpose of reporting.  If no name is given, a generic name&quot;
-  puts &quot;of the form Conf#&lt;n&gt; will be ascribed to the configuration automatically.&quot;
-  puts
-  puts &quot;Options:&quot;
-  puts &quot;--rerun &lt;n&gt;          Set the number of iterations of the benchmark that&quot;
-  puts &quot;                     contribute to the measured run time.  Default is #{$rerun}.&quot;
-  puts &quot;--inner &lt;n&gt;          Set the number of inner (per-runtime-invocation)&quot;
-  puts &quot;                     iterations.  Default is #{$inner}.&quot;
-  puts &quot;--outer &lt;n&gt;          Set the number of runtime invocations for each benchmark.&quot;
-  puts &quot;                     Default is #{$outer}.&quot;
-  puts &quot;--warmup &lt;n&gt;         Set the number of warm-up runs per invocation.  Default&quot;
-  puts &quot;                     is #{$warmup}.&quot;
-  puts &quot;--timing-mode        Set the way that bencher measures time.  Possible values&quot;
-  puts &quot;                     are 'preciseTime' and 'date'.  Default is 'preciseTime'.&quot;
-  puts &quot;--force-vm-kind      Turn off auto-detection of VM kind, and assume that it is&quot;
-  puts &quot;                     the one specified.  Valid arguments are 'jsc' or&quot;
-  puts &quot;                     'DumpRenderTree'.&quot;
-  puts &quot;--force-vm-copy      Force VM builds to be copied to bencher's working directory.&quot;
-  puts &quot;                     This may reduce pathologies resulting from path names.&quot;
-  puts &quot;--dont-copy-vms      Don't copy VMs even when doing a remote benchmarking run;&quot;
-  puts &quot;                     instead assume that they are already there.&quot;
-  puts &quot;--v8-only            Only run V8.&quot;
-  puts &quot;--sunspider-only     Only run SunSpider.&quot;
-  puts &quot;--kraken-only        Only run Kraken.&quot;
-  puts &quot;--exclude-v8         Exclude V8 (only run SunSpider and Kraken).&quot;
-  puts &quot;--exclude-sunspider  Exclude SunSpider (only run V8 and Kraken).&quot;
-  puts &quot;--exclude-kraken     Exclude Kraken (only run SunSpider and V8).&quot;
-  puts &quot;--benchmarks         Only run benchmarks matching the given regular expression.&quot;
-  puts &quot;--measure-gc         Turn off manual calls to gc(), so that GC time is measured.&quot;
-  puts &quot;                     Works best with large values of --inner.  You can also say&quot;
-  puts &quot;                     --measure-gc &lt;conf&gt;, which turns this on for one&quot;
-  puts &quot;                     configuration only.&quot;
-  puts &quot;--verbose or -v      Print more stuff.&quot;
-  puts &quot;--brief              Print only the final result for each VM.&quot;
-  puts &quot;--silent             Don't print progress. This might slightly reduce some&quot;
-  puts &quot;                     performance perturbation.&quot;
-  puts &quot;--remote &lt;sshhosts&gt;  Perform performance measurements remotely, on the given&quot;
-  puts &quot;                     SSH host(s). Easiest way to use this is to specify the SSH&quot;
-  puts &quot;                     user@host string. However, you can also supply a comma-&quot;
-  puts &quot;                     separated list of SSH hosts. Alternatively, you can use this&quot;
-  puts &quot;                     option multiple times to specify multiple hosts. This&quot;
-  puts &quot;                     automatically copies the WebKit release builds of the VMs&quot;
-  puts &quot;                     you specified to all of the hosts.&quot;
-  puts &quot;--ssh-options        Pass additional options to SSH.&quot;
-  puts &quot;--local              Also do a local benchmark run even when doing --remote.&quot;
-  puts &quot;--prepare-only       Only prepare the bencher runscript (a shell script that&quot;
-  puts &quot;                     invokes the VMs to run benchmarks) but don't run it.&quot;
-  puts &quot;--analyze            Only read the output of the runscript but don't do anything&quot;
-  puts &quot;                     else. This requires passing the same arguments to bencher&quot;
-  puts &quot;                     that you passed when running --prepare-only.&quot;
-  puts &quot;--help or -h         Display this message.&quot;
-  puts
-  puts &quot;Example:&quot;
-  puts &quot;bencher TipOfTree:/Volumes/Data/pizlo/OpenSource/WebKitBuild/Release/jsc MyChanges:/Volumes/Data/pizlo/secondary/OpenSource/WebKitBuild/Release/jsc&quot;
-  exit 1
-end
-
-def fail(reason)
-  if reason.respond_to? :backtrace
-    puts &quot;FAILED: #{reason}&quot;
-    puts &quot;Stack trace:&quot;
-    puts reason.backtrace.join(&quot;\n&quot;)
-  else
-    puts &quot;FAILED: #{reason}&quot;
-  end
-  smallUsage
-end
-
-def quickFail(r1,r2)
-  $stderr.puts &quot;#{$0}: #{r1}&quot;
-  puts
-  fail(r2)
-end
-
-def intArg(argName,arg,min,max)
-  result=arg.to_i
-  unless result.to_s == arg
-    quickFail(&quot;Expected an integer value for #{argName}, but got #{arg}.&quot;,
-              &quot;Invalid argument for command-line option&quot;)
-  end
-  if min and result&lt;min
-    quickFail(&quot;Argument for #{argName} cannot be smaller than #{min}.&quot;,
-              &quot;Invalid argument for command-line option&quot;)
-  end
-  if max and result&gt;max
-    quickFail(&quot;Argument for #{argName} cannot be greater than #{max}.&quot;,
-              &quot;Invalid argument for command-line option&quot;)
-  end
-  result
-end
-
-def computeMean(array)
-  sum=0.0
-  array.each {
-    | value |
-    sum += value
-  }
-  sum/array.length
-end
-
-def computeGeometricMean(array)
-  mult=1.0
-  array.each {
-    | value |
-    mult*=value
-  }
-  mult**(1.0/array.length)
-end
-
-def computeHarmonicMean(array)
-  1.0 / computeMean(array.collect{ | value | 1.0 / value })
-end
-
-def computeStdDev(array)
-  case array.length
-  when 0
-    0.0/0.0
-  when 1
-    0.0
-  else
-    begin
-      mean=computeMean(array)
-      sum=0.0
-      array.each {
-        | value |
-        sum += (value-mean)**2
-      }
-      Math.sqrt(sum/(array.length-1))
-    rescue
-      0.0/0.0
-    end
-  end
-end
-
-class Array
-  def shuffle!
-    size.downto(1) { |n| push delete_at(rand(n)) }
-    self
-  end
-end
-
-def inverseBetaRegularized(n)
-  IBR_LOOKUP[n-1]
-end
-
-def numToStr(num)
-  &quot;%.4f&quot;%(num.to_f)
-end
-  
-class NoChange
-  attr_reader :amountFaster
-  
-  def initialize(amountFaster)
-    @amountFaster = amountFaster
-  end
-  
-  def shortForm
-    &quot; &quot;
-  end
-  
-  def longForm
-    &quot;  might be #{numToStr(@amountFaster)}x faster&quot;
-  end
-  
-  def to_s
-    if @amountFaster &lt; 1.01
-      &quot;&quot;
-    else
-      longForm
-    end
-  end
-end
-
-class Faster
-  attr_reader :amountFaster
-  
-  def initialize(amountFaster)
-    @amountFaster = amountFaster
-  end
-  
-  def shortForm
-    &quot;^&quot;
-  end
-  
-  def longForm
-    &quot;^ definitely #{numToStr(@amountFaster)}x faster&quot;
-  end
-  
-  def to_s
-    longForm
-  end
-end
-
-class Slower
-  attr_reader :amountSlower
-  
-  def initialize(amountSlower)
-    @amountSlower = amountSlower
-  end
-  
-  def shortForm
-    &quot;!&quot;
-  end
-  
-  def longForm
-    &quot;! definitely #{numToStr(@amountSlower)}x slower&quot;
-  end
-  
-  def to_s
-    longForm
-  end
-end
-
-class MayBeSlower
-  attr_reader :amountSlower
-  
-  def initialize(amountSlower)
-    @amountSlower = amountSlower
-  end
-  
-  def shortForm
-    &quot;?&quot;
-  end
-  
-  def longForm
-    &quot;? might be #{numToStr(@amountSlower)}x slower&quot;
-  end
-  
-  def to_s
-    if @amountSlower &lt; 1.01
-      &quot;?&quot;
-    else
-      longForm
-    end
-  end
-end
-
-class Stats
-  def initialize
-    @array = []
-  end
-  
-  def add(value)
-    if value.is_a? Stats
-      add(value.array)
-    elsif value.respond_to? :each
-      value.each {
-        | v |
-        add(v)
-      }
-    else
-      @array &lt;&lt; value.to_f
-    end
-  end
-    
-  def array
-    @array
-  end
-  
-  def sum
-    result=0
-    @array.each {
-      | value |
-      result += value
-    }
-    result
-  end
-  
-  def min
-    @array.min
-  end
-  
-  def max
-    @array.max
-  end
-  
-  def size
-    @array.length
-  end
-  
-  def mean
-    computeMean(array)
-  end
-  
-  def arithmeticMean
-    mean
-  end
-  
-  def stdDev
-    computeStdDev(array)
-  end
-
-  def stdErr
-    stdDev/Math.sqrt(size)
-  end
-  
-  # Computes a 95% Student's t distribution confidence interval
-  def confInt
-    if size &lt; 2
-      0.0/0.0
-    else
-      raise if size &gt; 1000
-      Math.sqrt(size-1.0)*stdErr*Math.sqrt(-1.0+1.0/inverseBetaRegularized(size-1))
-    end
-  end
-  
-  def lower
-    mean-confInt
-  end
-  
-  def upper
-    mean+confInt
-  end
-  
-  def geometricMean
-    computeGeometricMean(array)
-  end
-  
-  def harmonicMean
-    computeHarmonicMean(array)
-  end
-  
-  def compareTo(other)
-    if upper &lt; other.lower
-      Faster.new(other.mean/mean)
-    elsif lower &gt; other.upper
-      Slower.new(mean/other.mean)
-    elsif mean &gt; other.mean
-      MayBeSlower.new(mean/other.mean)
-    else
-      NoChange.new(other.mean/mean)
-    end
-  end
-  
-  def to_s
-    &quot;size = #{size}, mean = #{mean}, stdDev = #{stdDev}, stdErr = #{stdErr}, confInt = #{confInt}&quot;
-  end
-end
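# Illustrative sketch (hypothetical numbers): Stats collects raw samples and
# compareTo() only reports a definite Faster/Slower when the 95% confidence
# intervals of the two Stats objects do not overlap:
#   a = Stats.new; a.add([100.0, 102.0, 98.0])
#   b = Stats.new; b.add([110.0, 113.0, 109.0])
#   a.compareTo(b)   # =&gt; Faster, about 1.11x, since a.upper falls below b.lower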
-
-def doublePuts(out1,out2,msg)
-  out1.puts &quot;#{out2.path}: #{msg}&quot; if $verbosity&gt;=3
-  out2.puts msg
-end
-
-class Benchfile &lt; File
-  @@counter = 0
-  
-  attr_reader :filename, :basename
-  
-  def initialize(name)
-    @basename, @filename = Benchfile.uniqueFilename(name)
-    super(@filename, &quot;w&quot;)
-  end
-  
-  def self.uniqueFilename(name)
-    if name.is_a? Array
-      basename = name[0] + @@counter.to_s + name[1]
-    else
-      basename = name + @@counter.to_s
-    end
-    filename = BENCH_DATA_PATH + &quot;/&quot; + basename
-    @@counter += 1
-    raise &quot;Benchfile #{filename} already exists&quot; if FileTest.exist?(filename)
-    [basename, filename]
-  end
-  
-  def self.create(name)
-    file = Benchfile.new(name)
-    yield file
-    file.close
-    file.basename
-  end
-end
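# Illustrative sketch (hypothetical name and contents): Benchfile.create writes
# a uniquely numbered file under BENCH_DATA_PATH and returns its basename:
#   payload = Benchfile.create('warmup') { |outp| outp.puts 'print(42);' }
#   # payload is e.g. 'warmup3', backed by BENCH_DATA_PATH/warmup3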
-
-$dataFiles={}
-def ensureFile(key, filename)
-  unless $dataFiles[key]
-    $dataFiles[key] = Benchfile.create(key) {
-      | outp |
-      doublePuts($stderr,outp,IO::read(filename))
-    }
-  end
-  $dataFiles[key]
-end
-  
-def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
-  case plan.vm.vmType
-  when :jsc
-    Benchfile.create(&quot;bencher&quot;) {
-      | file |
-      case $timeMode
-      when :preciseTime
-        doublePuts($stderr,file,&quot;function __bencher_curTimeMS() {&quot;)
-        doublePuts($stderr,file,&quot;   return preciseTime()*1000&quot;)
-        doublePuts($stderr,file,&quot;}&quot;)
-      when :date
-        doublePuts($stderr,file,&quot;function __bencher_curTimeMS() {&quot;)
-        doublePuts($stderr,file,&quot;   return Date.now()&quot;)
-        doublePuts($stderr,file,&quot;}&quot;)
-      else
-        raise
-      end
-
-      doublePuts($stderr,file,&quot;if (typeof noInline == 'undefined') noInline = function(){};&quot;)
-
-      if benchDataPath
-        doublePuts($stderr,file,&quot;load(#{benchDataPath.inspect});&quot;)
-        doublePuts($stderr,file,&quot;gc();&quot;)
-        doublePuts($stderr,file,&quot;for (var __bencher_index = 0; __bencher_index &lt; #{$warmup+$inner}; ++__bencher_index) {&quot;)
-        doublePuts($stderr,file,&quot;   before = __bencher_curTimeMS();&quot;)
-        $rerun.times {
-          doublePuts($stderr,file,&quot;   load(#{benchPath.inspect});&quot;)
-        }
-        doublePuts($stderr,file,&quot;   after = __bencher_curTimeMS();&quot;)
-        doublePuts($stderr,file,&quot;   if (__bencher_index &gt;= #{$warmup}) print(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot; + (__bencher_index - #{$warmup}) + \&quot;: Time: \&quot;+(after-before));&quot;);
-        doublePuts($stderr,file,&quot;   gc();&quot;) unless plan.vm.shouldMeasureGC
-        doublePuts($stderr,file,&quot;}&quot;)
-      else
-        doublePuts($stderr,file,&quot;function __bencher_run(__bencher_what) {&quot;)
-        doublePuts($stderr,file,&quot;   var __bencher_before = __bencher_curTimeMS();&quot;)
-        $rerun.times {
-          doublePuts($stderr,file,&quot;   run(__bencher_what);&quot;)
-        }
-        doublePuts($stderr,file,&quot;   var __bencher_after = __bencher_curTimeMS();&quot;)
-        doublePuts($stderr,file,&quot;   return __bencher_after - __bencher_before;&quot;)
-        doublePuts($stderr,file,&quot;}&quot;)
-        $warmup.times {
-          doublePuts($stderr,file,&quot;__bencher_run(#{benchPath.inspect})&quot;)
-          doublePuts($stderr,file,&quot;gc();&quot;) unless plan.vm.shouldMeasureGC
-        }
-        $inner.times {
-          | innerIndex |
-          doublePuts($stderr,file,&quot;print(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: #{innerIndex}: Time: \&quot;+__bencher_run(#{benchPath.inspect}));&quot;)
-          doublePuts($stderr,file,&quot;gc();&quot;) unless plan.vm.shouldMeasureGC
-        }
-      end
-    }
-  when :dumpRenderTree
-    mainCode = Benchfile.create(&quot;bencher&quot;) {
-      | file |
-      doublePuts($stderr,file,&quot;__bencher_count = 0;&quot;)
-      doublePuts($stderr,file,&quot;function __bencher_doNext(result) {&quot;)
-      doublePuts($stderr,file,&quot;    if (__bencher_count &gt;= #{$warmup})&quot;)
-      doublePuts($stderr,file,&quot;        debug(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot; + (__bencher_count - #{$warmup}) + \&quot;: Time: \&quot; + result);&quot;)
-      doublePuts($stderr,file,&quot;    __bencher_count++;&quot;)
-      doublePuts($stderr,file,&quot;    if (__bencher_count &lt; #{$inner+$warmup})&quot;)
-      doublePuts($stderr,file,&quot;        __bencher_runImpl(__bencher_doNext);&quot;)
-      doublePuts($stderr,file,&quot;    else&quot;)
-      doublePuts($stderr,file,&quot;        quit();&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-      doublePuts($stderr,file,&quot;__bencher_runImpl(__bencher_doNext);&quot;)
-    }
-    
-    cssCode = Benchfile.create(&quot;bencher-css&quot;) {
-      | file |
-      doublePuts($stderr,file,&quot;.pass {\n    font-weight: bold;\n    color: green;\n}\n.fail {\n    font-weight: bold;\n    color: red;\n}\n\#console {\n    white-space: pre-wrap;\n    font-family: monospace;\n}&quot;)
-    }
-    
-    preCode = Benchfile.create(&quot;bencher-pre&quot;) {
-      | file |
-      doublePuts($stderr,file,&quot;if (window.testRunner) {&quot;)
-      doublePuts($stderr,file,&quot;    testRunner.dumpAsText(window.enablePixelTesting);&quot;)
-      doublePuts($stderr,file,&quot;    testRunner.waitUntilDone();&quot;)
-      doublePuts($stderr,file,&quot;    noInline = testRunner.neverInlineFunction || function(){};&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-      doublePuts($stderr,file,&quot;&quot;)
-      doublePuts($stderr,file,&quot;function debug(msg)&quot;)
-      doublePuts($stderr,file,&quot;{&quot;)
-      doublePuts($stderr,file,&quot;    var span = document.createElement(\&quot;span\&quot;);&quot;)
-      doublePuts($stderr,file,&quot;    document.getElementById(\&quot;console\&quot;).appendChild(span); // insert it first so XHTML knows the namespace&quot;)
-      doublePuts($stderr,file,&quot;    span.innerHTML = msg + '&lt;br /&gt;';&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-      doublePuts($stderr,file,&quot;&quot;)
-      doublePuts($stderr,file,&quot;function quit() {&quot;)
-      doublePuts($stderr,file,&quot;    testRunner.notifyDone();&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-      doublePuts($stderr,file,&quot;&quot;)
-      doublePuts($stderr,file,&quot;__bencher_continuation=null;&quot;)
-      doublePuts($stderr,file,&quot;&quot;)
-      doublePuts($stderr,file,&quot;function reportResult(result) {&quot;)
-      doublePuts($stderr,file,&quot;    __bencher_continuation(result);&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-      doublePuts($stderr,file,&quot;&quot;)
-      doublePuts($stderr,file,&quot;function __bencher_runImpl(continuation) {&quot;)
-      doublePuts($stderr,file,&quot;    function doit() {&quot;)
-      doublePuts($stderr,file,&quot;        document.getElementById(\&quot;frameparent\&quot;).innerHTML = \&quot;\&quot;;&quot;)
-      doublePuts($stderr,file,&quot;        document.getElementById(\&quot;frameparent\&quot;).innerHTML = \&quot;&lt;iframe id='testframe'&gt;\&quot;;&quot;)
-      doublePuts($stderr,file,&quot;        var testFrame = document.getElementById(\&quot;testframe\&quot;);&quot;)
-      doublePuts($stderr,file,&quot;        testFrame.contentDocument.open();&quot;)
-      doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;!DOCTYPE html&gt;\\n&lt;head&gt;&lt;/head&gt;&lt;body&gt;&lt;div id=\\\&quot;console\\\&quot;&gt;&lt;/div&gt;\&quot;);&quot;)
-      if benchDataPath
-        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script src=\\\&quot;#{benchDataPath}\\\&quot;&gt;&lt;/script&gt;\&quot;);&quot;)
-      end
-      doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script type=\\\&quot;text/javascript\\\&quot;&gt;if (window.testRunner) noInline=window.testRunner.neverInlineFunction || function(){}; if (typeof noInline == 'undefined') noInline=function(){}; __bencher_before = Date.now();&lt;/script&gt;&lt;script src=\\\&quot;#{benchPath}\\\&quot;&gt;&lt;/script&gt;&lt;script type=\\\&quot;text/javascript\\\&quot;&gt;window.parent.reportResult(Date.now() - __bencher_before);&lt;/script&gt;&lt;/body&gt;&lt;/html&gt;\&quot;);&quot;)
-      doublePuts($stderr,file,&quot;        testFrame.contentDocument.close();&quot;)
-      doublePuts($stderr,file,&quot;    }&quot;)
-      doublePuts($stderr,file,&quot;    __bencher_continuation = continuation;&quot;)
-      doublePuts($stderr,file,&quot;    window.setTimeout(doit, 10);&quot;)
-      doublePuts($stderr,file,&quot;}&quot;)
-    }
-
-    Benchfile.create([&quot;bencher-htmldoc&quot;,&quot;.html&quot;]) {
-      | file |
-      doublePuts($stderr,file,&quot;&lt;!DOCTYPE HTML PUBLIC \&quot;-//IETF//DTD HTML//EN\&quot;&gt;\n&lt;html&gt;&lt;head&gt;&lt;link rel=\&quot;stylesheet\&quot; href=\&quot;#{cssCode}\&quot;&gt;&lt;script src=\&quot;#{preCode}\&quot;&gt;&lt;/script&gt;&lt;/head&gt;&lt;body&gt;&lt;div id=\&quot;console\&quot;&gt;&lt;/div&gt;&lt;div id=\&quot;frameparent\&quot;&gt;&lt;/div&gt;&lt;script src=\&quot;#{mainCode}\&quot;&gt;&lt;/script&gt;&lt;/body&gt;&lt;/html&gt;&quot;)
-    }
-  else
-    raise
-  end
-end
-
-def emitBenchRunCode(name, plan, benchDataPath, benchPath)
-  plan.vm.emitRunCode(emitBenchRunCodeFile(name, plan, benchDataPath, benchPath))
-end
-
-def planForDescription(plans, benchFullname, vmName, iteration)
-  raise unless benchFullname =~ /\//
-  suiteName = $~.pre_match
-  benchName = $~.post_match
-  result = plans.select{|v| v.suite.name == suiteName and v.benchmark.name == benchName and v.vm.name == vmName and v.iteration == iteration}
-  raise unless result.size == 1
-  result[0]
-end
-
-class ParsedResult
-  attr_reader :plan, :innerIndex, :time
-  
-  def initialize(plan, innerIndex, time)
-    @plan = plan
-    @innerIndex = innerIndex
-    @time = time
-    
-    raise unless @plan.is_a? BenchPlan
-    raise unless @innerIndex.is_a? Integer
-    raise unless @time.is_a? Numeric
-  end
-  
-  def benchmark
-    plan.benchmark
-  end
-  
-  def suite
-    plan.suite
-  end
-  
-  def vm
-    plan.vm
-  end
-  
-  def outerIndex
-    plan.iteration
-  end
-  
-  def self.parse(plans, string)
-    if string =~ /([a-zA-Z0-9\/-]+): ([a-zA-Z0-9_# ]+): ([0-9]+): ([0-9]+): Time: /
-      benchFullname = $1
-      vmName = $2
-      outerIndex = $3.to_i
-      innerIndex = $4.to_i
-      time = $~.post_match.to_f
-      ParsedResult.new(planForDescription(plans, benchFullname, vmName, outerIndex), innerIndex, time)
-    else
-      nil
-    end
-  end
-end
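# Illustrative sketch: ParsedResult.parse consumes the lines printed by the
# generated harness, which look like (hypothetical values)
#   SunSpider/3d-cube: Conf#1: 0: 3: Time: 12.5
# yielding the suite/benchmark name, the VM label, the outer (invocation)
# index, the inner index, and the time in milliseconds.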
-
-class VM
-  def initialize(origPath, name, nameKind, svnRevision)
-    @origPath = origPath.to_s
-    @path = origPath.to_s
-    @name = name
-    @nameKind = nameKind
-    
-    if $forceVMKind
-      @vmType = $forceVMKind
-    else
-      if @origPath =~ /DumpRenderTree$/
-        @vmType = :dumpRenderTree
-      else
-        @vmType = :jsc
-      end
-    end
-    
-    @svnRevision = svnRevision
-    
-    # Try to detect information about the VM.
-    if path =~ /\/WebKitBuild\/Release\/([a-zA-Z]+)$/
-      @checkoutPath = $~.pre_match
-      # FIXME: Use some variant of this: 
-      # &lt;bdash&gt;   def retrieve_revision
-      # &lt;bdash&gt;     `perl -I#{@path}/Tools/Scripts -MVCSUtils -e 'print svnRevisionForDirectory(&quot;#{@path}&quot;);'`.to_i
-      # &lt;bdash&gt;   end
-      unless @svnRevision
-        begin
-          Dir.chdir(@checkoutPath) {
-            $stderr.puts &quot;&gt;&gt; cd #{@checkoutPath} &amp;&amp; svn info&quot; if $verbosity&gt;=2
-            IO.popen(&quot;svn info&quot;, &quot;r&quot;) {
-              | inp |
-              inp.each_line {
-                | line |
-                if line =~ /Revision: ([0-9]+)/
-                  @svnRevision = $1
-                end
-              }
-            }
-          }
-          unless @svnRevision
-            $stderr.puts &quot;Warning: svn info for #{name} did not report a revision.&quot;
-          end
-        rescue =&gt; e
-          # Failed to detect svn revision.
-          $stderr.puts &quot;Warning: could not get svn revision information for #{name}: #{e}&quot;
-        end
-      end
-    else
-      $stderr.puts &quot;Warning: could not identify checkout location for #{name}&quot;
-    end
-    
-    if @path =~ /\/Release\/([a-zA-Z]+)$/
-      @libPath, @relativeBinPath = $~.pre_match+&quot;/Release&quot;, &quot;./#{$1}&quot;
-    elsif @path =~ /\/Contents\/Resources\/([a-zA-Z]+)$/
-      @libPath = $~.pre_match
-    elsif @path =~ /\/JavaScriptCore.framework\/Resources\/([a-zA-Z]+)$/
-      @libPath, @relativeBinPath = $~.pre_match, $&amp;[1..-1]
-    end
-  end
-  
-  def canCopyIntoBenchPath
-    if @libPath and @relativeBinPath
-      true
-    else
-      false
-    end
-  end
-  
-  def copyIntoBenchPath
-    raise unless canCopyIntoBenchPath
-    basename, filename = Benchfile.uniqueFilename(&quot;vm&quot;)
-    raise unless Dir.mkdir(filename)
-    cmd = &quot;cp -a #{@libPath.inspect}/* #{filename.inspect}&quot;
-    $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
-    raise unless system(cmd)
-    @path = &quot;#{basename}/#{@relativeBinPath}&quot;
-    @libPath = basename
-  end
-  
-  def to_s
-    @name
-  end
-  
-  def name
-    @name
-  end
-  
-  def shouldMeasureGC
-    $measureGC == true or ($measureGC == name)
-  end
-  
-  def origPath
-    @origPath
-  end
-  
-  def path
-    @path
-  end
-  
-  def nameKind
-    @nameKind
-  end
-  
-  def vmType
-    @vmType
-  end
-  
-  def checkoutPath
-    @checkoutPath
-  end
-  
-  def svnRevision
-    @svnRevision
-  end
-  
-  def printFunction
-    case @vmType
-    when :jsc
-      &quot;print&quot;
-    when :dumpRenderTree
-      &quot;debug&quot;
-    else
-      raise @vmType
-    end
-  end
-  
-  def emitRunCode(fileToRun)
-    myLibPath = @libPath
-    myLibPath = &quot;&quot; unless myLibPath
-    $script.puts &quot;export DYLD_LIBRARY_PATH=#{myLibPath.to_s.inspect}&quot;
-    $script.puts &quot;export DYLD_FRAMEWORK_PATH=#{myLibPath.to_s.inspect}&quot;
-    $script.puts &quot;#{path} #{fileToRun}&quot;
-  end
-end
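# Illustrative sketch (hypothetical paths): for a jsc-style VM, emitRunCode
# appends lines of roughly this shape to the generated runscript:
#   export DYLD_LIBRARY_PATH=&quot;/path/to/WebKitBuild/Release&quot;
#   export DYLD_FRAMEWORK_PATH=&quot;/path/to/WebKitBuild/Release&quot;
#   /path/to/WebKitBuild/Release/jsc bencher0
# with an empty library path when the checkout layout was not recognized.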
-
-class StatsAccumulator
-  def initialize
-    @stats = []
-    ($outer*$inner).times {
-      @stats &lt;&lt; Stats.new
-    }
-  end
-  
-  def statsForIteration(outerIteration, innerIteration)
-    @stats[outerIteration*$inner + innerIteration]
-  end
-  
-  def stats
-    result = Stats.new
-    @stats.each {
-      | stat |
-      result.add(yield stat)
-    }
-    result
-  end
-  
-  def geometricMeanStats
-    stats {
-      | stat |
-      stat.geometricMean
-    }
-  end
-  
-  def arithmeticMeanStats
-    stats {
-      | stat |
-      stat.arithmeticMean
-    }
-  end
-end
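# Illustrative sketch (hypothetical counts): a StatsAccumulator keeps one Stats
# bucket per (outer, inner) pair, so with $outer = 4 and $inner = 5 there are
# 20 buckets and statsForIteration(2, 3) addresses bucket 2*5 + 3 = 13.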
-
-module Benchmark
-  attr_accessor :benchmarkSuite
-  attr_reader :name
-  
-  def fullname
-    benchmarkSuite.name + &quot;/&quot; + name
-  end
-  
-  def to_s
-    fullname
-  end
-end
-
-class SunSpiderBenchmark
-  include Benchmark
-  
-  def initialize(name)
-    @name = name
-  end
-  
-  def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, nil, ensureFile(&quot;SunSpider-#{@name}&quot;, &quot;#{SUNSPIDER_PATH}/#{@name}.js&quot;))
-  end
-end
-
-class V8Benchmark
-  include Benchmark
-  
-  def initialize(name)
-    @name = name
-  end
-  
-  def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, nil, ensureFile(&quot;V8-#{@name}&quot;, &quot;#{V8_PATH}/v8-#{@name}.js&quot;))
-  end
-end
-
-class KrakenBenchmark
-  include Benchmark
-  
-  def initialize(name)
-    @name = name
-  end
-  
-  def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, ensureFile(&quot;KrakenData-#{@name}&quot;, &quot;#{KRAKEN_PATH}/#{@name}-data.js&quot;), ensureFile(&quot;Kraken-#{@name}&quot;, &quot;#{KRAKEN_PATH}/#{@name}.js&quot;))
-  end
-end
-
-class BenchmarkSuite
-  def initialize(name, path, preferredMean)
-    @name = name
-    @path = path
-    @preferredMean = preferredMean
-    @benchmarks = []
-  end
-  
-  def name
-    @name
-  end
-  
-  def to_s
-    @name
-  end
-  
-  def path
-    @path
-  end
-  
-  def add(benchmark)
-    if not $benchmarkPattern or &quot;#{@name}/#{benchmark.name}&quot; =~ $benchmarkPattern
-      benchmark.benchmarkSuite = self
-      @benchmarks &lt;&lt; benchmark
-    end
-  end
-  
-  def benchmarks
-    @benchmarks
-  end
-  
-  def benchmarkForName(name)
-    result = @benchmarks.select{|v| v.name == name}
-    raise unless result.length == 1
-    result[0]
-  end
-  
-  def empty?
-    @benchmarks.empty?
-  end
-  
-  def retain_if
-    @benchmarks.delete_if {
-      | benchmark |
-      not yield benchmark
-    }
-  end
-  
-  def preferredMean
-    @preferredMean
-  end
-  
-  def computeMean(stat)
-    stat.send @preferredMean
-  end
-end
-
-class BenchRunPlan
-  def initialize(benchmark, vm, iteration)
-    @benchmark = benchmark
-    @vm = vm
-    @iteration = iteration
-  end
-  
-  def benchmark
-    @benchmark
-  end
-  
-  def suite
-    @benchmark.benchmarkSuite
-  end
-  
-  def vm
-    @vm
-  end
-  
-  def iteration
-    @iteration
-  end
-  
-  def emitRunCode
-    @benchmark.emitRunCode(self)
-  end
-end
-
-class BenchmarkOnVM
-  def initialize(benchmark, suiteOnVM)
-    @benchmark = benchmark
-    @suiteOnVM = suiteOnVM
-    @stats = Stats.new
-  end
-  
-  def to_s
-    &quot;#{@benchmark} on #{@suiteOnVM.vm}&quot;
-  end
-  
-  def benchmark
-    @benchmark
-  end
-  
-  def vm
-    @suiteOnVM.vm
-  end
-  
-  def vmStats
-    @suiteOnVM.vmStats
-  end
-  
-  def suite
-    @benchmark.benchmarkSuite
-  end
-  
-  def suiteOnVM
-    @suiteOnVM
-  end
-  
-  def stats
-    @stats
-  end
-  
-  def parseResult(result)
-    raise &quot;VM mismatch; I've got #{vm} and they've got #{result.vm}&quot; unless result.vm == vm
-    raise unless result.benchmark == @benchmark
-    @stats.add(result.time)
-  end
-end
-
-class SuiteOnVM &lt; StatsAccumulator
-  def initialize(vm, vmStats, suite)
-    super()
-    @vm = vm
-    @vmStats = vmStats
-    @suite = suite
-    
-    raise unless @vm.is_a? VM
-    raise unless @vmStats.is_a? StatsAccumulator
-    raise unless @suite.is_a? BenchmarkSuite
-  end
-  
-  def to_s
-    &quot;#{@suite} on #{@vm}&quot;
-  end
-  
-  def suite
-    @suite
-  end
-  
-  def vm
-    @vm
-  end
-  
-  def vmStats
-    raise unless @vmStats
-    @vmStats
-  end
-end
-
-class BenchPlan
-  def initialize(benchmarkOnVM, iteration)
-    @benchmarkOnVM = benchmarkOnVM
-    @iteration = iteration
-  end
-  
-  def to_s
-    &quot;#{@benchmarkOnVM} \##{@iteration+1}&quot;
-  end
-  
-  def benchmarkOnVM
-    @benchmarkOnVM
-  end
-  
-  def benchmark
-    @benchmarkOnVM.benchmark
-  end
-  
-  def suite
-    @benchmarkOnVM.suite
-  end
-  
-  def vm
-    @benchmarkOnVM.vm
-  end
-  
-  def iteration
-    @iteration
-  end
-  
-  def parseResult(result)
-    raise unless result.plan == self
-    @benchmarkOnVM.parseResult(result)
-    @benchmarkOnVM.vmStats.statsForIteration(@iteration, result.innerIndex).add(result.time)
-    @benchmarkOnVM.suiteOnVM.statsForIteration(@iteration, result.innerIndex).add(result.time)
-  end
-end
-
-def lpad(str,chars)
-  if str.length&gt;chars
-    str
-  else
-    &quot;%#{chars}s&quot;%(str)
-  end
-end
-
-def rpad(str,chars)
-  while str.length&lt;chars
-    str+=&quot; &quot;
-  end
-  str
-end
-
-def center(str,chars)
-  while str.length&lt;chars
-    str+=&quot; &quot;
-    if str.length&lt;chars
-      str=&quot; &quot;+str
-    end
-  end
-  str
-end
-
-def statsToStr(stats)
-  if $inner*$outer == 1
-    string = numToStr(stats.mean)
-    raise unless string =~ /\./
-    left = $~.pre_match
-    right = $~.post_match
-    lpad(left,12)+&quot;.&quot;+rpad(right,9)
-  else
-    lpad(numToStr(stats.mean),11)+&quot;+-&quot;+rpad(numToStr(stats.confInt),9)
-  end
-end
-
-def plural(num)
-  if num == 1
-    &quot;&quot;
-  else
-    &quot;s&quot;
-  end
-end
-
-def wrap(str, columns)
-  array = str.split
-  result = &quot;&quot;
-  curLine = array.shift
-  array.each {
-    | curStr |
-    if (curLine + &quot; &quot; + curStr).size &gt; columns
-      result += curLine + &quot;\n&quot;
-      curLine = curStr
-    else
-      curLine += &quot; &quot; + curStr
-    end
-  }
-  result + curLine + &quot;\n&quot;
-end
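# Illustrative sketch: wrap() greedily packs words up to the column limit, e.g.
#   wrap('one two three four', 10)   # =&gt; &quot;one two\nthree four\n&quot;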
-  
-def runAndGetResults
-  results = nil
-  Dir.chdir(BENCH_DATA_PATH) {
-    IO.popen(&quot;sh ./runscript&quot;, &quot;r&quot;) {
-      | inp |
-      results = inp.read
-    }
-    raise &quot;Script did not complete correctly: #{$?}&quot; unless $?.success?
-  }
-  raise unless results
-  results
-end
-
-def parseAndDisplayResults(results)
-  vmStatses = []
-  $vms.each {
-    vmStatses &lt;&lt; StatsAccumulator.new
-  }
-  
-  suitesOnVMs = []
-  suitesOnVMsForSuite = {}
-  $suites.each {
-    | suite |
-    suitesOnVMsForSuite[suite] = []
-  }
-  suitesOnVMsForVM = {}
-  $vms.each {
-    | vm |
-    suitesOnVMsForVM[vm] = []
-  }
-  
-  benchmarksOnVMs = []
-  benchmarksOnVMsForBenchmark = {}
-  $benchmarks.each {
-    | benchmark |
-    benchmarksOnVMsForBenchmark[benchmark] = []
-  }
-  
-  $vms.each_with_index {
-    | vm, vmIndex |
-    vmStats = vmStatses[vmIndex]
-    $suites.each {
-      | suite |
-      suiteOnVM = SuiteOnVM.new(vm, vmStats, suite)
-      suitesOnVMs &lt;&lt; suiteOnVM
-      suitesOnVMsForSuite[suite] &lt;&lt; suiteOnVM
-      suitesOnVMsForVM[vm] &lt;&lt; suiteOnVM
-      suite.benchmarks.each {
-        | benchmark |
-        benchmarkOnVM = BenchmarkOnVM.new(benchmark, suiteOnVM)
-        benchmarksOnVMs &lt;&lt; benchmarkOnVM
-        benchmarksOnVMsForBenchmark[benchmark] &lt;&lt; benchmarkOnVM
-      }
-    }
-  }
-  
-  plans = []
-  benchmarksOnVMs.each {
-    | benchmarkOnVM |
-    $outer.times {
-      | iteration |
-      plans &lt;&lt; BenchPlan.new(benchmarkOnVM, iteration)
-    }
-  }
-
-  hostname = nil
-  hwmodel = nil
-  results.each_line {
-    | line |
-    line.chomp!
-    if line =~ /HOSTNAME:([^.]+)/
-      hostname = $1
-    elsif line =~ /HARDWARE:hw\.model: /
-      hwmodel = $~.post_match.chomp
-    else
-      result = ParsedResult.parse(plans, line.chomp)
-      if result
-        result.plan.parseResult(result)
-      end
-    end
-  }
-  
-  # Compute the geomean of the preferred means of results on a SuiteOnVM
-  overallResults = []
-  $vms.each {
-    | vm |
-    result = Stats.new
-    $outer.times {
-      | outerIndex |
-      $inner.times {
-        | innerIndex |
-        curResult = Stats.new
-        suitesOnVMsForVM[vm].each {
-          | suiteOnVM |
-          # For a given iteration, suite, and VM, compute the suite's preferred mean
-          # over the data collected for all benchmarks in that suite. We'll have one
-          # sample per benchmark. For example on V8 this will be the geomean of 1
-          # sample for crypto, 1 sample for deltablue, and so on, and 1 sample for
-          # splay.
-          curResult.add(suiteOnVM.suite.computeMean(suiteOnVM.statsForIteration(outerIndex, innerIndex)))
-        }
-        
-        # curResult now holds 1 sample for each of the means computed in the above
-        # loop. Compute the geomean over this, and store it.
-        result.add(curResult.geometricMean)
-      }
-    }
-
-    # $overallResults will have a Stats for each VM. That Stats object will hold
-    # $inner*$outer geomeans, allowing us to compute the arithmetic mean and
-    # confidence interval of the geomeans of preferred means. Convoluted, but
-    # useful and probably sound.
-    overallResults &lt;&lt; result
-  }
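  # Illustrative sketch (o and i stand for the loop indices above): stripped of
  # bookkeeping, each sample added to result is
  #   perSuiteMeans = suitesOnVMsForVM[vm].collect { |s| s.suite.computeMean(s.statsForIteration(o, i)) }
  #   result.add(computeGeometricMean(perSuiteMeans))
  # i.e. the geometric mean, across suites, of each suite's preferred mean.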
-  
-  if $verbosity &gt;= 2
-    benchmarksOnVMs.each {
-      | benchmarkOnVM |
-      $stderr.puts &quot;#{benchmarkOnVM}: #{benchmarkOnVM.stats}&quot;
-    }
-    
-    $vms.each_with_index {
-      | vm, vmIndex |
-      vmStats = vmStatses[vmIndex]
-      $stderr.puts &quot;#{vm} (arithmeticMean): #{vmStats.arithmeticMeanStats}&quot;
-      $stderr.puts &quot;#{vm} (geometricMean): #{vmStats.geometricMeanStats}&quot;
-    }
-  end
-
-  reportName =
-    (if ($vms.collect {
-           | vm |
-           vm.nameKind
-         }.index :auto)
-       &quot;&quot;
-     else
-       $vms.collect {
-         | vm |
-         vm.to_s
-       }.join(&quot;_&quot;) + &quot;_&quot;
-     end) +
-    ($suites.collect {
-       | suite |
-       suite.to_s
-     }.join(&quot;&quot;)) + &quot;_&quot; +
-    (if hostname
-       hostname + &quot;_&quot;
-     else
-       &quot;&quot;
-     end)+
-    (begin
-       time = Time.now
-       &quot;%04d%02d%02d_%02d%02d&quot; %
-         [ time.year, time.month, time.day,
-           time.hour, time.min ]
-     end) +
-    &quot;_benchReport.txt&quot;
-
-  unless $brief
-    puts &quot;Generating benchmark report at #{reportName}&quot;
-  end
-  
-  outp = $stdout
-  begin
-    outp = File.open(reportName,&quot;w&quot;)
-  rescue =&gt; e
-    $stderr.puts &quot;Error: could not save report to #{reportName}: #{e}&quot;
-    $stderr.puts
-  end
-  
-  def createVMsString
-    result = &quot;&quot;
-    result += &quot;   &quot; if $suites.size &gt; 1
-    result += rpad(&quot;&quot;, $benchpad)
-    result += &quot; &quot;
-    $vms.size.times {
-      | index |
-      if index != 0
-        result += &quot; &quot;+NoChange.new(0).shortForm
-      end
-      result += lpad(center($vms[index].name, 9+9+2), 11+9+2)
-    }
-    result += &quot;    &quot;
-    if $vms.size &gt;= 3
-      result += center(&quot;#{$vms[-1].name} v. #{$vms[0].name}&quot;,26)
-    elsif $vms.size &gt;= 2
-      result += &quot; &quot;*26
-    end
-    result
-  end
-  
-  columns = [createVMsString.size, 78].max
-  
-  outp.print &quot;Benchmark report for &quot;
-  if $suites.size == 1
-    outp.print $suites[0].to_s
-  elsif $suites.size == 2
-    outp.print &quot;#{$suites[0]} and #{$suites[1]}&quot;
-  else
-    outp.print &quot;#{$suites[0..-2].join(', ')}, and #{$suites[-1]}&quot;
-  end
-  if hostname
-    outp.print &quot; on #{hostname}&quot;
-  end
-  if hwmodel
-    outp.print &quot; (#{hwmodel})&quot;
-  end
-  outp.puts &quot;.&quot;
-  outp.puts
-  
-  # This looks stupid; revisit later.
-  if false
-    $suites.each {
-      | suite |
-      outp.puts &quot;#{suite} at #{suite.path}&quot;
-    }
-    
-    outp.puts
-  end
-  
-  outp.puts &quot;VMs tested:&quot;
-  $vms.each {
-    | vm |
-    outp.print &quot;\&quot;#{vm.name}\&quot; at #{vm.origPath}&quot;
-    if vm.svnRevision
-      outp.print &quot; (r#{vm.svnRevision})&quot;
-    end
-    outp.puts
-  }
-  
-  outp.puts
-  
-  outp.puts wrap(&quot;Collected #{$outer*$inner} sample#{plural($outer*$inner)} per benchmark/VM, &quot;+
-                 &quot;with #{$outer} VM invocation#{plural($outer)} per benchmark.&quot;+
-                 (if $rerun &gt; 1 then (&quot; Ran #{$rerun} benchmark iterations, and measured the &quot;+
-                                      &quot;total time of those iterations, for each sample.&quot;)
-                  else &quot;&quot; end)+
-                 (if $measureGC == true then (&quot; No manual garbage collection invocations were &quot;+
-                                              &quot;emitted.&quot;)
-                  elsif $measureGC then (&quot; Emitted a call to gc() between sample measurements for &quot;+
-                                         &quot;all VMs except #{$measureGC}.&quot;)
-                  else (&quot; Emitted a call to gc() between sample measurements.&quot;) end)+
-                 (if $warmup == 0 then (&quot; Did not include any warm-up iterations; measurements &quot;+
-                                        &quot;began with the very first iteration.&quot;)
-                  else (&quot; Used #{$warmup*$rerun} benchmark iteration#{plural($warmup*$rerun)} per VM &quot;+
-                        &quot;invocation for warm-up.&quot;) end)+
-                 (case $timeMode
-                  when :preciseTime then (&quot; Used the jsc-specific preciseTime() function to get &quot;+
-                                          &quot;microsecond-level timing.&quot;)
-                  when :date then (&quot; Used the portable Date.now() method to get millisecond-&quot;+
-                                   &quot;level timing.&quot;)
-                  else raise end)+
-                 &quot; Reporting benchmark execution times with 95% confidence &quot;+
-                 &quot;intervals in milliseconds.&quot;,
-                 columns)
-  
-  outp.puts
-  
-  def printVMs(outp)
-    outp.puts createVMsString
-  end
-  
-  def summaryStats(outp, accumulators, name, &amp;proc)
-    outp.print &quot;   &quot; if $suites.size &gt; 1
-    outp.print rpad(name, $benchpad)
-    outp.print &quot; &quot;
-    accumulators.size.times {
-      | index |
-      if index != 0
-        outp.print &quot; &quot;+accumulators[index].stats(&amp;proc).compareTo(accumulators[index-1].stats(&amp;proc)).shortForm
-      end
-      outp.print statsToStr(accumulators[index].stats(&amp;proc))
-    }
-    if accumulators.size&gt;=2
-      outp.print(&quot;    &quot;+accumulators[-1].stats(&amp;proc).compareTo(accumulators[0].stats(&amp;proc)).longForm)
-    end
-    outp.puts
-  end
-  
-  def meanName(currentMean, preferredMean)
-    result = &quot;&lt;#{currentMean}&gt;&quot;
-    if &quot;#{currentMean}Mean&quot; == preferredMean.to_s
-      result += &quot; *&quot;
-    end
-    result
-  end
-  
-  def allSummaryStats(outp, accumulators, preferredMean)
-    summaryStats(outp, accumulators, meanName(&quot;arithmetic&quot;, preferredMean)) {
-      | stat |
-      stat.arithmeticMean
-    }
-    
-    summaryStats(outp, accumulators, meanName(&quot;geometric&quot;, preferredMean)) {
-      | stat |
-      stat.geometricMean
-    }
-    
-    summaryStats(outp, accumulators, meanName(&quot;harmonic&quot;, preferredMean)) {
-      | stat |
-      stat.harmonicMean
-    }
-  end
-  
-  $suites.each {
-    | suite |
-    printVMs(outp)
-    if $suites.size &gt; 1
-      outp.puts &quot;#{suite.name}:&quot;
-    else
-      outp.puts
-    end
-    suite.benchmarks.each {
-      | benchmark |
-      outp.print &quot;   &quot; if $suites.size &gt; 1
-      outp.print rpad(benchmark.name, $benchpad)
-      outp.print &quot; &quot;
-      myConfigs = benchmarksOnVMsForBenchmark[benchmark]
-      myConfigs.size.times {
-        | index |
-        if index != 0
-          outp.print &quot; &quot;+myConfigs[index].stats.compareTo(myConfigs[index-1].stats).shortForm
-        end
-        outp.print statsToStr(myConfigs[index].stats)
-      }
-      if $vms.size&gt;=2
-        outp.print(&quot;    &quot;+myConfigs[-1].stats.compareTo(myConfigs[0].stats).to_s)
-      end
-      outp.puts
-    }
-    outp.puts
-    allSummaryStats(outp, suitesOnVMsForSuite[suite], suite.preferredMean)
-    outp.puts if $suites.size &gt; 1
-  }
-  
-  if $suites.size &gt; 1
-    printVMs(outp)
-    outp.puts &quot;All benchmarks:&quot;
-    allSummaryStats(outp, vmStatses, nil)
-    
-    outp.puts
-    printVMs(outp)
-    outp.puts &quot;Geomean of preferred means:&quot;
-    outp.print &quot;   &quot;
-    outp.print rpad(&quot;&lt;scaled-result&gt;&quot;, $benchpad)
-    outp.print &quot; &quot;
-    $vms.size.times {
-      | index |
-      if index != 0
-        outp.print &quot; &quot;+overallResults[index].compareTo(overallResults[index-1]).shortForm
-      end
-      outp.print statsToStr(overallResults[index])
-    }
-    if overallResults.size&gt;=2
-      outp.print(&quot;    &quot;+overallResults[-1].compareTo(overallResults[0]).longForm)
-    end
-    outp.puts
-  end
-  outp.puts
-  
-  if outp != $stdout
-    outp.close
-  end
-  
-  if outp != $stdout and not $brief
-    puts
-    File.open(reportName) {
-      | inp |
-      puts inp.read
-    }
-  end
-  
-  if $brief
-    puts(overallResults.collect{|stats| stats.mean}.join(&quot;\t&quot;))
-    puts(overallResults.collect{|stats| stats.confInt}.join(&quot;\t&quot;))
-  end
-  
-  
-end
-
-begin
-  $sawBenchOptions = false
-  
-  def resetBenchOptionsIfNecessary
-    unless $sawBenchOptions
-      $includeSunSpider = false
-      $includeV8 = false
-      $includeKraken = false
-      $sawBenchOptions = true
-    end
-  end
-  
-  GetoptLong.new(['--rerun', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--inner', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--outer', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--warmup', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--timing-mode', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--sunspider-only', GetoptLong::NO_ARGUMENT],
-                 ['--v8-only', GetoptLong::NO_ARGUMENT],
-                 ['--kraken-only', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-sunspider', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-v8', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-kraken', GetoptLong::NO_ARGUMENT],
-                 ['--sunspider', GetoptLong::NO_ARGUMENT],
-                 ['--v8', GetoptLong::NO_ARGUMENT],
-                 ['--kraken', GetoptLong::NO_ARGUMENT],
-                 ['--benchmarks', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--measure-gc', GetoptLong::OPTIONAL_ARGUMENT],
-                 ['--force-vm-kind', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--force-vm-copy', GetoptLong::NO_ARGUMENT],
-                 ['--dont-copy-vms', GetoptLong::NO_ARGUMENT],
-                 ['--verbose', '-v', GetoptLong::NO_ARGUMENT],
-                 ['--brief', GetoptLong::NO_ARGUMENT],
-                 ['--silent', GetoptLong::NO_ARGUMENT],
-                 ['--remote', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--local', GetoptLong::NO_ARGUMENT],
-                 ['--ssh-options', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--slave', GetoptLong::NO_ARGUMENT],
-                 ['--prepare-only', GetoptLong::NO_ARGUMENT],
-                 ['--analyze', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--vms', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--help', '-h', GetoptLong::NO_ARGUMENT]).each {
-    | opt, arg |
-    case opt
-    when '--rerun'
-      $rerun = intArg(opt,arg,1,nil)
-    when '--inner'
-      $inner = intArg(opt,arg,1,nil)
-    when '--outer'
-      $outer = intArg(opt,arg,1,nil)
-    when '--warmup'
-      $warmup = intArg(opt,arg,0,nil)
-    when '--timing-mode'
-      if arg.upcase == &quot;PRECISETIME&quot;
-        $timeMode = :preciseTime
-      elsif arg.upcase == &quot;DATE&quot;
-        $timeMode = :date
-      elsif arg.upcase == &quot;AUTO&quot;
-        $timeMode = :auto
-      else
-        quickFail(&quot;Expected either 'preciseTime', 'date', or 'auto' for --timing-mode, but got '#{arg}'.&quot;,
-                  &quot;Invalid argument for command-line option&quot;)
-      end
-    when '--force-vm-kind'
-      if arg.upcase == &quot;JSC&quot;
-        $forceVMKind = :jsc
-      elsif arg.upcase == &quot;DUMPRENDERTREE&quot;
-        $forceVMKind = :dumpRenderTree
-      elsif arg.upcase == &quot;AUTO&quot;
-        $forceVMKind = nil
-      else
-        quickFail(&quot;Expected either 'jsc' or 'DumpRenderTree' for --force-vm-kind, but got '#{arg}'.&quot;,
-                  &quot;Invalid argument for command-line option&quot;)
-      end
-    when '--force-vm-copy'
-      $needToCopyVMs = true
-    when '--dont-copy-vms'
-      $dontCopyVMs = true
-    when '--sunspider-only'
-      $includeV8 = false
-      $includeKraken = false
-    when '--v8-only'
-      $includeSunSpider = false
-      $includeKraken = false
-    when '--kraken-only'
-      $includeSunSpider = false
-      $includeV8 = false
-    when '--exclude-sunspider'
-      $includeSunSpider = false
-    when '--exclude-v8'
-      $includeV8 = false
-    when '--exclude-kraken'
-      $includeKraken = false
-    when '--sunspider'
-      resetBenchOptionsIfNecessary
-      $includeSunSpider = true
-    when '--v8'
-      resetBenchOptionsIfNecessary
-      $includeV8 = true
-    when '--kraken'
-      resetBenchOptionsIfNecessary
-      $includeKraken = true
-    when '--benchmarks'
-      $benchmarkPattern = Regexp.new(arg)
-    when '--measure-gc'
-      if arg == ''
-        $measureGC = true
-      else
-        $measureGC = arg
-      end
-    when '--verbose'
-      $verbosity += 1
-    when '--brief'
-      $brief = true
-    when '--silent'
-      $silent = true
-    when '--remote'
-      $remoteHosts += arg.split(',')
-      $needToCopyVMs = true
-    when '--ssh-options'
-      $sshOptions &lt;&lt; arg
-    when '--local'
-      $alsoLocal = true
-    when '--prepare-only'
-      $run = false
-    when '--analyze'
-      $prepare = false
-      $run = false
-      $analyze &lt;&lt; arg
-    when '--help'
-      usage
-    else
-      raise &quot;bad option: #{opt}&quot;
-    end
-  }
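  # Illustrative sketch (hypothetical paths and labels): a typical invocation of
  # bencher compares two builds on one suite, e.g.
  #   bencher --v8-only --outer 4 --inner 1 \
  #       TipOfTree:/old/WebKitBuild/Release/jsc Patched:/new/WebKitBuild/Release/jsc
  # where the 'Name:' prefixes become the VM labels shown in the report.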
-  
-  # If the --dont-copy-vms option was passed, it overrides the --force-vm-copy option.
-  if $dontCopyVMs
-    $needToCopyVMs = false
-  end
-  
-  SUNSPIDER = BenchmarkSuite.new(&quot;SunSpider&quot;, SUNSPIDER_PATH, :arithmeticMean)
-  [&quot;3d-cube&quot;, &quot;3d-morph&quot;, &quot;3d-raytrace&quot;, &quot;access-binary-trees&quot;,
-   &quot;access-fannkuch&quot;, &quot;access-nbody&quot;, &quot;access-nsieve&quot;,
-   &quot;bitops-3bit-bits-in-byte&quot;, &quot;bitops-bits-in-byte&quot;, &quot;bitops-bitwise-and&quot;,
-   &quot;bitops-nsieve-bits&quot;, &quot;controlflow-recursive&quot;, &quot;crypto-aes&quot;,
-   &quot;crypto-md5&quot;, &quot;crypto-sha1&quot;, &quot;date-format-tofte&quot;, &quot;date-format-xparb&quot;,
-   &quot;math-cordic&quot;, &quot;math-partial-sums&quot;, &quot;math-spectral-norm&quot;, &quot;regexp-dna&quot;,
-   &quot;string-base64&quot;, &quot;string-fasta&quot;, &quot;string-tagcloud&quot;,
-   &quot;string-unpack-code&quot;, &quot;string-validate-input&quot;].each {
-    | name |
-    SUNSPIDER.add SunSpiderBenchmark.new(name)
-  }
-
-  V8 = BenchmarkSuite.new(&quot;V8&quot;, V8_PATH, :geometricMean)
-  [&quot;crypto&quot;, &quot;deltablue&quot;, &quot;earley-boyer&quot;, &quot;raytrace&quot;,
-   &quot;regexp&quot;, &quot;richards&quot;, &quot;splay&quot;].each {
-    | name |
-    V8.add V8Benchmark.new(name)
-  }
-
-  KRAKEN = BenchmarkSuite.new(&quot;Kraken&quot;, KRAKEN_PATH, :arithmeticMean)
-  [&quot;ai-astar&quot;, &quot;audio-beat-detection&quot;, &quot;audio-dft&quot;, &quot;audio-fft&quot;,
-   &quot;audio-oscillator&quot;, &quot;imaging-darkroom&quot;, &quot;imaging-desaturate&quot;,
-   &quot;imaging-gaussian-blur&quot;, &quot;json-parse-financial&quot;,
-   &quot;json-stringify-tinderbox&quot;, &quot;stanford-crypto-aes&quot;,
-   &quot;stanford-crypto-ccm&quot;, &quot;stanford-crypto-pbkdf2&quot;,
-   &quot;stanford-crypto-sha256-iterative&quot;].each {
-    | name |
-    KRAKEN.add KrakenBenchmark.new(name)
-  }
-
-  ARGV.each {
-    | vm |
-    if vm =~ /([a-zA-Z0-9_ ]+):/
-      name = $1
-      nameKind = :given
-      vm = $~.post_match
-    else
-      name = &quot;Conf\##{$vms.length+1}&quot;
-      nameKind = :auto
-    end
-    $stderr.puts &quot;#{name}: #{vm}&quot; if $verbosity &gt;= 1
-    $vms &lt;&lt; VM.new(Pathname.new(vm).realpath, name, nameKind, nil)
-  }
-  
-  if $vms.empty?
-    quickFail(&quot;Please specify at least one configuration on the command line.&quot;,
-              &quot;Insufficient arguments&quot;)
-  end
-  
-  $vms.each {
-    | vm |
-    if vm.vmType != :jsc and $timeMode != :date
-      $timeMode = :date
-      $stderr.puts &quot;Warning: using Date.now() instead of preciseTime() because #{vm} doesn't support the latter.&quot;
-    end
-  }
-  
-  if FileTest.exist? BENCH_DATA_PATH
-    cmd = &quot;rm -rf #{BENCH_DATA_PATH}&quot;
-    $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity &gt;= 2
-    raise unless system cmd
-  end
-  
-  Dir.mkdir BENCH_DATA_PATH
-  
-  if $needToCopyVMs
-    canCopyIntoBenchPath = true
-    $vms.each {
-      | vm |
-      canCopyIntoBenchPath = false unless vm.canCopyIntoBenchPath
-    }
-    
-    if canCopyIntoBenchPath
-      $vms.each {
-        | vm |
-        $stderr.puts &quot;Copying #{vm} into #{BENCH_DATA_PATH}...&quot;
-        vm.copyIntoBenchPath
-      }
-      $stderr.puts &quot;All VMs are in place.&quot;
-    else
-      $stderr.puts &quot;Warning: don't know how to copy some VMs into #{BENCH_DATA_PATH}, so I won't do it.&quot;
-    end
-  end
-  
-  if $measureGC and $measureGC != true
-    found = false
-    $vms.each {
-      | vm |
-      if vm.name == $measureGC
-        found = true
-      end
-    }
-    unless found
-      $stderr.puts &quot;Warning: --measure-gc option ignored because no VM is named #{$measureGC}&quot;
-    end
-  end
-  
-  if $outer*$inner == 1
-    $stderr.puts &quot;Warning: will only collect one sample per benchmark/VM.  Confidence interval calculation will fail.&quot;
-  end
-  
-  $stderr.puts &quot;Using timeMode = #{$timeMode}.&quot; if $verbosity &gt;= 1
-  
-  $suites = []
-  
-  if $includeSunSpider and not SUNSPIDER.empty?
-    $suites &lt;&lt; SUNSPIDER
-  end
-  
-  if $includeV8 and not V8.empty?
-    $suites &lt;&lt; V8
-  end
-  
-  if $includeKraken and not KRAKEN.empty?
-    $suites &lt;&lt; KRAKEN
-  end
-  
-  $benchmarks = []
-  $suites.each {
-    | suite |
-    $benchmarks += suite.benchmarks
-  }
-  
-  $runPlans = []
-  $vms.each {
-    | vm |
-    $benchmarks.each {
-      | benchmark |
-      $outer.times {
-        | iteration |
-        $runPlans &lt;&lt; BenchRunPlan.new(benchmark, vm, iteration)
-      }
-    }
-  }
-  
-  $runPlans.shuffle!
-  
-  $suitepad = $suites.collect {
-    | suite |
-    suite.to_s.size
-  }.max + 1
-  
-  $benchpad = ($benchmarks +
-               [&quot;&lt;arithmetic&gt; *&quot;, &quot;&lt;geometric&gt; *&quot;, &quot;&lt;harmonic&gt; *&quot;]).collect {
-    | benchmark |
-    if benchmark.respond_to? :name
-      benchmark.name.size
-    else
-      benchmark.size
-    end
-  }.max + 1
-
-  $vmpad = $vms.collect {
-    | vm |
-    vm.to_s.size
-  }.max + 1
-  
-  if $prepare
-    File.open(&quot;#{BENCH_DATA_PATH}/runscript&quot;, &quot;w&quot;) {
-      | file |
-      file.puts &quot;echo \&quot;HOSTNAME:\\c\&quot;&quot;
-      file.puts &quot;hostname&quot;
-      file.puts &quot;echo&quot;
-      file.puts &quot;echo \&quot;HARDWARE:\\c\&quot;&quot;
-      file.puts &quot;/usr/sbin/sysctl hw.model&quot;
-      file.puts &quot;echo&quot;
-      file.puts &quot;set -e&quot;
-      $script = file
-      $runPlans.each_with_index {
-        | plan, idx |
-        if $verbosity == 0 and not $silent
-          text1 = lpad(idx.to_s,$runPlans.size.to_s.size)+&quot;/&quot;+$runPlans.size.to_s
-          text2 = plan.benchmark.to_s+&quot;/&quot;+plan.vm.to_s
-          file.puts(&quot;echo &quot;+(&quot;\r#{text1} #{rpad(text2,$suitepad+1+$benchpad+1+$vmpad)}&quot;.inspect)[0..-2]+&quot;\\c\&quot; 1&gt;&amp;2&quot;)
-          file.puts(&quot;echo &quot;+(&quot;\r#{text1} #{text2}&quot;.inspect)[0..-2]+&quot;\\c\&quot; 1&gt;&amp;2&quot;)
-        end
-        plan.emitRunCode
-      }
-      if $verbosity == 0 and not $silent
-        file.puts(&quot;echo &quot;+(&quot;\r#{$runPlans.size}/#{$runPlans.size} #{' '*($suitepad+1+$benchpad+1+$vmpad)}&quot;.inspect)[0..-2]+&quot;\\c\&quot; 1&gt;&amp;2&quot;)
-        file.puts(&quot;echo &quot;+(&quot;\r#{$runPlans.size}/#{$runPlans.size}&quot;.inspect)+&quot; 1&gt;&amp;2&quot;)
-      end
-    }
-  end
-  
-  if $run
-    unless $remoteHosts.empty?
-      $stderr.puts &quot;Packaging benchmarking directory for remote hosts...&quot; if $verbosity==0
-      Dir.chdir(TEMP_PATH) {
-        cmd = &quot;tar -czf payload.tar.gz benchdata&quot;
-        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
-        raise unless system(cmd)
-      }
-      
-      def grokHost(host)
-        if host =~ /:([0-9]+)$/
-          &quot;-p &quot; + $1 + &quot; &quot; + $~.pre_match.inspect
-        else
-          host.inspect
-        end
-      end
-      
-      def sshRead(host, command)
-        cmd = &quot;ssh #{$sshOptions.collect{|x| x.inspect}.join(' ')} #{grokHost(host)} #{command.inspect}&quot;
-        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
-        result = &quot;&quot;
-        IO.popen(cmd, &quot;r&quot;) {
-          | inp |
-          inp.each_line {
-            | line |
-            $stderr.puts &quot;#{host}: #{line}&quot; if $verbosity&gt;=2
-            result += line
-          }
-        }
-        raise &quot;#{$?}&quot; unless $?.success?
-        result
-      end
-      
-      def sshWrite(host, command, data)
-        cmd = &quot;ssh #{$sshOptions.collect{|x| x.inspect}.join(' ')} #{grokHost(host)} #{command.inspect}&quot;
-        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
-        IO.popen(cmd, &quot;w&quot;) {
-          | outp |
-          outp.write(data)
-        }
-        raise &quot;#{$?}&quot; unless $?.success?
-      end
-      
-      $remoteHosts.each {
-        | host |
-        $stderr.puts &quot;Sending benchmark payload to #{host}...&quot; if $verbosity==0
-        
-        remoteTempPath = JSON::parse(sshRead(host, &quot;cat ~/.bencher&quot;))[&quot;tempPath&quot;]
-        raise unless remoteTempPath
-        
-        sshWrite(host, &quot;cd #{remoteTempPath.inspect} &amp;&amp; rm -rf benchdata &amp;&amp; tar -xz&quot;, IO::read(&quot;#{TEMP_PATH}/payload.tar.gz&quot;))
-        
-        $stderr.puts &quot;Running on #{host}...&quot; if $verbosity==0
-        
-        parseAndDisplayResults(sshRead(host, &quot;cd #{(remoteTempPath+'/benchdata').inspect} &amp;&amp; sh runscript&quot;))
-      }
-    end
-    
-    if not $remoteHosts.empty? and $alsoLocal
-      $stderr.puts &quot;Running locally...&quot;
-    end
-    
-    if $remoteHosts.empty? or $alsoLocal
-      parseAndDisplayResults(runAndGetResults)
-    end
-  end
-  
-  $analyze.each_with_index {
-    | filename, index |
-    if index &gt;= 1
-      puts
-    end
-    parseAndDisplayResults(IO::read(filename))
-  }
-  
-  if $prepare and not $run and $analyze.empty?
-    puts wrap(&quot;Benchmarking script and data are in #{BENCH_DATA_PATH}. You can run &quot;+
-              &quot;the benchmarks and get the results by doing:&quot;, 78)
-    puts
-    puts &quot;cd #{BENCH_DATA_PATH}&quot;
-    puts &quot;sh runscript &gt; results.txt&quot;
-    puts
-    puts wrap(&quot;Then you can analyze the results by running bencher with the same arguments &quot;+
-              &quot;as now, but replacing --prepare-only with --analyze results.txt.&quot;, 78)
-  end
-rescue =&gt; e
-  fail(e)
-end
-  
-  
</del></span></pre></div>
<a id="trunkToolsScriptsrunjscbenchmarksfromrev174093trunkToolsScriptsbencher"></a>
<div class="copfile"><h4>Copied: trunk/Tools/Scripts/run-jsc-benchmarks (from rev 174093, trunk/Tools/Scripts/bencher) (0 => 174123)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Tools/Scripts/run-jsc-benchmarks                                (rev 0)
+++ trunk/Tools/Scripts/run-jsc-benchmarks        2014-09-30 21:11:57 UTC (rev 174123)
</span><span class="lines">@@ -0,0 +1,3371 @@
</span><ins>+#!/usr/bin/env ruby
+
+# Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require 'rubygems'
+
+require 'getoptlong'
+require 'pathname'
+require 'shellwords'
+require 'socket'
+
+begin
+  require 'json'
+rescue LoadError =&gt; e
+  $stderr.puts &quot;It does not appear that you have the 'json' package installed.  Try running 'sudo gem install json'.&quot;
+  exit 1
+end
+
+SCRIPT_PATH = Pathname.new(__FILE__).realpath
+
+raise unless SCRIPT_PATH.dirname.basename.to_s == &quot;Scripts&quot;
+raise unless SCRIPT_PATH.dirname.dirname.basename.to_s == &quot;Tools&quot;
+
+OPENSOURCE_PATH = SCRIPT_PATH.dirname.dirname.dirname
+
+SUNSPIDER_PATH = OPENSOURCE_PATH + &quot;PerformanceTests&quot; + &quot;SunSpider&quot; + &quot;tests&quot; + &quot;sunspider-1.0&quot;
+LONGSPIDER_PATH = OPENSOURCE_PATH + &quot;PerformanceTests&quot; + &quot;LongSpider&quot;
+V8_PATH = OPENSOURCE_PATH + &quot;PerformanceTests&quot; + &quot;SunSpider&quot; + &quot;tests&quot; + &quot;v8-v6&quot;
+JSREGRESS_PATH = OPENSOURCE_PATH + &quot;LayoutTests&quot; + &quot;js&quot; + &quot;regress&quot; + &quot;script-tests&quot;
+OCTANE_WRAPPER_PATH = OPENSOURCE_PATH + &quot;PerformanceTests&quot; + &quot;Octane&quot; + &quot;wrappers&quot;
+
+TEMP_PATH = OPENSOURCE_PATH + &quot;BenchmarkTemp&quot;
+
+if TEMP_PATH.exist?
+  raise unless TEMP_PATH.directory?
+else
+  Dir.mkdir(TEMP_PATH)
+end
+
+BENCH_DATA_PATH = TEMP_PATH + &quot;benchdata&quot;
+
+IBR_LOOKUP=[0.00615583, 0.0975, 0.22852, 0.341628, 0.430741, 0.500526, 0.555933, 
+            0.600706, 0.637513, 0.668244, 0.694254, 0.716537, 0.735827, 0.752684, 
+            0.767535, 0.780716, 0.792492, 0.803074, 0.812634, 0.821313, 0.829227, 
+            0.836472, 0.843129, 0.849267, 0.854943, 0.860209, 0.865107, 0.869674, 
+            0.873942, 0.877941, 0.881693, 0.885223, 0.888548, 0.891686, 0.894652, 
+            0.897461, 0.900124, 0.902652, 0.905056, 0.907343, 0.909524, 0.911604, 
+            0.91359, 0.91549, 0.917308, 0.919049, 0.920718, 0.92232, 0.923859, 0.925338, 
+            0.926761, 0.92813, 0.929449, 0.930721, 0.931948, 0.933132, 0.934275, 0.93538, 
+            0.936449, 0.937483, 0.938483, 0.939452, 0.940392, 0.941302, 0.942185, 
+            0.943042, 0.943874, 0.944682, 0.945467, 0.94623, 0.946972, 0.947694, 
+            0.948396, 0.94908, 0.949746, 0.950395, 0.951027, 0.951643, 0.952244, 
+            0.952831, 0.953403, 0.953961, 0.954506, 0.955039, 0.955559, 0.956067, 
+            0.956563, 0.957049, 0.957524, 0.957988, 0.958443, 0.958887, 0.959323, 
+            0.959749, 0.960166, 0.960575, 0.960975, 0.961368, 0.961752, 0.962129, 
+            0.962499, 0.962861, 0.963217, 0.963566, 0.963908, 0.964244, 0.964574, 
+            0.964897, 0.965215, 0.965527, 0.965834, 0.966135, 0.966431, 0.966722, 
+            0.967007, 0.967288, 0.967564, 0.967836, 0.968103, 0.968366, 0.968624, 
+            0.968878, 0.969128, 0.969374, 0.969617, 0.969855, 0.97009, 0.970321, 
+            0.970548, 0.970772, 0.970993, 0.97121, 0.971425, 0.971636, 0.971843, 
+            0.972048, 0.97225, 0.972449, 0.972645, 0.972839, 0.973029, 0.973217, 
+            0.973403, 0.973586, 0.973766, 0.973944, 0.97412, 0.974293, 0.974464, 
+            0.974632, 0.974799, 0.974963, 0.975125, 0.975285, 0.975443, 0.975599, 
+            0.975753, 0.975905, 0.976055, 0.976204, 0.97635, 0.976495, 0.976638, 
+            0.976779, 0.976918, 0.977056, 0.977193, 0.977327, 0.97746, 0.977592, 
+            0.977722, 0.97785, 0.977977, 0.978103, 0.978227, 0.978349, 0.978471, 
+            0.978591, 0.978709, 0.978827, 0.978943, 0.979058, 0.979171, 0.979283, 
+            0.979395, 0.979504, 0.979613, 0.979721, 0.979827, 0.979933, 0.980037, 
+            0.98014, 0.980242, 0.980343, 0.980443, 0.980543, 0.980641, 0.980738, 
+            0.980834, 0.980929, 0.981023, 0.981116, 0.981209, 0.9813, 0.981391, 0.981481, 
+            0.981569, 0.981657, 0.981745, 0.981831, 0.981916, 0.982001, 0.982085, 
+            0.982168, 0.982251, 0.982332, 0.982413, 0.982493, 0.982573, 0.982651, 
+            0.982729, 0.982807, 0.982883, 0.982959, 0.983034, 0.983109, 0.983183, 
+            0.983256, 0.983329, 0.983401, 0.983472, 0.983543, 0.983613, 0.983683, 
+            0.983752, 0.98382, 0.983888, 0.983956, 0.984022, 0.984089, 0.984154, 
+            0.984219, 0.984284, 0.984348, 0.984411, 0.984474, 0.984537, 0.984599, 
+            0.98466, 0.984721, 0.984782, 0.984842, 0.984902, 0.984961, 0.985019, 
+            0.985077, 0.985135, 0.985193, 0.985249, 0.985306, 0.985362, 0.985417, 
+            0.985472, 0.985527, 0.985582, 0.985635, 0.985689, 0.985742, 0.985795, 
+            0.985847, 0.985899, 0.985951, 0.986002, 0.986053, 0.986103, 0.986153, 
+            0.986203, 0.986252, 0.986301, 0.98635, 0.986398, 0.986446, 0.986494, 
+            0.986541, 0.986588, 0.986635, 0.986681, 0.986727, 0.986773, 0.986818, 
+            0.986863, 0.986908, 0.986953, 0.986997, 0.987041, 0.987084, 0.987128, 
+            0.987171, 0.987213, 0.987256, 0.987298, 0.98734, 0.987381, 0.987423, 
+            0.987464, 0.987504, 0.987545, 0.987585, 0.987625, 0.987665, 0.987704, 
+            0.987744, 0.987783, 0.987821, 0.98786, 0.987898, 0.987936, 0.987974, 
+            0.988011, 0.988049, 0.988086, 0.988123, 0.988159, 0.988196, 0.988232, 
+            0.988268, 0.988303, 0.988339, 0.988374, 0.988409, 0.988444, 0.988479, 
+            0.988513, 0.988547, 0.988582, 0.988615, 0.988649, 0.988682, 0.988716, 
+            0.988749, 0.988782, 0.988814, 0.988847, 0.988879, 0.988911, 0.988943, 
+            0.988975, 0.989006, 0.989038, 0.989069, 0.9891, 0.989131, 0.989161, 0.989192, 
+            0.989222, 0.989252, 0.989282, 0.989312, 0.989342, 0.989371, 0.989401, 
+            0.98943, 0.989459, 0.989488, 0.989516, 0.989545, 0.989573, 0.989602, 0.98963, 
+            0.989658, 0.989685, 0.989713, 0.98974, 0.989768, 0.989795, 0.989822, 
+            0.989849, 0.989876, 0.989902, 0.989929, 0.989955, 0.989981, 0.990007, 
+            0.990033, 0.990059, 0.990085, 0.99011, 0.990136, 0.990161, 0.990186, 
+            0.990211, 0.990236, 0.990261, 0.990285, 0.99031, 0.990334, 0.990358, 
+            0.990383, 0.990407, 0.99043, 0.990454, 0.990478, 0.990501, 0.990525, 
+            0.990548, 0.990571, 0.990594, 0.990617, 0.99064, 0.990663, 0.990686, 
+            0.990708, 0.990731, 0.990753, 0.990775, 0.990797, 0.990819, 0.990841, 
+            0.990863, 0.990885, 0.990906, 0.990928, 0.990949, 0.99097, 0.990991, 
+            0.991013, 0.991034, 0.991054, 0.991075, 0.991096, 0.991116, 0.991137, 
+            0.991157, 0.991178, 0.991198, 0.991218, 0.991238, 0.991258, 0.991278, 
+            0.991298, 0.991317, 0.991337, 0.991356, 0.991376, 0.991395, 0.991414, 
+            0.991433, 0.991452, 0.991471, 0.99149, 0.991509, 0.991528, 0.991547, 
+            0.991565, 0.991584, 0.991602, 0.99162, 0.991639, 0.991657, 0.991675, 
+            0.991693, 0.991711, 0.991729, 0.991746, 0.991764, 0.991782, 0.991799, 
+            0.991817, 0.991834, 0.991851, 0.991869, 0.991886, 0.991903, 0.99192, 
+            0.991937, 0.991954, 0.991971, 0.991987, 0.992004, 0.992021, 0.992037, 
+            0.992054, 0.99207, 0.992086, 0.992103, 0.992119, 0.992135, 0.992151, 
+            0.992167, 0.992183, 0.992199, 0.992215, 0.99223, 0.992246, 0.992262, 
+            0.992277, 0.992293, 0.992308, 0.992324, 0.992339, 0.992354, 0.992369, 
+            0.992384, 0.9924, 0.992415, 0.992429, 0.992444, 0.992459, 0.992474, 0.992489, 
+            0.992503, 0.992518, 0.992533, 0.992547, 0.992561, 0.992576, 0.99259, 
+            0.992604, 0.992619, 0.992633, 0.992647, 0.992661, 0.992675, 0.992689, 
+            0.992703, 0.992717, 0.99273, 0.992744, 0.992758, 0.992771, 0.992785, 
+            0.992798, 0.992812, 0.992825, 0.992839, 0.992852, 0.992865, 0.992879, 
+            0.992892, 0.992905, 0.992918, 0.992931, 0.992944, 0.992957, 0.99297, 
+            0.992983, 0.992995, 0.993008, 0.993021, 0.993034, 0.993046, 0.993059, 
+            0.993071, 0.993084, 0.993096, 0.993109, 0.993121, 0.993133, 0.993145, 
+            0.993158, 0.99317, 0.993182, 0.993194, 0.993206, 0.993218, 0.99323, 0.993242, 
+            0.993254, 0.993266, 0.993277, 0.993289, 0.993301, 0.993312, 0.993324, 
+            0.993336, 0.993347, 0.993359, 0.99337, 0.993382, 0.993393, 0.993404, 
+            0.993416, 0.993427, 0.993438, 0.993449, 0.99346, 0.993472, 0.993483, 
+            0.993494, 0.993505, 0.993516, 0.993527, 0.993538, 0.993548, 0.993559, 
+            0.99357, 0.993581, 0.993591, 0.993602, 0.993613, 0.993623, 0.993634, 
+            0.993644, 0.993655, 0.993665, 0.993676, 0.993686, 0.993697, 0.993707, 
+            0.993717, 0.993727, 0.993738, 0.993748, 0.993758, 0.993768, 0.993778, 
+            0.993788, 0.993798, 0.993808, 0.993818, 0.993828, 0.993838, 0.993848, 
+            0.993858, 0.993868, 0.993877, 0.993887, 0.993897, 0.993907, 0.993916, 
+            0.993926, 0.993935, 0.993945, 0.993954, 0.993964, 0.993973, 0.993983, 
+            0.993992, 0.994002, 0.994011, 0.99402, 0.99403, 0.994039, 0.994048, 0.994057, 
+            0.994067, 0.994076, 0.994085, 0.994094, 0.994103, 0.994112, 0.994121, 
+            0.99413, 0.994139, 0.994148, 0.994157, 0.994166, 0.994175, 0.994183, 
+            0.994192, 0.994201, 0.99421, 0.994218, 0.994227, 0.994236, 0.994244, 
+            0.994253, 0.994262, 0.99427, 0.994279, 0.994287, 0.994296, 0.994304, 
+            0.994313, 0.994321, 0.994329, 0.994338, 0.994346, 0.994354, 0.994363, 
+            0.994371, 0.994379, 0.994387, 0.994395, 0.994404, 0.994412, 0.99442, 
+            0.994428, 0.994436, 0.994444, 0.994452, 0.99446, 0.994468, 0.994476, 
+            0.994484, 0.994492, 0.9945, 0.994508, 0.994516, 0.994523, 0.994531, 0.994539, 
+            0.994547, 0.994554, 0.994562, 0.99457, 0.994577, 0.994585, 0.994593, 0.9946, 
+            0.994608, 0.994615, 0.994623, 0.994631, 0.994638, 0.994645, 0.994653, 
+            0.99466, 0.994668, 0.994675, 0.994683, 0.99469, 0.994697, 0.994705, 0.994712, 
+            0.994719, 0.994726, 0.994734, 0.994741, 0.994748, 0.994755, 0.994762, 
+            0.994769, 0.994777, 0.994784, 0.994791, 0.994798, 0.994805, 0.994812, 
+            0.994819, 0.994826, 0.994833, 0.99484, 0.994847, 0.994854, 0.99486, 0.994867, 
+            0.994874, 0.994881, 0.994888, 0.994895, 0.994901, 0.994908, 0.994915, 
+            0.994922, 0.994928, 0.994935, 0.994942, 0.994948, 0.994955, 0.994962, 
+            0.994968, 0.994975, 0.994981, 0.994988, 0.994994, 0.995001, 0.995007, 
+            0.995014, 0.99502, 0.995027, 0.995033, 0.99504, 0.995046, 0.995052, 0.995059, 
+            0.995065, 0.995071, 0.995078, 0.995084, 0.99509, 0.995097, 0.995103, 
+            0.995109, 0.995115, 0.995121, 0.995128, 0.995134, 0.99514, 0.995146, 
+            0.995152, 0.995158, 0.995164, 0.995171, 0.995177, 0.995183, 0.995189, 
+            0.995195, 0.995201, 0.995207, 0.995213, 0.995219, 0.995225, 0.995231, 
+            0.995236, 0.995242, 0.995248, 0.995254, 0.99526, 0.995266, 0.995272, 
+            0.995277, 0.995283, 0.995289, 0.995295, 0.995301, 0.995306, 0.995312, 
+            0.995318, 0.995323, 0.995329, 0.995335, 0.99534, 0.995346, 0.995352, 
+            0.995357, 0.995363, 0.995369, 0.995374, 0.99538, 0.995385, 0.995391, 
+            0.995396, 0.995402, 0.995407, 0.995413, 0.995418, 0.995424, 0.995429, 
+            0.995435, 0.99544, 0.995445, 0.995451, 0.995456, 0.995462, 0.995467, 
+            0.995472, 0.995478, 0.995483, 0.995488, 0.995493, 0.995499, 0.995504, 
+            0.995509, 0.995515, 0.99552, 0.995525, 0.99553, 0.995535, 0.995541, 0.995546, 
+            0.995551, 0.995556, 0.995561, 0.995566, 0.995571, 0.995577, 0.995582, 
+            0.995587, 0.995592, 0.995597, 0.995602, 0.995607, 0.995612, 0.995617, 
+            0.995622, 0.995627, 0.995632, 0.995637, 0.995642, 0.995647, 0.995652, 
+            0.995657, 0.995661, 0.995666, 0.995671, 0.995676, 0.995681, 0.995686, 
+            0.995691, 0.995695, 0.9957, 0.995705, 0.99571, 0.995715, 0.995719, 0.995724, 
+            0.995729, 0.995734, 0.995738, 0.995743, 0.995748, 0.995753, 0.995757, 
+            0.995762, 0.995767, 0.995771, 0.995776, 0.995781, 0.995785, 0.99579, 
+            0.995794, 0.995799, 0.995804, 0.995808, 0.995813, 0.995817, 0.995822, 
+            0.995826, 0.995831, 0.995835, 0.99584, 0.995844, 0.995849, 0.995853, 
+            0.995858, 0.995862, 0.995867, 0.995871, 0.995876, 0.99588, 0.995885, 
+            0.995889, 0.995893, 0.995898, 0.995902, 0.995906, 0.995911, 0.995915, 
+            0.99592, 0.995924, 0.995928, 0.995932, 0.995937, 0.995941, 0.995945, 0.99595, 
+            0.995954, 0.995958, 0.995962, 0.995967, 0.995971, 0.995975, 0.995979, 
+            0.995984, 0.995988, 0.995992, 0.995996, 0.996, 0.996004, 0.996009, 0.996013, 
+            0.996017, 0.996021, 0.996025, 0.996029, 0.996033, 0.996037, 0.996041, 
+            0.996046, 0.99605, 0.996054, 0.996058, 0.996062, 0.996066, 0.99607, 0.996074, 
+            0.996078, 0.996082, 0.996086, 0.99609, 0.996094, 0.996098, 0.996102, 
+            0.996106, 0.99611, 0.996114, 0.996117, 0.996121, 0.996125, 0.996129, 
+            0.996133, 0.996137, 0.996141, 0.996145, 0.996149, 0.996152, 0.996156, 
+            0.99616, 0.996164]
+
+# Run-time configuration parameters (can be set with command-line options)
+
+$rerun=1
+$inner=1
+$warmup=1
+$outer=4
+$quantum=1000
+$includeSunSpider=true
+$includeLongSpider=true
+$includeV8=true
+$includeKraken=true
+$includeJSBench=true
+$includeJSRegress=true
+$includeAsmBench=true
+$includeDSPJS=true
+$includeBrowsermarkJS=false
+$includeBrowsermarkDOM=false
+$includeOctane=true
+$includeCompressionBench = true
+$measureGC=false
+$benchmarkPattern=nil
+$verbosity=0
+$timeMode=:preciseTime
+$forceVMKind=nil
+$brief=false
+$silent=false
+$remoteHosts=[]
+$alsoLocal=false
+$sshOptions=[]
+$vms = []
+$environment = {}
+$needToCopyVMs = false
+$dontCopyVMs = false
+$allDRT = true
+$outputName = nil
+$sunSpiderWarmup = true
+$configPath = Pathname.new(ENV[&quot;HOME&quot;]) + &quot;.run-jsc-benchmarks&quot;
+
+$prepare = true
+$run = true
+$analyze = []
+
+# Helpful functions and classes
+
+def smallUsage
+  puts &quot;Use the --help option to get basic usage information.&quot;
+  exit 1
+end
+
+def usage
+  puts &quot;run-jsc-benchmarks [options] &lt;vm1&gt; [&lt;vm2&gt; ...]&quot;
+  puts
+  puts &quot;Runs one or more JavaScript runtimes against SunSpider, V8, and/or Kraken&quot;
+  puts &quot;benchmarks, and reports detailed statistics.  What makes run-jsc-benchmarks&quot;
+  puts &quot;special is that each benchmark/VM configuration is run in a single VM invocation,&quot;
+  puts &quot;and the invocations are run in random order.  This minimizes systematics due to&quot;
+  puts &quot;one benchmark polluting the running time of another.  The fine-grained&quot;
+  puts &quot;interleaving of VM invocations further minimizes systematics due to changes in&quot;
+  puts &quot;the performance or behavior of your machine.&quot;
+  puts 
+  puts &quot;Run-jsc-benchmarks is highly configurable.  You can compare as many VMs as you&quot;
+  puts &quot;like.  You can change the number of warm-up iterations, the number of iterations&quot;
+  puts &quot;executed per VM invocation, and the number of VM invocations per benchmark.&quot;
+  puts
+  puts &quot;The &lt;vm&gt; should be either a path to a JavaScript runtime executable (such as&quot;
+  puts &quot;jsc), or a string of the form &lt;name&gt;:&lt;path&gt;, where the &lt;path&gt; is the path to&quot;
+  puts &quot;the executable and &lt;name&gt; is the name that you would like to give the&quot;
+  puts &quot;configuration for the purpose of reporting.  If no name is given, a generic name&quot;
+  puts &quot;of the form Conf#&lt;n&gt; will be ascribed to the configuration automatically.&quot;
+  puts
+  puts &quot;It's also possible to specify per-VM environment variables. For example, you&quot;
+  puts &quot;might specify a VM like Foo:JSC_useJIT=false:/path/to/jsc, in which case the&quot;
+  puts &quot;harness will set the JSC_useJIT environment variable to false just before running&quot;
+  puts &quot;the given VM. Note that the harness will not unset the environment variable, so&quot;
+  puts &quot;you must ensure that your other VMs will use the opposite setting&quot;
+  puts &quot;(JSC_useJIT=true in this case).&quot;
+  puts
+  puts &quot;Options:&quot;
+  puts &quot;--rerun &lt;n&gt;          Set the number of iterations of the benchmark that&quot;
+  puts &quot;                     contribute to the measured run time.  Default is #{$rerun}.&quot;
+  puts &quot;--inner &lt;n&gt;          Set the number of inner (per-runtime-invocation)&quot;
+  puts &quot;                     iterations.  Default is #{$inner}.&quot;
+  puts &quot;--outer &lt;n&gt;          Set the number of runtime invocations for each benchmark.&quot;
+  puts &quot;                     Default is #{$outer}.&quot;
+  puts &quot;--warmup &lt;n&gt;         Set the number of warm-up runs per invocation.  Default&quot;
+  puts &quot;                     is #{$warmup}. This has a different effect on different kinds&quot;
+  puts &quot;                     of benchmarks. Some benchmarks have no notion of warm-up.&quot;
+  puts &quot;--no-ss-warmup       Disable SunSpider-based warm-up runs.&quot;
+  puts &quot;--quantum &lt;n&gt;        Set the duration in milliseconds for which an iteration of&quot;
+  puts &quot;                     a throughput benchmark should be run.  Default is #{$quantum}.&quot;
+  puts &quot;--timing-mode        Set the way that time is measured.  Possible values&quot;
+  puts &quot;                     are 'preciseTime' and 'date'.  Default is 'preciseTime'.&quot;
+  puts &quot;--force-vm-kind      Turn off auto-detection of VM kind, and assume that it is&quot;
+  puts &quot;                     the one specified.  Valid arguments are 'jsc', &quot;
+  puts &quot;                     'DumpRenderTree', or 'WebKitTestRunner'.&quot;
+  puts &quot;--force-vm-copy      Force VM builds to be copied to the working directory.&quot;
+  puts &quot;                     This may reduce pathologies resulting from path names.&quot;
+  puts &quot;--dont-copy-vms      Don't copy VMs even when doing a remote benchmarking run;&quot;
+  puts &quot;                     instead assume that they are already there.&quot;
+  puts &quot;--sunspider          Only run SunSpider.&quot;
+  puts &quot;--v8-spider          Only run V8.&quot;
+  puts &quot;--kraken             Only run Kraken.&quot;
+  puts &quot;--js-bench           Only run JSBench.&quot;
+  puts &quot;--js-regress         Only run JSRegress.&quot;
+  puts &quot;--dsp                Only run DSP.&quot;
+  puts &quot;--asm-bench          Only run AsmBench.&quot;
+  puts &quot;--browsermark-js     Only run browsermark-js.&quot;
+  puts &quot;--browsermark-dom    Only run browsermark-dom.&quot;
+  puts &quot;--octane             Only run Octane.&quot;
+  puts &quot;--compression-bench  Only run the compression benchmarks.&quot;
+  puts &quot;                     The default is to run all benchmarks. The above options can&quot;
+  puts &quot;                     be combined to run any subset (so --sunspider --dsp will run&quot;
+  puts &quot;                     both SunSpider and DSP).&quot;
+  puts &quot;--benchmarks         Only run benchmarks matching the given regular expression.&quot;
+  puts &quot;--measure-gc         Turn off manual calls to gc(), so that GC time is measured.&quot;
+  puts &quot;                     Works best with large values of --inner.  You can also say&quot;
+  puts &quot;                     --measure-gc &lt;conf&gt;, which turns this on for one&quot;
+  puts &quot;                     configuration only.&quot;
+  puts &quot;--verbose or -v      Print more stuff.&quot;
+  puts &quot;--brief              Print only the final result for each VM.&quot;
+  puts &quot;--silent             Don't print progress. This might slightly reduce some&quot;
+  puts &quot;                     performance perturbation.&quot;
+  puts &quot;--remote &lt;sshhosts&gt;  Perform performance measurements remotely, on the given&quot;
+  puts &quot;                     SSH host(s). Easiest way to use this is to specify the SSH&quot;
+  puts &quot;                     user@host string. However, you can also supply a comma-&quot;
+  puts &quot;                     separated list of SSH hosts. Alternatively, you can use this&quot;
+  puts &quot;                     option multiple times to specify multiple hosts. This&quot;
+  puts &quot;                     automatically copies the WebKit release builds of the VMs&quot;
+  puts &quot;                     you specified to all of the hosts.&quot;
+  puts &quot;--ssh-options        Pass additional options to SSH.&quot;
+  puts &quot;--local              Also do a local benchmark run even when doing --remote.&quot;
+  puts &quot;--vms                Use a JSON file to specify which VMs to run, as opposed to&quot;
+  puts &quot;                     specifying them on the command line.&quot;
+  puts &quot;--prepare-only       Only prepare the runscript (a shell script that&quot;
+  puts &quot;                     invokes the VMs to run benchmarks) but don't run it.&quot;
+  puts &quot;--analyze            Only read the output of the runscript but don't do anything&quot;
+  puts &quot;                     else. This requires passing the same arguments that you&quot;
+  puts &quot;                     passed when running --prepare-only.&quot;
+  puts &quot;--output-name        Base of the filenames to put results into. Will write files&quot;
+  puts &quot;                     called &lt;base&gt;_report.txt and &lt;base&gt;.json.  By default this&quot;
+  puts &quot;                     name is automatically synthesized from the machine name,&quot;
+  puts &quot;                     date, set of benchmarks run, and set of configurations.&quot;
+  puts &quot;--environment        JSON file that specifies the environment variables that should&quot;
+  puts &quot;                     be used for particular VMs and benchmarks.&quot;
+  puts &quot;--config &lt;path&gt;      Specify the path of the configuration file. Defaults to&quot;
+  puts &quot;                     ~/.run-jsc-benchmarks&quot;
+  puts &quot;--help or -h         Display this message.&quot;
+  puts
+  puts &quot;Example:&quot;
+  puts &quot;run-jsc-benchmarks TipOfTree:/Volumes/Data/pizlo/OpenSource/WebKitBuild/Release/jsc MyChanges:/Volumes/Data/pizlo/secondary/OpenSource/WebKitBuild/Release/jsc&quot;
+  exit 1
+end
+
+def fail(reason)
+  if reason.respond_to? :backtrace
+    puts &quot;FAILED: #{reason.inspect}&quot;
+    puts &quot;Stack trace:&quot;
+    puts reason.backtrace.join(&quot;\n&quot;)
+  else
+    puts &quot;FAILED: #{reason.inspect}&quot;
+  end
+  smallUsage
+end
+
+def quickFail(r1,r2)
+  $stderr.puts &quot;#{$0}: #{r1}&quot;
+  puts
+  fail(r2)
+end
+
+def intArg(argName,arg,min,max)
+  result=arg.to_i
+  unless result.to_s == arg
+    quickFail(&quot;Expected an integer value for #{argName}, but got #{arg}.&quot;,
+              &quot;Invalid argument for command-line option&quot;)
+  end
+  if min and result&lt;min
+    quickFail(&quot;Argument for #{argName} cannot be smaller than #{min}.&quot;,
+              &quot;Invalid argument for command-line option&quot;)
+  end
+  if max and result&gt;max
+    quickFail(&quot;Argument for #{argName} cannot be greater than #{max}.&quot;,
+              &quot;Invalid argument for command-line option&quot;)
+  end
+  result
+end
+
+def computeMean(array)
+  sum=0.0
+  array.each {
+    | value |
+    sum += value
+  }
+  sum/array.length
+end
+
+def computeGeometricMean(array)
+  sum = 0.0
+  array.each {
+    | value |
+    sum += Math.log(value)
+  }
+  Math.exp(sum * (1.0/array.length))
+end
+
+def computeHarmonicMean(array)
+  1.0 / computeMean(array.collect{ | value | 1.0 / value })
+end
+
+def computeStdDev(array)
+  case array.length
+  when 0
+    0.0/0.0
+  when 1
+    0.0
+  else
+    begin
+      mean=computeMean(array)
+      sum=0.0
+      array.each {
+        | value |
+        sum += (value-mean)**2
+      }
+      Math.sqrt(sum/(array.length-1))
+    rescue
+      0.0/0.0
+    end
+  end
+end
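+
+# Illustrative sketch only (not used by the harness): how the helpers above
+# behave for a hypothetical sample of three timings, in milliseconds.
+#
+#   sample = [10.0, 20.0, 40.0]
+#   computeMean(sample)           # =&gt; 23.33  (arithmetic mean)
+#   computeGeometricMean(sample)  # =&gt; 20.0   (exp of the mean of the logs)
+#   computeHarmonicMean(sample)   # =&gt; 17.14  (reciprocal of the mean of reciprocals)
+#   computeStdDev(sample)         # =&gt; 15.28  (sample standard deviation, n-1 denominator)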
+
+class Array
+  def shuffle!
+    size.downto(1) { |n| push delete_at(rand(n)) }
+    self
+  end
+end
+
+def inverseBetaRegularized(n)
+  IBR_LOOKUP[n-1]
+end
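+
+# Note (explanatory; an assumption about provenance, not stated in the source):
+# IBR_LOOKUP above appears to be a precomputed table of inverse regularized
+# incomplete beta function values, which lets Stats#confInt below derive
+# Student's t confidence intervals without a statistics library.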
+
+def numToStr(num, decimalShift)
+  (&quot;%.&quot; + (4 + decimalShift).to_s + &quot;f&quot;) % (num.to_f)
+end
+
+class CantSay
+  def initialize
+  end
+  
+  def shortForm
+    &quot; &quot;
+  end
+  
+  def longForm
+    &quot;&quot;
+  end
+  
+  def to_s
+    &quot;&quot;
+  end
+end
+  
+class NoChange
+  attr_reader :amountFaster
+  
+  def initialize(amountFaster)
+    @amountFaster = amountFaster
+  end
+  
+  def shortForm
+    &quot; &quot;
+  end
+  
+  def longForm
+    &quot;  might be #{numToStr(@amountFaster, 0)}x faster&quot;
+  end
+  
+  def to_s
+    if @amountFaster &lt; 1.01
+      &quot;&quot;
+    else
+      longForm
+    end
+  end
+end
+
+class Faster
+  attr_reader :amountFaster
+  
+  def initialize(amountFaster)
+    @amountFaster = amountFaster
+  end
+  
+  def shortForm
+    &quot;^&quot;
+  end
+  
+  def longForm
+    &quot;^ definitely #{numToStr(@amountFaster, 0)}x faster&quot;
+  end
+  
+  def to_s
+    longForm
+  end
+end
+
+class Slower
+  attr_reader :amountSlower
+  
+  def initialize(amountSlower)
+    @amountSlower = amountSlower
+  end
+  
+  def shortForm
+    &quot;!&quot;
+  end
+  
+  def longForm
+    &quot;! definitely #{numToStr(@amountSlower, 0)}x slower&quot;
+  end
+  
+  def to_s
+    longForm
+  end
+end
+
+class MayBeSlower
+  attr_reader :amountSlower
+  
+  def initialize(amountSlower)
+    @amountSlower = amountSlower
+  end
+  
+  def shortForm
+    &quot;?&quot;
+  end
+  
+  def longForm
+    &quot;? might be #{numToStr(@amountSlower, 0)}x slower&quot;
+  end
+  
+  def to_s
+    if @amountSlower &lt; 1.01
+      &quot;?&quot;
+    else
+      longForm
+    end
+  end
+end
+
+def jsonSanitize(value)
+  if value.is_a? Fixnum
+    value
+  elsif value.is_a? Float
+    if value.nan? or value.infinite?
+      value.to_s
+    else
+      value
+    end
+  elsif value.is_a? Array
+    value.map{|v| jsonSanitize(v)}
+  elsif value.nil?
+    value
+  else
+    raise &quot;Unrecognized value #{value.inspect}&quot;
+  end
+end
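+
+# For example (hypothetical values): jsonSanitize(1.5) passes the number
+# through unchanged, jsonSanitize(0.0/0.0) becomes the string &quot;NaN&quot; so it
+# survives JSON serialization, and arrays are sanitized element by element.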
+
+class Stats
+  def initialize
+    @array = []
+  end
+  
+  def add(value)
+    if not value or not @array
+      @array = nil
+    elsif value.is_a? Float
+      if value.nan? or value.infinite?
+        @array = nil
+      else
+        @array &lt;&lt; value
+      end
+    elsif value.is_a? Stats
+      add(value.array)
+    elsif value.respond_to? :each
+      value.each {
+        | v |
+        add(v)
+      }
+    else
+      @array &lt;&lt; value.to_f
+    end
+  end
+  
+  def status
+    if @array
+      :ok
+    else
+      :error
+    end
+  end
+  
+  def error?
+    # TODO: We're probably still not handling this case correctly. 
+    not @array or @array.empty?
+  end
+  
+  def ok?
+    not not @array
+  end
+    
+  def array
+    @array
+  end
+  
+  def sum
+    result=0
+    @array.each {
+      | value |
+      result += value
+    }
+    result
+  end
+  
+  def min
+    @array.min
+  end
+  
+  def max
+    @array.max
+  end
+  
+  def size
+    @array.length
+  end
+  
+  def mean
+    computeMean(array)
+  end
+  
+  def arithmeticMean
+    mean
+  end
+  
+  def stdDev
+    computeStdDev(array)
+  end
+
+  def stdErr
+    stdDev/Math.sqrt(size)
+  end
+  
+  # Computes a 95% Student's t distribution confidence interval
+  def confInt
+    if size &lt; 2
+      0.0/0.0
+    else
+      raise if size &gt; 1000
+      Math.sqrt(size-1.0)*stdErr*Math.sqrt(-1.0+1.0/inverseBetaRegularized(size-1))
+    end
+  end
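+  
+  # Explanatory sketch of the formula above (comments only, not executed):
+  # for n samples the half-width of the interval is
+  #   confInt = sqrt(n - 1) * stdErr * sqrt(1/inverseBetaRegularized(n - 1) - 1)
+  # so the reported 95% range is [mean - confInt, mean + confInt]; see the
+  # lower and upper accessors below.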
+  
+  def lower
+    mean-confInt
+  end
+  
+  def upper
+    mean+confInt
+  end
+  
+  def geometricMean
+    computeGeometricMean(array)
+  end
+  
+  def harmonicMean
+    computeHarmonicMean(array)
+  end
+  
+  def compareTo(other)
+    return CantSay.new unless ok? and other.ok?
+    
+    if upper &lt; other.lower
+      Faster.new(other.mean/mean)
+    elsif lower &gt; other.upper
+      Slower.new(mean/other.mean)
+    elsif mean &gt; other.mean
+      MayBeSlower.new(mean/other.mean)
+    else
+      NoChange.new(other.mean/mean)
+    end
+  end
+  
+  def to_s
+    &quot;size = #{size}, mean = #{mean}, stdDev = #{stdDev}, stdErr = #{stdErr}, confInt = #{confInt}&quot;
+  end
+  
+  def jsonMap
+    if ok?
+      {&quot;data&quot;=&gt;jsonSanitize(@array), &quot;mean&quot;=&gt;jsonSanitize(mean), &quot;confInt&quot;=&gt;jsonSanitize(confInt)}
+    else
+      &quot;ERROR&quot;
+    end
+  end
+end
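+
+# Minimal usage sketch (hypothetical numbers, not part of the harness):
+#
+#   a = Stats.new
+#   [101.0, 99.0, 100.5].each { | ms | a.add(ms) }
+#   b = Stats.new
+#   [110.0, 112.0, 111.0].each { | ms | b.add(ms) }
+#   a.compareTo(b)   # =&gt; a Faster/Slower/MayBeSlower/NoChange object,
+#                    #    depending on whether the confidence intervals overlap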
+
+def doublePuts(out1,out2,msg)
+  out1.puts &quot;#{out2.path}: #{msg}&quot; if $verbosity&gt;=3
+  out2.puts msg
+end
+
+class Benchfile &lt; File
+  @@counter = 0
+  
+  attr_reader :filename, :basename
+  
+  def initialize(name)
+    @basename, @filename = Benchfile.uniqueFilename(name)
+    super(@filename, &quot;w&quot;)
+  end
+  
+  def self.uniqueFilename(name)
+    if name.is_a? Array
+      basename = name[0] + @@counter.to_s + name[1]
+    else
+      basename = name + @@counter.to_s
+    end
+    filename = BENCH_DATA_PATH + basename
+    @@counter += 1
+    raise &quot;Benchfile #{filename} already exists&quot; if FileTest.exist?(filename)
+    [basename, filename]
+  end
+  
+  def self.create(name)
+    file = Benchfile.new(name)
+    yield file
+    file.close
+    file.basename
+  end
+end
+
+$dataFiles={}
+def ensureFile(key, filename)
+  unless $dataFiles[key]
+    $dataFiles[key] = Benchfile.create(key) {
+      | outp |
+      doublePuts($stderr,outp,IO::read(filename))
+    }
+  end
+  $dataFiles[key]
+end
+
+# Helper for files that cannot be renamed.
+$absoluteFiles={}
+def ensureAbsoluteFile(filename, basedir=nil)
+  return if $absoluteFiles[filename]
+  filename = Pathname.new(filename)
+
+  directory = Pathname.new('')
+  if basedir and filename.dirname != basedir
+    remainingPath = filename.dirname
+    while remainingPath != basedir
+      directory = remainingPath.basename + directory
+      remainingPath = remainingPath.dirname
+    end
+    if not $absoluteFiles[directory]
+      cmd = &quot;mkdir -p #{Shellwords.shellescape((BENCH_DATA_PATH + directory).to_s)}&quot;
+      $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity &gt;= 2
+      raise unless system(cmd)
+      intermediateDirectory = Pathname.new(directory)
+      while intermediateDirectory.basename.to_s != &quot;.&quot;
+        $absoluteFiles[intermediateDirectory] = true
+        intermediateDirectory = intermediateDirectory.dirname
+      end
+    end
+  end
+  
+  cmd = &quot;cp #{Shellwords.shellescape(filename.to_s)} #{Shellwords.shellescape((BENCH_DATA_PATH + directory + filename.basename).to_s)}&quot;
+  $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity &gt;= 2
+  raise unless system(cmd)
+  $absoluteFiles[filename] = true
+end
+
+# Helper for large benchmarks with lots of files and directories.
+def ensureBenchmarkFiles(rootdir)
+    toProcess = [rootdir]
+    while not toProcess.empty?
+      currdir = toProcess.pop
+      Dir.foreach(currdir.to_s) {
+        | filename |
+        path = currdir + filename
+        next if filename.match(/^\./)
+        toProcess.push(path) if File.directory?(path.to_s)
+        ensureAbsoluteFile(path, rootdir) if File.file?(path.to_s)
+      }
+    end
+end
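+
+# For example, the DSP benchmarks below call ensureBenchmarkFiles(DSPJS_ROUTE9_PATH)
+# and friends to mirror an entire benchmark directory tree into BENCH_DATA_PATH
+# before the runscript executes.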
+
+class JSCommand
+  attr_reader :js, :html
+  def initialize(js, html)
+    @js = js
+    @html = html
+  end
+end
+
+def loadCommandForFile(key, filename)
+  file = ensureFile(key, filename)
+  JSCommand.new(&quot;load(#{file.inspect});&quot;, &quot;&lt;script src=#{file.inspect}&gt;&lt;/script&gt;&quot;)
+end
+
+def simpleCommand(command)
+  JSCommand.new(command, &quot;&lt;script type=\&quot;text/javascript\&quot;&gt;#{command}&lt;/script&gt;&quot;)
+end
+
+# Benchmark that consists of a single file and must be loaded in its own global object each
+# time (i.e. run()).
+class SingleFileTimedBenchmarkParameters
+  attr_reader :benchPath
+  
+  def initialize(benchPath)
+    @benchPath = benchPath
+  end
+  
+  def kind
+    :singleFileTimedBenchmark
+  end
+end
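+
+# For instance, SunSpiderBenchmark#emitRunCode below constructs this as
+#   SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;SunSpider-foo&quot;, &quot;#{SUNSPIDER_PATH}/foo.js&quot;))
+# where &quot;foo&quot; stands in for the benchmark name.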
+
+# Benchmark that consists of one or more data files that should be loaded globally, followed
+# by a command to run the benchmark.
+class MultiFileTimedBenchmarkParameters
+  attr_reader :dataPaths, :command
+
+  def initialize(dataPaths, command)
+    @dataPaths = dataPaths
+    @command = command
+  end
+  
+  def kind
+    :multiFileTimedBenchmark
+  end
+end
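+
+# For instance, KrakenBenchmark#emitRunCode below passes the Kraken data file
+# as a dataPath and a loadCommandForFile(...) command that loads and times the
+# benchmark proper.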
+
+# Benchmark that consists of one or more data files that should be loaded globally, followed
+# by a command to run a short tick of the benchmark. The benchmark should be run for as many
+# ticks as possible, for one quantum (quantum is 1000ms by default).
+class ThroughputBenchmarkParameters
+  attr_reader :dataPaths, :setUpCommand, :command, :tearDownCommand, :doWarmup, :deterministic, :minimumIterations
+
+  def initialize(dataPaths, setUpCommand, command, tearDownCommand, doWarmup, deterministic, minimumIterations)
+    @dataPaths = dataPaths
+    @setUpCommand = setUpCommand
+    @command = command
+    @tearDownCommand = tearDownCommand
+    @doWarmup = doWarmup
+    @deterministic = deterministic
+    @minimumIterations = minimumIterations
+  end
+  
+  def kind
+    :throughputBenchmark
+  end
+end
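+
+# For instance, OctaneBenchmark#emitRunCode below builds one of these from the
+# suite files plus a jsc-* wrapper, with setUp/run/tearDown commands:
+#   ThroughputBenchmarkParameters.new(files, simpleCommand(&quot;jscSetUp();&quot;),
+#       simpleCommand(&quot;jscRun();&quot;), simpleCommand(&quot;jscTearDown();&quot;),
+#       doWarmup, deterministic, minimumIterations)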
+
+# Benchmark that can only run in DumpRenderTree or WebKitTestRunner, that has its own callback for reporting
+# results. Other than that it's just like SingleFileTimedBenchmark.
+class SingleFileTimedCallbackBenchmarkParameters
+  attr_reader :callbackDecl, :benchPath
+  
+  def initialize(callbackDecl, benchPath)
+    @callbackDecl = callbackDecl
+    @benchPath = benchPath
+  end
+  
+  def kind
+    :singleFileTimedCallbackBenchmark
+  end
+end
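+
+# For instance, JSBenchBenchmark#emitRunCode below supplies a JSBNG_handleResult
+# callback declaration that forwards the measured time to reportResult().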
+
+def emitTimerFunctionCode(file)
+  case $timeMode
+  when :preciseTime
+    doublePuts($stderr,file,&quot;function __bencher_curTimeMS() {&quot;)
+    doublePuts($stderr,file,&quot;   return preciseTime()*1000&quot;)
+    doublePuts($stderr,file,&quot;}&quot;)
+  when :date
+    doublePuts($stderr,file,&quot;function __bencher_curTimeMS() {&quot;)
+    doublePuts($stderr,file,&quot;   return Date.now()&quot;)
+    doublePuts($stderr,file,&quot;}&quot;)
+  else
+    raise
+  end
+end
+
+def emitBenchRunCodeFile(name, plan, benchParams)
+  case plan.vm.vmType
+  when :jsc
+    Benchfile.create(&quot;bencher&quot;) {
+      | file |
+      emitTimerFunctionCode(file)
+      
+      if benchParams.kind == :multiFileTimedBenchmark
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,&quot;load(#{path.inspect});&quot;)
+        }
+        doublePuts($stderr,file,&quot;gc();&quot;)
+        doublePuts($stderr,file,&quot;for (var __bencher_index = 0; __bencher_index &lt; #{$warmup+$inner}; ++__bencher_index) {&quot;)
+        doublePuts($stderr,file,&quot;   var __before = __bencher_curTimeMS();&quot;)
+        $rerun.times {
+          doublePuts($stderr,file,&quot;   #{benchParams.command.js}&quot;)
+        }
+        doublePuts($stderr,file,&quot;   var __after = __bencher_curTimeMS();&quot;)
+        doublePuts($stderr,file,&quot;   if (__bencher_index &gt;= #{$warmup}) print(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot; + (__bencher_index - #{$warmup}) + \&quot;: Time: \&quot;+(__after-__before));&quot;);
+        doublePuts($stderr,file,&quot;   gc();&quot;) unless plan.vm.shouldMeasureGC
+        doublePuts($stderr,file,&quot;}&quot;)
+      elsif benchParams.kind == :throughputBenchmark
+        emitTimerFunctionCode(file)
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,&quot;load(#{path.inspect});&quot;)
+        }
+        doublePuts($stderr,file,&quot;#{benchParams.setUpCommand.js}&quot;)
+        if benchParams.doWarmup
+          warmup = $warmup
+        else
+          warmup = 0
+        end
+        doublePuts($stderr,file,&quot;for (var __bencher_index = 0; __bencher_index &lt; #{warmup + $inner}; __bencher_index++) {&quot;)
+        doublePuts($stderr,file,&quot;    var __before = __bencher_curTimeMS();&quot;)
+        doublePuts($stderr,file,&quot;    var __after = __before;&quot;)
+        doublePuts($stderr,file,&quot;    var __runs = 0;&quot;)
+        doublePuts($stderr,file,&quot;    var __expected = #{$quantum};&quot;)
+        doublePuts($stderr,file,&quot;    while (true) {&quot;)
+        $rerun.times {
+          doublePuts($stderr,file,&quot;       #{benchParams.command.js}&quot;)
+        }
+        doublePuts($stderr,file,&quot;       __runs++;&quot;)
+        doublePuts($stderr,file,&quot;       __after = __bencher_curTimeMS();&quot;)
+        if benchParams.deterministic
+          doublePuts($stderr,file,&quot;       if (true) {&quot;)
+        else
+          doublePuts($stderr,file,&quot;       if (__after - __before &gt;= __expected) {&quot;)
+        end
+        doublePuts($stderr,file,&quot;           if (__runs &gt;= #{benchParams.minimumIterations} || __bencher_index &lt; #{warmup})&quot;)
+        doublePuts($stderr,file,&quot;               break;&quot;)
+        doublePuts($stderr,file,&quot;           __expected += #{$quantum}&quot;)
+        doublePuts($stderr,file,&quot;       }&quot;)
+        doublePuts($stderr,file,&quot;    }&quot;)
+        doublePuts($stderr,file,&quot;    if (__bencher_index &gt;= #{warmup}) print(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot; + (__bencher_index - #{warmup}) + \&quot;: Time: \&quot;+((__after-__before)/__runs));&quot;)
+        doublePuts($stderr,file,&quot;}&quot;)
+        doublePuts($stderr,file,&quot;#{benchParams.tearDownCommand.js}&quot;)
+      else
+        raise unless benchParams.kind == :singleFileTimedBenchmark
+        doublePuts($stderr,file,&quot;function __bencher_run(__bencher_what) {&quot;)
+        doublePuts($stderr,file,&quot;   var __bencher_before = __bencher_curTimeMS();&quot;)
+        $rerun.times {
+          doublePuts($stderr,file,&quot;   run(__bencher_what);&quot;)
+        }
+        doublePuts($stderr,file,&quot;   var __bencher_after = __bencher_curTimeMS();&quot;)
+        doublePuts($stderr,file,&quot;   return __bencher_after - __bencher_before;&quot;)
+        doublePuts($stderr,file,&quot;}&quot;)
+        $warmup.times {
+          doublePuts($stderr,file,&quot;__bencher_run(#{benchParams.benchPath.inspect})&quot;)
+          doublePuts($stderr,file,&quot;gc();&quot;) unless plan.vm.shouldMeasureGC
+        }
+        $inner.times {
+          | innerIndex |
+          doublePuts($stderr,file,&quot;print(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: #{innerIndex}: Time: \&quot;+__bencher_run(#{benchParams.benchPath.inspect}));&quot;)
+          doublePuts($stderr,file,&quot;gc();&quot;) unless plan.vm.shouldMeasureGC
+        }
+      end
+    }
+  when :dumpRenderTree, :webkitTestRunner
+    case $timeMode
+    when :preciseTime
+      curTime = &quot;(testRunner.preciseTime()*1000)&quot;
+    when :date
+      curTime = &quot;(Date.now())&quot;
+    else
+      raise
+    end
+
+    mainCode = Benchfile.create(&quot;bencher&quot;) {
+      | file |
+      doublePuts($stderr,file,&quot;__bencher_count = 0;&quot;)
+      doublePuts($stderr,file,&quot;function __bencher_doNext(result) {&quot;)
+      doublePuts($stderr,file,&quot;    if (__bencher_count &gt;= #{$warmup})&quot;)
+      doublePuts($stderr,file,&quot;        debug(\&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot; + (__bencher_count - #{$warmup}) + \&quot;: Time: \&quot; + result);&quot;)
+      doublePuts($stderr,file,&quot;    __bencher_count++;&quot;)
+      doublePuts($stderr,file,&quot;    if (__bencher_count &lt; #{$inner+$warmup})&quot;)
+      doublePuts($stderr,file,&quot;        __bencher_runImpl(__bencher_doNext);&quot;)
+      doublePuts($stderr,file,&quot;    else&quot;)
+      doublePuts($stderr,file,&quot;        quit();&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+      doublePuts($stderr,file,&quot;__bencher_runImpl(__bencher_doNext);&quot;)
+    }
+    
+    cssCode = Benchfile.create(&quot;bencher-css&quot;) {
+      | file |
+      doublePuts($stderr,file,&quot;.pass {\n    font-weight: bold;\n    color: green;\n}\n.fail {\n    font-weight: bold;\n    color: red;\n}\n\#console {\n    white-space: pre-wrap;\n    font-family: monospace;\n}&quot;)
+    }
+    
+    preCode = Benchfile.create(&quot;bencher-pre&quot;) {
+      | file |
+      doublePuts($stderr,file,&quot;if (window.testRunner) {&quot;)
+      doublePuts($stderr,file,&quot;    testRunner.dumpAsText(window.enablePixelTesting);&quot;)
+      doublePuts($stderr,file,&quot;    testRunner.waitUntilDone();&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+      doublePuts($stderr,file,&quot;&quot;)
+      doublePuts($stderr,file,&quot;function debug(msg)&quot;)
+      doublePuts($stderr,file,&quot;{&quot;)
+      doublePuts($stderr,file,&quot;    var span = document.createElement(\&quot;span\&quot;);&quot;)
+      doublePuts($stderr,file,&quot;    document.getElementById(\&quot;console\&quot;).appendChild(span); // insert it first so XHTML knows the namespace&quot;)
+      doublePuts($stderr,file,&quot;    span.innerHTML = msg + '&lt;br /&gt;';&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+      doublePuts($stderr,file,&quot;&quot;)
+      doublePuts($stderr,file,&quot;function quit() {&quot;)
+      doublePuts($stderr,file,&quot;    testRunner.notifyDone();&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+      doublePuts($stderr,file,&quot;&quot;)
+      doublePuts($stderr,file,&quot;__bencher_continuation=null;&quot;)
+      doublePuts($stderr,file,&quot;&quot;)
+      doublePuts($stderr,file,&quot;function reportResult(result) {&quot;)
+      doublePuts($stderr,file,&quot;    __bencher_continuation(result);&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+      if benchParams.kind == :singleFileTimedCallbackBenchmark
+        doublePuts($stderr,file,&quot;&quot;)
+        doublePuts($stderr,file,benchParams.callbackDecl)
+      end
+      doublePuts($stderr,file,&quot;&quot;)
+      doublePuts($stderr,file,&quot;function __bencher_runImpl(continuation) {&quot;)
+      doublePuts($stderr,file,&quot;    function doit() {&quot;)
+      doublePuts($stderr,file,&quot;        document.getElementById(\&quot;frameparent\&quot;).innerHTML = \&quot;\&quot;;&quot;)
+      doublePuts($stderr,file,&quot;        document.getElementById(\&quot;frameparent\&quot;).innerHTML = \&quot;&lt;iframe id='testframe'&gt;\&quot;;&quot;)
+      doublePuts($stderr,file,&quot;        var testFrame = document.getElementById(\&quot;testframe\&quot;);&quot;)
+      doublePuts($stderr,file,&quot;        testFrame.contentDocument.open();&quot;)
+      doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;!DOCTYPE html&gt;\\n&lt;head&gt;&lt;/head&gt;&lt;body&gt;&lt;div id=\\\&quot;console\\\&quot;&gt;&lt;/div&gt;\&quot;);&quot;)
+      if benchParams.kind == :throughputBenchmark or benchParams.kind == :multiFileTimedBenchmark
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script src=#{path.inspect.inspect[1..-2]}&gt;&lt;/script&gt;\&quot;);&quot;)
+        }
+      end
+      if benchParams.kind == :throughputBenchmark
+        if benchParams.doWarmup
+          warmup = $warmup
+        else
+          warmup = 0
+        end
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script type=\\\&quot;text/javascript\\\&quot;&gt;\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;#{benchParams.setUpCommand.js}\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;var __bencher_before = #{curTime};\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;var __bencher_after = __bencher_before;\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;var __bencher_expected = #{$quantum};\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;var __bencher_runs = 0;\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;while (true) {\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    #{benchParams.command.js}\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    __bencher_runs++;\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    __bencher_after = #{curTime};\&quot;);&quot;)
+        if benchParams.deterministic
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    if (true) {\&quot;);&quot;)
+        else
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    if (__bencher_after - __bencher_before &gt;= __bencher_expected) {\&quot;);&quot;)
+        end
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;        if (__bencher_runs &gt;= #{benchParams.minimumIterations} || window.parent.__bencher_count &lt; #{warmup})\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;            break;\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;        __bencher_expected += #{$quantum}\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;    }\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;}\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;#{benchParams.tearDownCommand.js}\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;window.parent.reportResult((__bencher_after - __bencher_before) / __bencher_runs);\&quot;);&quot;)
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;/script&gt;\&quot;);&quot;)
+      else
+        doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script type=\\\&quot;text/javascript\\\&quot;&gt;var __bencher_before = #{curTime};&lt;/script&gt;\&quot;);&quot;)
+        if benchParams.kind == :multiFileTimedBenchmark
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(#{benchParams.command.html.inspect});&quot;)
+        else
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script src=#{benchParams.benchPath.inspect.inspect[1..-2]}&gt;&lt;/script&gt;\&quot;);&quot;)
+        end
+        unless benchParams.kind == :singleFileTimedCallbackBenchmark
+          doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;script type=\\\&quot;text/javascript\\\&quot;&gt;window.parent.reportResult(#{curTime} - __bencher_before);&lt;/script&gt;\&quot;);&quot;)
+        end
+      end
+      doublePuts($stderr,file,&quot;        testFrame.contentDocument.write(\&quot;&lt;/body&gt;&lt;/html&gt;\&quot;);&quot;)
+      doublePuts($stderr,file,&quot;        testFrame.contentDocument.close();&quot;)
+      doublePuts($stderr,file,&quot;    }&quot;)
+      doublePuts($stderr,file,&quot;    __bencher_continuation = continuation;&quot;)
+      doublePuts($stderr,file,&quot;    window.setTimeout(doit, 10);&quot;)
+      doublePuts($stderr,file,&quot;}&quot;)
+    }
+
+    Benchfile.create([&quot;bencher-htmldoc&quot;,&quot;.html&quot;]) {
+      | file |
+      doublePuts($stderr,file,&quot;&lt;!DOCTYPE HTML PUBLIC \&quot;-//IETF//DTD HTML//EN\&quot;&gt;\n&lt;html&gt;&lt;head&gt;&lt;link rel=\&quot;stylesheet\&quot; href=\&quot;#{cssCode}\&quot;&gt;&lt;script src=\&quot;#{preCode}\&quot;&gt;&lt;/script&gt;&lt;/head&gt;&lt;body&gt;&lt;div id=\&quot;console\&quot;&gt;&lt;/div&gt;&lt;div id=\&quot;frameparent\&quot;&gt;&lt;/div&gt;&lt;script src=\&quot;#{mainCode}\&quot;&gt;&lt;/script&gt;&lt;/body&gt;&lt;/html&gt;&quot;)
+    }
+  else
+    raise
+  end
+end
+
+def emitBenchRunCode(name, plan, benchParams)
+  plan.vm.emitRunCode(emitBenchRunCodeFile(name, plan, benchParams), plan)
+end
+
+class FileCreator
+  def initialize(filename)
+    @filename = filename
+    @state = :empty
+  end
+  
+  def puts(text)
+    $script.print &quot;echo #{Shellwords.shellescape(text)}&quot;
+    if @state == :empty
+      $script.print &quot; &gt; &quot;
+      @state = :nonEmpty
+    else
+      $script.print &quot; &gt;&gt; &quot;
+    end
+    $script.puts &quot;#{Shellwords.shellescape(@filename)}&quot;
+  end
+  
+  def close
+    if @state == :empty
+      # Nothing was emitted; ensure the file exists and is empty.
+      $script.puts &quot;rm -f #{Shellwords.shellescape(@filename)}&quot;
+      $script.puts &quot;touch #{Shellwords.shellescape(@filename)}&quot;
+    end
+  end
+  
+  def self.open(filename)
+    outp = FileCreator.new(filename)
+    yield outp
+    outp.close
+  end
+end
+
+def emitSelfContainedBenchRunCode(name, plan, targetFile, configFile, benchmark)
+  FileCreator.open(configFile) {
+    | outp |
+    outp.puts &quot;__bencher_message = \&quot;#{name}: #{plan.vm}: #{plan.iteration}: \&quot;;&quot;
+    outp.puts &quot;__bencher_warmup = #{$warmup};&quot;
+    outp.puts &quot;__bencher_inner = #{$inner};&quot;
+    outp.puts &quot;__bencher_benchmark = #{benchmark.to_json};&quot;
+    case $timeMode
+    when :preciseTime
+      outp.puts &quot;__bencher_curTime = (function(){ return testRunner.preciseTime() * 1000; });&quot;
+    when :date
+      outp.puts &quot;__bencher_curTime = (function(){ return Date.now(); });&quot;
+    else
+      raise
+    end
+  }
+  
+  plan.vm.emitRunCode(targetFile, plan)
+end
+
+def planForDescription(string, plans, benchFullname, vmName, iteration)
+  raise &quot;Unexpected benchmark full name: #{benchFullname.inspect}, string: #{string.inspect}&quot; unless benchFullname =~ /\//
+  suiteName = $~.pre_match
+  return nil if suiteName == &quot;WARMUP&quot;
+  benchName = $~.post_match
+  result = plans.select{|v| v.suite.name == suiteName and v.benchmark.name == benchName and v.vm.name == vmName and v.iteration == iteration}
+  raise &quot;Unexpected result dimensions: #{result.inspect}, string: #{string.inspect}&quot; unless result.size == 1
+  result[0]
+end
+
+class ParsedResult
+  attr_reader :plan, :innerIndex, :time, :result
+  
+  def initialize(plan, innerIndex, time)
+    @plan = plan
+    @innerIndex = innerIndex
+    if time == :crashed
+      @result = :error
+    else
+      @time = time
+      @result = :success
+    end
+    
+    raise unless @plan.is_a? BenchPlan
+    raise unless @innerIndex.is_a? Integer
+    raise unless @time.is_a? Numeric or @result == :error
+  end
+  
+  def benchmark
+    plan.benchmark
+  end
+  
+  def suite
+    plan.suite
+  end
+  
+  def vm
+    plan.vm
+  end
+  
+  def outerIndex
+    plan.iteration
+  end
+  
+  def self.create(plan, innerIndex, time)
+    if plan
+      ParsedResult.new(plan, innerIndex, time)
+    else
+      nil
+    end
+  end
+  
+  def self.parse(plans, string)
+    if string =~ /([a-zA-Z0-9\/_.-]+): ([a-zA-Z0-9_#. ]+): ([0-9]+): ([0-9]+): Time: /
+      benchFullname = $1
+      vmName = $2
+      outerIndex = $3.to_i
+      innerIndex = $4.to_i
+      time = $~.post_match.to_f
+      ParsedResult.create(planForDescription(string, plans, benchFullname, vmName, outerIndex), innerIndex, time)
+    elsif string =~ /([a-zA-Z0-9\/_.-]+): ([a-zA-Z0-9_#. ]+): ([0-9]+): ([0-9]+): CRASHED/
+      benchFullname = $1
+      vmName = $2
+      outerIndex = $3.to_i
+      innerIndex = $4.to_i
+      time = $~.post_match.to_f
+      ParsedResult.create(planForDescription(string, plans, benchFullname, vmName, outerIndex), innerIndex, :crashed)
+    else
+      nil
+    end
+  end
+end
+
+class VM
+  @@extraEnvSet = {}
+  
+  def initialize(origPath, name, nameKind, svnRevision)
+    @origPath = origPath.to_s
+    @path = origPath.to_s
+    @name = name
+    @nameKind = nameKind
+    @extraEnv = {}
+    
+    if $forceVMKind
+      @vmType = $forceVMKind
+    else
+      if @origPath =~ /DumpRenderTree$/
+        @vmType = :dumpRenderTree
+      elsif @origPath =~ /WebKitTestRunner$/
+        @vmType = :webkitTestRunner
+      else
+        @vmType = :jsc
+      end
+    end
+    
+    @svnRevision = svnRevision
+    
+    # Try to detect information about the VM.
+    if path =~ /\/WebKitBuild\/(Release|Debug)+\/([a-zA-Z]+)$/
+      @checkoutPath = $~.pre_match
+      # FIXME: Use some variant of this: 
+      # &lt;bdash&gt;   def retrieve_revision
+      # &lt;bdash&gt;     `perl -I#{@path}/Tools/Scripts -MVCSUtils -e 'print svnRevisionForDirectory(&quot;#{@path}&quot;);'`.to_i
+      # &lt;bdash&gt;   end
+      unless @svnRevision
+        begin
+          Dir.chdir(@checkoutPath) {
+            $stderr.puts &quot;&gt;&gt; cd #{@checkoutPath} &amp;&amp; svn info&quot; if $verbosity&gt;=2
+            IO.popen(&quot;svn info&quot;, &quot;r&quot;) {
+              | inp |
+              inp.each_line {
+                | line |
+                if line =~ /Revision: ([0-9]+)/
+                  @svnRevision = $1
+                end
+              }
+            }
+          }
+          unless @svnRevision
+            $stderr.puts &quot;Warning: running svn info for #{name} silently failed.&quot;
+          end
+        rescue =&gt; e
+          # Failed to detect svn revision.
+          $stderr.puts &quot;Warning: could not get svn revision information for #{name}: #{e}&quot;
+        end
+      end
+    else
+      $stderr.puts &quot;Warning: could not identify checkout location for #{name}&quot;
+    end
+    
+    if @path =~ /\/Release\/([a-zA-Z]+)$/
+      @libPath, @relativeBinPath = [$~.pre_match+&quot;/Release&quot;], &quot;./#{$1}&quot;
+    elsif @path =~ /\/Debug\/([a-zA-Z]+)$/
+      @libPath, @relativeBinPath = [$~.pre_match+&quot;/Debug&quot;], &quot;./#{$1}&quot;
+    elsif @path =~ /\/Contents\/Resources\/([a-zA-Z]+)$/
+      @libPath = [$~.pre_match + &quot;/Contents/Resources&quot;, $~.pre_match + &quot;/Contents/Frameworks&quot;]
+    elsif @path =~ /\/JavaScriptCore.framework\/Resources\/([a-zA-Z]+)$/
+      @libPath, @relativeBinPath = [$~.pre_match], $&amp;[1..-1]
+    end
+  end
+  
+  def canCopyIntoBenchPath
+    if @libPath and @relativeBinPath
+      true
+    else
+      false
+    end
+  end
+  
+  def addExtraEnv(key, val)
+    @extraEnv[key] = val
+    @@extraEnvSet[key] = true
+  end
+  
+  def copyIntoBenchPath
+    raise unless canCopyIntoBenchPath
+    basename, filename = Benchfile.uniqueFilename(&quot;vm&quot;)
+    raise unless Dir.mkdir(filename)
+    @libPath.each {
+      | libPathPart |
+      cmd = &quot;cp -a #{Shellwords.shellescape(libPathPart)}/* #{Shellwords.shellescape(filename.to_s)}&quot;
+      $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
+      raise unless system(cmd)
+    }
+    @path = &quot;#{basename}/#{@relativeBinPath}&quot;
+    @libPath = [basename]
+  end
+  
+  def to_s
+    @name
+  end
+  
+  def name
+    @name
+  end
+  
+  def shouldMeasureGC
+    $measureGC == true or ($measureGC == name)
+  end
+  
+  def origPath
+    @origPath
+  end
+  
+  def path
+    @path
+  end
+  
+  def nameKind
+    @nameKind
+  end
+  
+  def vmType
+    @vmType
+  end
+  
+  def checkoutPath
+    @checkoutPath
+  end
+  
+  def svnRevision
+    @svnRevision
+  end
+  
+  def extraEnv
+    @extraEnv
+  end
+  
+  def printFunction
+    case @vmType
+    when :jsc
+      &quot;print&quot;
+    when :dumpRenderTree, :webkitTestRunner
+      &quot;debug&quot;
+    else
+      raise @vmType
+    end
+  end
+  
+  def emitRunCode(fileToRun, plan)
+    myLibPath = @libPath
+    myLibPath = [] unless myLibPath
+    @@extraEnvSet.keys.each {
+      | key |
+      $script.puts &quot;unset #{Shellwords.shellescape(key)}&quot;
+    }
+    $script.puts &quot;export DYLD_LIBRARY_PATH=#{Shellwords.shellescape(myLibPath.join(':').to_s)}&quot;
+    $script.puts &quot;export DYLD_FRAMEWORK_PATH=#{Shellwords.shellescape(myLibPath.join(':').to_s)}&quot;
+    @extraEnv.each_pair {
+      | key, val |
+      $script.puts &quot;export #{Shellwords.shellescape(key)}=#{Shellwords.shellescape(val)}&quot;
+    }
+    plan.environment.each_pair {
+        | key, val |
+        $script.puts &quot;export #{Shellwords.shellescape(key)}=#{Shellwords.shellescape(val)}&quot;
+    }
+    $script.puts &quot;#{path} #{fileToRun} 2&gt;&amp;1 || {&quot;
+    $script.puts &quot;    echo &quot; + Shellwords.shellescape(&quot;#{name} failed to run!&quot;) + &quot; 1&gt;&amp;2&quot;
+    $inner.times {
+      | iteration |
+      $script.puts &quot;    echo &quot; + Shellwords.shellescape(&quot;#{plan.prefix}: #{iteration}: CRASHED&quot;)
+    }
+    $script.puts &quot;}&quot;
+    plan.environment.keys.each {
+        | key |
+        $script.puts &quot;export #{Shellwords.shellescape(key)}=&quot;
+    }
+  end
+end
+
+class StatsAccumulator
+  def initialize
+    @stats = []
+    ($outer*$inner).times {
+      @stats &lt;&lt; Stats.new
+    }
+  end
+  
+  def statsForIteration(outerIteration, innerIteration)
+    @stats[outerIteration*$inner + innerIteration]
+  end
+  
+  def stats
+    result = Stats.new
+    @stats.each {
+      | stat |
+      if stat.ok?
+        result.add(yield stat)
+      else
+        result.add(nil)
+      end
+    }
+    result
+  end
+  
+  def geometricMeanStats
+    stats {
+      | stat |
+      stat.geometricMean
+    }
+  end
+  
+  def arithmeticMeanStats
+    stats {
+      | stat |
+      stat.arithmeticMean
+    }
+  end
+end
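+
+# Usage sketch (hypothetical, not part of the harness): an accumulator holds
+# one Stats per (outer, inner) iteration pair, so per-iteration samples from
+# many benchmarks can be combined and then summarized:
+#
+#   acc = StatsAccumulator.new
+#   acc.statsForIteration(0, 0).add(12.5)   # record one sample
+#   acc.geometricMeanStats                  # Stats of per-iteration geometric means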
+
+module Benchmark
+  attr_accessor :benchmarkSuite
+  attr_reader :name
+  
+  def fullname
+    benchmarkSuite.name + &quot;/&quot; + name
+  end
+  
+  def to_s
+    fullname
+  end
+  
+  def weight
+    1
+  end
+  
+  def weightString
+    raise unless weight.is_a? Fixnum
+    raise unless weight &gt;= 1
+    if weight == 1
+      &quot;&quot;
+    else
+      &quot;x#{weight} &quot;
+    end
+  end
+end
+
+class SunSpiderBenchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;SunSpider-#{@name}&quot;, &quot;#{SUNSPIDER_PATH}/#{@name}.js&quot;)))
+  end
+end
+
+class LongSpiderBenchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;LongSpider-#{@name}&quot;, &quot;#{LONGSPIDER_PATH}/#{@name}.js&quot;)))
+  end
+end
+
+class V8Benchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;V8-#{@name}&quot;, &quot;#{V8_PATH}/v8-#{@name}.js&quot;)))
+  end
+end
+
+class V8RealBenchmark
+  include Benchmark
+  
+  attr_reader :v8SuiteName
+  
+  def initialize(v8SuiteName, name, weight, minimumIterations)
+    @v8SuiteName = v8SuiteName
+    @name = name
+    @weight = weight
+    @minimumIterations = minimumIterations
+  end
+  
+  def weight
+    @weight
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new([&quot;base&quot;, @v8SuiteName, &quot;jsc-#{@name}&quot;].collect{|v| ensureFile(&quot;V8Real-#{v}&quot;, &quot;#{V8_REAL_PATH}/#{v}.js&quot;)}, simpleCommand(&quot;jscSetUp();&quot;), simpleCommand(&quot;jscRun();&quot;), simpleCommand(&quot;jscTearDown();&quot;), true, false, @minimumIterations))
+  end
+end
+
+class OctaneBenchmark
+  include Benchmark
+  
+  def initialize(files, name, weight, doWarmup, deterministic, minimumIterations)
+    @files = files
+    @name = name
+    @weight = weight
+    @doWarmup = doWarmup
+    @deterministic = deterministic
+    @minimumIterations = minimumIterations
+  end
+  
+  def weight
+    @weight
+  end
+  
+  def emitRunCode(plan)
+    files = []
+    files += ([&quot;base&quot;] + @files).collect {
+      | v |
+      ensureFile(&quot;Octane-#{v}&quot;, &quot;#{OCTANE_PATH}/#{v}.js&quot;)
+    }
+    files += [&quot;jsc-#{@name}&quot;].collect {
+      | v |
+      ensureFile(&quot;Octane-#{v}&quot;, &quot;#{OCTANE_WRAPPER_PATH}/#{v}.js&quot;)
+    }
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new(files, simpleCommand(&quot;jscSetUp();&quot;), simpleCommand(&quot;jscRun();&quot;), simpleCommand(&quot;jscTearDown();&quot;), @doWarmup, @deterministic, @minimumIterations))
+  end
+end
+
+class KrakenBenchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, MultiFileTimedBenchmarkParameters.new([ensureFile(&quot;KrakenData-#{@name}&quot;, &quot;#{KRAKEN_PATH}/#{@name}-data.js&quot;)], loadCommandForFile(&quot;Kraken-#{@name}&quot;, &quot;#{KRAKEN_PATH}/#{@name}.js&quot;)))
+  end
+end
+
+class JSBenchBenchmark
+  include Benchmark
+  
+  attr_reader :jsBenchMode
+  
+  def initialize(name, jsBenchMode)
+    @name = name
+    @jsBenchMode = jsBenchMode
+  end
+  
+  def emitRunCode(plan)
+    callbackDecl  = &quot;function JSBNG_handleResult(result) {\n&quot;
+    callbackDecl += &quot;    if (result.error) {\n&quot;
+    callbackDecl += &quot;        console.log(\&quot;Did not run benchmark correctly!\&quot;);\n&quot;
+    callbackDecl += &quot;        quit();\n&quot;
+    callbackDecl += &quot;    }\n&quot;
+    callbackDecl += &quot;    reportResult(result.time);\n&quot;
+    callbackDecl += &quot;}\n&quot;
+    emitBenchRunCode(fullname, plan, SingleFileTimedCallbackBenchmarkParameters.new(callbackDecl, ensureFile(&quot;JSBench-#{@name}&quot;, &quot;#{JSBENCH_PATH}/#{@name}/#{@jsBenchMode}.js&quot;)))
+  end
+end
+
+class JSRegressBenchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;JSRegress-#{@name}&quot;, &quot;#{JSREGRESS_PATH}/#{@name}.js&quot;)))
+  end
+end
+
+class AsmBenchBenchmark
+  include Benchmark
+  
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile(&quot;AsmBench-#{@name}&quot;, &quot;#{ASMBENCH_PATH}/#{@name}.js&quot;)))
+  end
+end
+
+
+class CompressionBenchBenchmark
+  include Benchmark
+  
+  def initialize(files, name, model)
+    @files = files
+    @name = name;
+    @name = name + &quot;-&quot; + model if !model.empty?
+    @name = @name.gsub(&quot; &quot;, &quot;-&quot;).downcase
+    @scriptName = name
+    @weight = 1
+    @doWarmup = true
+    @deterministic = true
+    @minimumIterations = 1
+    @model = model
+  end
+  
+  def weight
+    @weight
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new(([&quot;base&quot;] + @files + [&quot;jsc-#{@scriptName}&quot;]).collect{|v| ensureFile(&quot;Compression-#{v}&quot;, &quot;#{COMPRESSIONBENCH_PATH}/#{v}.js&quot;)}, simpleCommand(&quot;jscSetUp('#{@model}');&quot;), simpleCommand(&quot;jscRun();&quot;), simpleCommand(&quot;jscTearDown();&quot;), @doWarmup, @deterministic, @minimumIterations))
+  end
+end
+
+class DSPJSFiltrrBenchmark
+  include Benchmark
+  
+  def initialize(name, filterKey)
+    @name = name
+    @filterKey = filterKey
+  end
+  
+  def emitRunCode(plan)
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + &quot;filtrr.js&quot;)
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + &quot;filtrr_back.jpg&quot;)
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + &quot;filtrr-jquery.min.js&quot;)
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + &quot;filtrr-bencher.html&quot;)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;filtrr-bencher.html&quot;, &quot;bencher-config.js&quot;, @filterKey)
+  end
+end
+
+class DSPJSVP8Benchmark
+  include Benchmark
+  
+  def initialize
+    @name = &quot;route9-vp8&quot;
+  end
+  
+  def weight
+    5
+  end
+  
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_ROUTE9_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;route9-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPStarfieldBenchmark
+  include Benchmark
+
+  def initialize
+    @name = &quot;starfield&quot;
+  end
+  
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_STARFIELD_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;starfield-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPJSJSLinuxBenchmark
+  include Benchmark
+  def initialize
+    @name = &quot;bellard-jslinux&quot;
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_JSLINUX_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;jslinux-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPJSQuake3Benchmark
+  include Benchmark
+
+  def initialize
+    @name = &quot;zynaps-quake3&quot;
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_QUAKE3_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;quake-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPJSMandelbrotBenchmark
+  include Benchmark
+
+  def initialize
+    @name = &quot;zynaps-mandelbrot&quot;
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_MANDELBROT_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;mandelbrot-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPJSAmmoJSASMBenchmark
+  include Benchmark
+
+  def initialize
+    @name = &quot;ammojs-asm-js&quot;
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_AMMOJS_ASMJS_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;ammo-asmjs-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class DSPJSAmmoJSRegularBenchmark
+  include Benchmark
+
+  def initialize
+    @name = &quot;ammojs-regular-js&quot;
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_AMMOJS_REGULAR_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;ammo-regular-bencher.html&quot;, &quot;bencher-config.js&quot;, &quot;&quot;)
+  end
+end
+
+class BrowsermarkJSBenchmark
+  include Benchmark
+    
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new([ensureFile(name, &quot;#{BROWSERMARK_JS_PATH}/#{name}/test.js&quot;), ensureFile(&quot;browsermark-bencher&quot;, &quot;#{BROWSERMARK_JS_PATH}/browsermark-bencher.js&quot;)], simpleCommand(&quot;jscSetUp();&quot;), simpleCommand(&quot;jscRun();&quot;), simpleCommand(&quot;jscTearDown();&quot;), true, 32))
+  end
+end
+
+class BrowsermarkDOMBenchmark
+  include Benchmark
+    
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(BROWSERMARK_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, &quot;tests/benchmarks/dom/#{name}/index.html&quot;, &quot;bencher-config.js&quot;, name)
+  end
+end
+
+class BenchmarkSuite
+  def initialize(name, preferredMean, decimalShift)
+    @name = name
+    @preferredMean = preferredMean
+    @benchmarks = []
+    @subSuites = []
+    @decimalShift = decimalShift
+  end
+  
+  def name
+    @name
+  end
+  
+  def to_s
+    @name
+  end
+  
+  def decimalShift
+    @decimalShift
+  end
+  
+  def addIgnoringPattern(benchmark)
+    benchmark.benchmarkSuite = self
+    @benchmarks &lt;&lt; benchmark
+  end
+  
+  def add(benchmark)
+    if not $benchmarkPattern or &quot;#{@name}/#{benchmark.name}&quot; =~ $benchmarkPattern
+        addIgnoringPattern(benchmark)
+    end
+  end
+  
+  def addSubSuite(subSuite)
+    @subSuites &lt;&lt; subSuite
+  end
+  
+  def benchmarks
+    @benchmarks
+  end
+  
+  def benchmarkForName(name)
+    result = @benchmarks.select{|v| v.name == name}
+    raise unless result.length == 1
+    result[0]
+  end
+  
+  def hasBenchmark(benchmark)
+    array = @benchmarks.select{|v| v == benchmark}
+    raise unless array.length == 1 or array.length == 0
+    array.length == 1
+  end
+  
+  def subSuites
+    @subSuites
+  end
+  
+  def suites
+    [self] + @subSuites
+  end
+  
+  def suitesWithBenchmark(benchmark)
+    result = [self]
+    @subSuites.each {
+      | subSuite |
+      if subSuite.hasBenchmark(benchmark)
+        result &lt;&lt; subSuite
+      end
+    }
+    result
+  end
+  
+  def empty?
+    @benchmarks.empty?
+  end
+  
+  def retain_if
+    @benchmarks.delete_if {
+      | benchmark |
+      not yield benchmark
+    }
+  end
+  
+  def preferredMean
+    @preferredMean
+  end
+  
+  def computeMean(stat)
+    if stat.ok?
+      (stat.send @preferredMean) * (10 ** decimalShift)
+    else
+      nil
+    end
+  end
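+  # Illustrative scaling (numbers hypothetical): with a decimalShift of 1, as the
+  # Octane suite below uses, a raw geometric mean of 123.4 is reported as 1234.0;
+  # with Kraken's decimalShift of -1 it would be reported as 12.34.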
+end
+
+class BenchRunPlan
+  def initialize(benchmark, vm, iteration)
+    @benchmark = benchmark
+    @vm = vm
+    @iteration = iteration
+    @environment = {}
+    if $environment.has_key?(vm.name)
+      if $environment[vm.name].has_key?(benchmark.benchmarkSuite.name)
+        if $environment[vm.name][benchmark.benchmarkSuite.name].has_key?(benchmark.name)
+          @environment = $environment[vm.name][benchmark.benchmarkSuite.name][benchmark.name]
+        end
+      end
+    end
+  end
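+  # A minimal sketch of the --environment JSON shape this lookup assumes, keyed by
+  # VM name, then suite name, then benchmark name (the VM name and variable below
+  # are hypothetical):
+  #   { &quot;MyVM&quot;: { &quot;SunSpider&quot;: { &quot;3d-cube&quot;: { &quot;JSC_useJIT&quot;: &quot;false&quot; } } } }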
+  
+  def benchmark
+    @benchmark
+  end
+  
+  def suite
+    @benchmark.benchmarkSuite
+  end
+  
+  def vm
+    @vm
+  end
+  
+  def iteration
+    @iteration
+  end

+  def environment
+    @environment
+  end

+  def prefix
+    &quot;#{@benchmark.fullname}: #{vm.name}: #{iteration}&quot;
+  end
+  
+  def emitRunCode
+    @benchmark.emitRunCode(self)
+  end
+  
+  def to_s
+    benchmark.to_s + &quot;/&quot; + vm.to_s
+  end
+end
+
+class BenchmarkOnVM
+  def initialize(benchmark, suiteOnVM, subSuitesOnVM)
+    @benchmark = benchmark
+    @suiteOnVM = suiteOnVM
+    @subSuitesOnVM = subSuitesOnVM
+    @stats = Stats.new
+  end
+  
+  def to_s
+    &quot;#{@benchmark} on #{@suiteOnVM.vm}&quot;
+  end
+  
+  def benchmark
+    @benchmark
+  end
+  
+  def vm
+    @suiteOnVM.vm
+  end
+  
+  def vmStats
+    @suiteOnVM.vmStats
+  end
+  
+  def suite
+    @benchmark.benchmarkSuite
+  end
+  
+  def suiteOnVM
+    @suiteOnVM
+  end
+  
+  def subSuitesOnVM
+    @subSuitesOnVM
+  end
+  
+  def stats
+    @stats
+  end
+  
+  def parseResult(result)
+    raise &quot;VM mismatch; I've got #{vm} and they've got #{result.vm}&quot; unless result.vm == vm
+    raise unless result.benchmark == @benchmark
+    @stats.add(result.time)
+  end
+end
+
+class NamedStatsAccumulator &lt; StatsAccumulator
+  def initialize(name)
+    super()
+    @name = name
+  end
+  
+  def reportingName
+    @name
+  end
+end
+
+class SuiteOnVM &lt; StatsAccumulator
+  def initialize(vm, vmStats, suite)
+    super()
+    @vm = vm
+    @vmStats = vmStats
+    @suite = suite
+    
+    raise unless @vm.is_a? VM
+    raise unless @vmStats.is_a? StatsAccumulator
+    raise unless @suite.is_a? BenchmarkSuite
+  end
+  
+  def to_s
+    &quot;#{@suite} on #{@vm}&quot;
+  end
+  
+  def suite
+    @suite
+  end
+  
+  def vm
+    @vm
+  end
+
+  def reportingName
+    @vm.name
+  end
+  
+  def vmStats
+    raise unless @vmStats
+    @vmStats
+  end
+end
+
+class SubSuiteOnVM &lt; StatsAccumulator
+  def initialize(vm, suite)
+    super()
+    @vm = vm
+    @suite = suite
+    raise unless @vm.is_a? VM
+    raise unless @suite.is_a? BenchmarkSuite
+  end
+  
+  def to_s
+    &quot;#{@suite} on #{@vm}&quot;
+  end
+  
+  def suite
+    @suite
+  end
+  
+  def vm
+    @vm
+  end
+  
+  def reportingName
+    @vm.name
+  end
+end
+
+class BenchPlan
+  def initialize(benchmarkOnVM, iteration)
+    @benchmarkOnVM = benchmarkOnVM
+    @iteration = iteration
+  end
+  
+  def to_s
+    &quot;#{@benchmarkOnVM} \##{@iteration+1}&quot;
+  end
+  
+  def benchmarkOnVM
+    @benchmarkOnVM
+  end
+  
+  def benchmark
+    @benchmarkOnVM.benchmark
+  end
+  
+  def suite
+    @benchmarkOnVM.suite
+  end
+  
+  def vm
+    @benchmarkOnVM.vm
+  end
+  
+  def iteration
+    @iteration
+  end
+  
+  def parseResult(result)
+    raise unless result.plan == self
+    @benchmarkOnVM.parseResult(result)
+    benchmark.weight.times {
+      @benchmarkOnVM.vmStats.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      @benchmarkOnVM.suiteOnVM.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      @benchmarkOnVM.subSuitesOnVM.each {
+        | subSuite |
+        subSuite.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      }
+    }
+  end
+end
+
+def lpad(str,chars)
+  if str.length&gt;chars
+    str
+  else
+    &quot;%#{chars}s&quot;%(str)
+  end
+end
+
+def rpad(str,chars)
+  while str.length &lt; chars
+    str+=&quot; &quot;
+  end
+  str
+end
+
+def center(str,chars)
+  while str.length&lt;chars
+    str+=&quot; &quot;
+    if str.length&lt;chars
+      str=&quot; &quot;+str
+    end
+  end
+  str
+end
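+# For illustration (assumed inputs): lpad('42', 5) yields '   42',
+# rpad('42', 5) yields '42   ', and center('42', 5) yields ' 42  '.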
+
+def statsToStr(stats, decimalShift)
+  if stats.error?
+    lpad(center(&quot;ERROR&quot;, 10+10+2), 12+10+2)
+  elsif $inner*$outer == 1
+    string = numToStr(stats.mean, decimalShift)
+    raise unless string =~ /\./
+    left = $~.pre_match
+    right = $~.post_match
+    lpad(left, 13 - decimalShift) + &quot;.&quot; + rpad(right, 10 + decimalShift)
+  else
+    lpad(numToStr(stats.mean, decimalShift), 12) + &quot;+-&quot; + rpad(numToStr(stats.confInt, decimalShift), 10)
+  end
+end
+
+def plural(num)
+  if num == 1
+    &quot;&quot;
+  else
+    &quot;s&quot;
+  end
+end
+
+def wrap(str, columns)
+  array = str.split
+  result = &quot;&quot;
+  curLine = array.shift
+  array.each {
+    | curStr |
+    if (curLine + &quot; &quot; + curStr).size &gt; columns
+      result += curLine + &quot;\n&quot;
+      curLine = curStr
+    else
+      curLine += &quot; &quot; + curStr
+    end
+  }
+  result + curLine + &quot;\n&quot;
+end
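+# For example, wrap('a b c', 3) produces the two lines 'a b' and 'c', each ending
+# in a newline: words are packed greedily into lines no wider than the column
+# limit, and a single word longer than the limit still gets its own line.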
+  
+def runAndGetResults
+  results = nil
+  Dir.chdir(BENCH_DATA_PATH) {
+    $stderr.puts &quot;&gt;&gt; sh ./runscript&quot; if $verbosity &gt;= 2
+    raise &quot;Script did not complete correctly: #{$?}&quot; unless system(&quot;sh ./runscript &gt; runlog&quot;)
+    results = IO::read(&quot;runlog&quot;)
+  }
+  raise unless results
+  results
+end
+
+def parseAndDisplayResults(results)
+  vmStatses = []
+  $vms.each {
+    | vm |
+    vmStatses &lt;&lt; NamedStatsAccumulator.new(vm.name)
+  }
+  
+  suitesOnVMs = []
+  suitesOnVMsForSuite = {}
+  subSuitesOnVMsForSubSuite = {}
+  $suites.each {
+    | suite |
+    suitesOnVMsForSuite[suite] = []
+    suite.subSuites.each {
+      | subSuite |
+      subSuitesOnVMsForSubSuite[subSuite] = []
+    }
+  }
+  suitesOnVMsForVM = {}
+  $vms.each {
+    | vm |
+    suitesOnVMsForVM[vm] = []
+  }
+  
+  benchmarksOnVMs = []
+  benchmarksOnVMsForBenchmark = {}
+  $benchmarks.each {
+    | benchmark |
+    benchmarksOnVMsForBenchmark[benchmark] = []
+  }
+  
+  $vms.each_with_index {
+    | vm, vmIndex |
+    vmStats = vmStatses[vmIndex]
+    $suites.each {
+      | suite |
+      suiteOnVM = SuiteOnVM.new(vm, vmStats, suite)
+      subSuitesOnVM = suite.subSuites.map {
+        | subSuite |
+        result = SubSuiteOnVM.new(vm, subSuite)
+        subSuitesOnVMsForSubSuite[subSuite] &lt;&lt; result
+        result
+      }
+      suitesOnVMs &lt;&lt; suiteOnVM
+      suitesOnVMsForSuite[suite] &lt;&lt; suiteOnVM
+      suitesOnVMsForVM[vm] &lt;&lt; suiteOnVM
+      suite.benchmarks.each {
+        | benchmark |
+        subSuitesOnVMForThisBenchmark = []
+        subSuitesOnVM.each {
+          | subSuiteOnVM |
+          if subSuiteOnVM.suite.hasBenchmark(benchmark)
+            subSuitesOnVMForThisBenchmark &lt;&lt; subSuiteOnVM
+          end
+        }
+        benchmarkOnVM = BenchmarkOnVM.new(benchmark, suiteOnVM, subSuitesOnVMForThisBenchmark)
+        benchmarksOnVMs &lt;&lt; benchmarkOnVM
+        benchmarksOnVMsForBenchmark[benchmark] &lt;&lt; benchmarkOnVM
+      }
+    }
+  }
+  
+  plans = []
+  benchmarksOnVMs.each {
+    | benchmarkOnVM |
+    $outer.times {
+      | iteration |
+      plans &lt;&lt; BenchPlan.new(benchmarkOnVM, iteration)
+    }
+  }
+
+  hostname = nil
+  hwmodel = nil
+  results.each_line {
+    | line |
+    line.chomp!
+    if line =~ /HOSTNAME:([^.]+)/
+      hostname = $1
+    elsif line =~ /HARDWARE:hw\.model: /
+      hwmodel = $~.post_match.chomp
+    else
+      result = ParsedResult.parse(plans, line)
+      if result
+        result.plan.parseResult(result)
+      end
+    end
+  }
+  
+  # Compute the geomean of the preferred means of results on a SuiteOnVM
+  overallResults = []
+  $vms.each {
+    | vm |
+    result = Stats.new
+    $outer.times {
+      | outerIndex |
+      $inner.times {
+        | innerIndex |
+        curResult = Stats.new
+        suitesOnVMsForVM[vm].each {
+          | suiteOnVM |
+          # For a given iteration, suite, and VM, compute the suite's preferred mean
+          # over the data collected for all benchmarks in that suite. We'll have one
+          # sample per benchmark. For example, on V8 this will be the geomean of one
+          # sample each for crypto, deltablue, and so on through splay.
+          curResult.add(suiteOnVM.suite.computeMean(suiteOnVM.statsForIteration(outerIndex, innerIndex)))
+        }
+        
+        # curResult now holds 1 sample for each of the means computed in the above
+        # loop. Compute the geomean over this, and store it.
+        if curResult.ok?
+          result.add(curResult.geometricMean)
+        else
+          result.add(nil)
+        end
+      }
+    }
+
+    # $overallResults will have a Stats for each VM. That Stats object will hold
+    # $inner*$outer geomeans, allowing us to compute the arithmetic mean and
+    # confidence interval of the geomeans of preferred means. Convoluted, but
+    # useful and probably sound.
+    overallResults &lt;&lt; result
+  }
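+  # Hypothetical worked example of the loop above: if, for one (outer, inner)
+  # iteration, SunSpider's arithmetic mean were 250.0 and Kraken's preferred mean
+  # were 90.0, then curResult would hold those two samples and contribute their
+  # geomean, sqrt(250.0 * 90.0) = 150.0, to this VM's overall Stats.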
+  
+  if $verbosity &gt;= 2
+    benchmarksOnVMs.each {
+      | benchmarkOnVM |
+      $stderr.puts &quot;#{benchmarkOnVM}: #{benchmarkOnVM.stats}&quot;
+    }
+    
+    $vms.each_with_index {
+      | vm, vmIndex |
+      vmStats = vmStatses[vmIndex]
+      $stderr.puts &quot;#{vm} (arithmeticMean): #{vmStats.arithmeticMeanStats}&quot;
+      $stderr.puts &quot;#{vm} (geometricMean): #{vmStats.geometricMeanStats}&quot;
+    }
+  end
+
+  if $outputName
+    reportName = $outputName
+  else
+    reportName =
+      (if ($vms.collect {
+             | vm |
+             vm.nameKind
+           }.index :auto)
+         &quot;&quot;
+       else
+         text = $vms.collect {
+           | vm |
+           vm.to_s
+         }.join(&quot;_&quot;) + &quot;_&quot;
+         if text.size &gt;= 40
+           &quot;&quot;
+         else
+           text
+         end
+       end) +
+      ($suites.collect {
+         | suite |
+         suite.to_s
+       }.join(&quot;&quot;)) + &quot;_&quot; +
+      (if hostname
+         hostname + &quot;_&quot;
+       else
+         &quot;&quot;
+       end)+
+      (begin
+         time = Time.now
+         &quot;%04d%02d%02d_%02d%02d&quot; %
+           [ time.year, time.month, time.day,
+             time.hour, time.min ]
+       end)
+  end
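+  # For illustration (VM names, host, and time hypothetical): two named VMs
+  # 'TipOfTree' and 'Mine' run against SunSpider on host 'myhost' at 2014-09-30
+  # 14:11 would produce the name 'TipOfTree_Mine_SunSpider_myhost_20140930_1411',
+  # and hence 'TipOfTree_Mine_SunSpider_myhost_20140930_1411_report.txt' plus the
+  # matching '.json' file written below.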
+
+  unless $brief
+    puts &quot;Generating benchmark report at #{Dir.pwd}/#{reportName}_report.txt&quot;
+    puts &quot;And raw data at #{Dir.pwd}/#{reportName}.json&quot;
+  end
+  
+  outp = $stdout
+  json = {}
+  begin
+    outp = File.open(reportName + &quot;_report.txt&quot;,&quot;w&quot;)
+  rescue =&gt; e
+    $stderr.puts &quot;Error: could not save report to #{reportName}_report.txt: #{e}&quot;
+    $stderr.puts
+  end
+  
+  def createVMsString
+    result = &quot;&quot;
+    result += &quot;   &quot; if $allSuites.size &gt; 1
+    result += rpad(&quot;&quot;, $benchpad + $weightpad)
+    result += &quot; &quot;
+    $vms.size.times {
+      | index |
+      if index != 0
+        result += &quot; &quot;+NoChange.new(0).shortForm
+      end
+      result += lpad(center($vms[index].name, 10+10+2), 12+10+2)
+    }
+    result += &quot;    &quot;
+    if $vms.size &gt;= 3
+      result += center(&quot;#{$vms[-1].name} v. #{$vms[0].name}&quot;,26)
+    elsif $vms.size &gt;= 2
+      result += &quot; &quot;*26
+    end
+    result
+  end
+  
+  def andJoin(list)
+    if list.size == 1
+      list[0].to_s
+    elsif list.size == 2
+      &quot;#{list[0]} and #{list[1]}&quot;
+    else
+      &quot;#{list[0..-2].join(', ')}, and #{list[-1]}&quot;
+    end
+  end
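+  # For example, andJoin(['SunSpider']) yields 'SunSpider',
+  # andJoin(['SunSpider', 'Kraken']) yields 'SunSpider and Kraken', and
+  # andJoin(['SunSpider', 'Kraken', 'Octane']) yields 'SunSpider, Kraken, and Octane'.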
+  
+  json[&quot;vms&quot;] = $vms.collect{|v| v.name}
+  json[&quot;suites&quot;] = {}
+  json[&quot;runlog&quot;] = results
+  
+  columns = [createVMsString.size, 78].max
+  
+  outp.print &quot;Benchmark report for &quot;
+  outp.print andJoin($suites)
+  if hostname
+    outp.print &quot; on #{hostname}&quot;
+  end
+  if hwmodel
+    outp.print &quot; (#{hwmodel})&quot;
+  end
+  outp.puts &quot;.&quot;
+  outp.puts
+  
+  outp.puts &quot;VMs tested:&quot;
+  $vms.each {
+    | vm |
+    outp.print &quot;\&quot;#{vm.name}\&quot; at #{vm.origPath}&quot;
+    if vm.svnRevision
+      outp.print &quot; (r#{vm.svnRevision})&quot;
+    end
+    outp.puts
+    vm.extraEnv.each_pair {
+      | key, val |
+      outp.puts &quot;    export #{key}=#{val}&quot;
+    }
+  }
+  
+  outp.puts
+  
+  outp.puts wrap(&quot;Collected #{$outer*$inner} sample#{plural($outer*$inner)} per benchmark/VM, &quot;+
+                 &quot;with #{$outer} VM invocation#{plural($outer)} per benchmark.&quot;+
+                 (if $rerun &gt; 1 then (&quot; Ran #{$rerun} benchmark iterations, and measured the &quot;+
+                                      &quot;total time of those iterations, for each sample.&quot;)
+                  else &quot;&quot; end)+
+                 (if $measureGC == true then (&quot; No manual garbage collection invocations were &quot;+
+                                              &quot;emitted.&quot;)
+                  elsif $measureGC then (&quot; Emitted a call to gc() between sample measurements for &quot;+
+                                         &quot;all VMs except #{$measureGC}.&quot;)
+                  else (&quot; Emitted a call to gc() between sample measurements.&quot;) end)+
+                 (if $warmup == 0 then (&quot; Did not include any warm-up iterations; measurements &quot;+
+                                        &quot;began with the very first iteration.&quot;)
+                  else (&quot; Used #{$warmup*$rerun} benchmark iteration#{plural($warmup*$rerun)} per VM &quot;+
+                        &quot;invocation for warm-up.&quot;) end)+
+                 (case $timeMode
+                  when :preciseTime then (&quot; Used the jsc-specific preciseTime() function to get &quot;+
+                                          &quot;microsecond-level timing.&quot;)
+                  when :date then (&quot; Used the portable Date.now() method to get millisecond-&quot;+
+                                   &quot;level timing.&quot;)
+                  else raise end)+
+                 &quot; Reporting benchmark execution times with 95% confidence &quot;+
+                 &quot;intervals in milliseconds.&quot;,
+                 columns)
+  
+  outp.puts
+  
+  def printVMs(outp)
+    outp.puts createVMsString
+  end
+  
+  def summaryStats(outp, json, jsonKey, accumulators, name, decimalShift, &amp;proc)
+    resultingJson = {}
+    outp.print &quot;   &quot; if $allSuites.size &gt; 1
+    outp.print rpad(name, $benchpad + $weightpad)
+    outp.print &quot; &quot;
+    accumulators.size.times {
+      | index |
+      if index != 0
+        outp.print &quot; &quot;+accumulators[index].stats(&amp;proc).compareTo(accumulators[index-1].stats(&amp;proc)).shortForm
+      end
+      outp.print statsToStr(accumulators[index].stats(&amp;proc), decimalShift)
+      resultingJson[accumulators[index].reportingName] = accumulators[index].stats(&amp;proc).jsonMap
+    }
+    if accumulators.size&gt;=2
+      outp.print(&quot;    &quot;+accumulators[-1].stats(&amp;proc).compareTo(accumulators[0].stats(&amp;proc)).longForm)
+    end
+    outp.puts
+    json[jsonKey] = resultingJson
+  end
+  
+  def meanName(currentMean, preferredMean)
+    result = &quot;&lt;#{currentMean}&gt;&quot;
+    if &quot;#{currentMean}Mean&quot; == preferredMean.to_s
+      result += &quot; *&quot;
+    end
+    result
+  end
+  
+  def allSummaryStats(outp, json, accumulators, preferredMean, decimalShift)
+    summaryStats(outp, json, &quot;&lt;arithmetic&gt;&quot;, accumulators, meanName(&quot;arithmetic&quot;, preferredMean), decimalShift) {
+      | stat |
+      stat.arithmeticMean
+    }
+    
+    summaryStats(outp, json, &quot;&lt;geometric&gt;&quot;, accumulators, meanName(&quot;geometric&quot;, preferredMean), decimalShift) {
+      | stat |
+      stat.geometricMean
+    }
+    
+    summaryStats(outp, json, &quot;&lt;harmonic&gt;&quot;, accumulators, meanName(&quot;harmonic&quot;, preferredMean), decimalShift) {
+      | stat |
+      stat.harmonicMean
+    }
+  end
+  
+  $suites.each {
+    | suite |
+    suiteJson = {}
+    subSuiteJsons = {}
+    suite.subSuites.each {
+      | subSuite |
+      subSuiteJsons[subSuite] = {}
+    }
+    
+    printVMs(outp)
+    if $allSuites.size &gt; 1
+      outp.puts(andJoin(suite.suites.map{|v| v.name}) + &quot;:&quot;)
+    else
+      outp.puts
+    end
+    suite.benchmarks.each {
+      | benchmark |
+      benchmarkJson = {}
+      outp.print &quot;   &quot; if $allSuites.size &gt; 1
+      outp.print rpad(benchmark.name, $benchpad) + rpad(benchmark.weightString, $weightpad)
+      if benchmark.name.size &gt; $benchNameClip
+        outp.puts
+        outp.print &quot;   &quot; if $allSuites.size &gt; 1
+        outp.print((&quot; &quot; * $benchpad) + (&quot; &quot; * $weightpad))
+      end
+      outp.print &quot; &quot;
+      myConfigs = benchmarksOnVMsForBenchmark[benchmark]
+      myConfigs.size.times {
+        | index |
+        if index != 0
+          outp.print &quot; &quot;+myConfigs[index].stats.compareTo(myConfigs[index-1].stats).shortForm
+        end
+        outp.print statsToStr(myConfigs[index].stats, suite.decimalShift)
+        benchmarkJson[myConfigs[index].vm.name] = myConfigs[index].stats.jsonMap
+      }
+      if $vms.size&gt;=2
+        outp.print(&quot;    &quot;+myConfigs[-1].stats.compareTo(myConfigs[0].stats).to_s)
+      end
+      outp.puts
+      suiteJson[benchmark.name] = benchmarkJson
+      suite.subSuites.each {
+        | subSuite |
+        if subSuite.hasBenchmark(benchmark)
+          subSuiteJsons[subSuite][benchmark.name] = benchmarkJson
+        end
+      }
+    }
+    outp.puts
+    unless suite.subSuites.empty?
+      suite.subSuites.each {
+        | subSuite |
+        outp.puts &quot;#{subSuite.name}:&quot;
+        allSummaryStats(outp, subSuiteJsons[subSuite], subSuitesOnVMsForSubSuite[subSuite], subSuite.preferredMean, subSuite.decimalShift)
+        outp.puts
+      }
+      outp.puts &quot;#{suite.name} including #{andJoin(suite.subSuites.map{|v| v.name})}:&quot;
+    end
+    allSummaryStats(outp, suiteJson, suitesOnVMsForSuite[suite], suite.preferredMean, suite.decimalShift)
+    outp.puts if $allSuites.size &gt; 1
+    
+    json[&quot;suites&quot;][suite.name] = suiteJson
+    suite.subSuites.each {
+      | subSuite |
+      json[&quot;suites&quot;][subSuite.name] = subSuiteJsons[subSuite]
+    }
+  }
+  
+  if $suites.size &gt; 1
+    printVMs(outp)
+    outp.puts &quot;All benchmarks:&quot;
+    allSummaryStats(outp, json, vmStatses, nil, 0)
+    
+    scaledResultJson = {}
+    
+    outp.puts
+    printVMs(outp)
+    outp.puts &quot;Geomean of preferred means:&quot;
+    outp.print &quot;   &quot;
+    outp.print rpad(&quot;&lt;scaled-result&gt;&quot;, $benchpad + $weightpad)
+    outp.print &quot; &quot;
+    $vms.size.times {
+      | index |
+      if index != 0
+        outp.print &quot; &quot;+overallResults[index].compareTo(overallResults[index-1]).shortForm
+      end
+      outp.print statsToStr(overallResults[index], 0)
+      scaledResultJson[$vms[index].name] = overallResults[index].jsonMap
+    }
+    if overallResults.size&gt;=2
+      outp.print(&quot;    &quot;+overallResults[-1].compareTo(overallResults[0]).longForm)
+    end
+    outp.puts
+    
+    json[&quot;&lt;scaled-result&gt;&quot;] = scaledResultJson
+  end
+  outp.puts
+  
+  if outp != $stdout
+    outp.close
+  end
+  
+  if outp != $stdout and not $brief
+    puts
+    File.open(reportName + &quot;_report.txt&quot;) {
+      | inp |
+      puts inp.read
+    }
+  end
+  
+  if $brief
+    puts(overallResults.collect{|stats| stats.mean}.join(&quot;\t&quot;))
+    puts(overallResults.collect{|stats| stats.confInt}.join(&quot;\t&quot;))
+  end
+  
+  File.open(reportName + &quot;.json&quot;, &quot;w&quot;) {
+    | outp |
+    outp.puts json.to_json
+  }
+end
+
+begin
+  $sawBenchOptions = false
+  
+  def resetBenchOptionsIfNecessary
+    unless $sawBenchOptions
+      $includeSunSpider = false
+      $includeLongSpider = false
+      $includeV8 = false
+      $includeKraken = false
+      $includeJSBench = false
+      $includeJSRegress = false
+      $includeAsmBench = false
+      $includeDSPJS = false
+      $includeBrowsermarkJS = false
+      $includeBrowsermarkDOM = false
+      $includeOctane = false
+      $includeCompressionBench = false
+      $sawBenchOptions = true
+    end
+  end
+  
+  GetoptLong.new(['--rerun', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--inner', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--outer', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--warmup', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--no-ss-warmup', GetoptLong::NO_ARGUMENT],
+                 ['--quantum', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--minimum', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--timing-mode', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--sunspider', GetoptLong::NO_ARGUMENT],
+                 ['--longspider', GetoptLong::NO_ARGUMENT],
+                 ['--v8-spider', GetoptLong::NO_ARGUMENT],
+                 ['--kraken', GetoptLong::NO_ARGUMENT],
+                 ['--js-bench', GetoptLong::NO_ARGUMENT],
+                 ['--js-regress', GetoptLong::NO_ARGUMENT],
+                 ['--asm-bench', GetoptLong::NO_ARGUMENT],
+                 ['--dsp', GetoptLong::NO_ARGUMENT],
+                 ['--browsermark-js', GetoptLong::NO_ARGUMENT],
+                 ['--browsermark-dom', GetoptLong::NO_ARGUMENT],
+                 ['--octane', GetoptLong::NO_ARGUMENT],
+                 ['--compression-bench', GetoptLong::NO_ARGUMENT],
+                 ['--benchmarks', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--measure-gc', GetoptLong::OPTIONAL_ARGUMENT],
+                 ['--force-vm-kind', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--force-vm-copy', GetoptLong::NO_ARGUMENT],
+                 ['--dont-copy-vms', GetoptLong::NO_ARGUMENT],
+                 ['--verbose', '-v', GetoptLong::NO_ARGUMENT],
+                 ['--brief', GetoptLong::NO_ARGUMENT],
+                 ['--silent', GetoptLong::NO_ARGUMENT],
+                 ['--remote', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--local', GetoptLong::NO_ARGUMENT],
+                 ['--ssh-options', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--slave', GetoptLong::NO_ARGUMENT],
+                 ['--prepare-only', GetoptLong::NO_ARGUMENT],
+                 ['--analyze', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--vms', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--output-name', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--environment', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--config', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--help', '-h', GetoptLong::NO_ARGUMENT]).each {
+    | opt, arg |
+    case opt
+    when '--rerun'
+      $rerun = intArg(opt,arg,1,nil)
+    when '--inner'
+      $inner = intArg(opt,arg,1,nil)
+    when '--outer'
+      $outer = intArg(opt,arg,1,nil)
+    when '--warmup'
+      $warmup = intArg(opt,arg,0,nil)
+    when '--no-ss-warmup'
+      $sunSpiderWarmup = false
+    when '--quantum'
+      $quantum = intArg(opt,arg,1,nil)
+    when '--minimum'
+      $minimum = intArg(opt,arg,1,nil)
+    when '--timing-mode'
+      if arg.upcase == &quot;PRECISETIME&quot;
+        $timeMode = :preciseTime
+      elsif arg.upcase == &quot;DATE&quot;
+        $timeMode = :date
+      elsif arg.upcase == &quot;AUTO&quot;
+        $timeMode = :auto
+      else
+        quickFail(&quot;Expected either 'preciseTime', 'date', or 'auto' for --timing-mode, but got '#{arg}'.&quot;,
+                  &quot;Invalid argument for command-line option&quot;)
+      end
+    when '--force-vm-kind'
+      if arg.upcase == &quot;JSC&quot;
+        $forceVMKind = :jsc
+      elsif arg.upcase == &quot;DUMPRENDERTREE&quot;
+        $forceVMKind = :dumpRenderTree
+      elsif arg.upcase == &quot;WEBKITTESTRUNNER&quot;
+        $forceVMKind = :webkitTestRunner
+      elsif arg.upcase == &quot;AUTO&quot;
+        $forceVMKind = nil
+      else
+        quickFail(&quot;Expected 'jsc', 'DumpRenderTree', 'WebKitTestRunner', or 'auto' for --force-vm-kind, but got '#{arg}'.&quot;,
+                  &quot;Invalid argument for command-line option&quot;)
+      end
+    when '--force-vm-copy'
+      $needToCopyVMs = true
+    when '--dont-copy-vms'
+      $dontCopyVMs = true
+    when '--sunspider'
+      resetBenchOptionsIfNecessary
+      $includeSunSpider = true
+    when '--longspider'
+      resetBenchOptionsIfNecessary
+      $includeLongSpider = true
+    when '--v8-spider'
+      resetBenchOptionsIfNecessary
+      $includeV8 = true
+    when '--kraken'
+      resetBenchOptionsIfNecessary
+      $includeKraken = true
+    when '--js-bench'
+      resetBenchOptionsIfNecessary
+      $includeJSBench = true
+    when '--js-regress'
+      resetBenchOptionsIfNecessary
+      $includeJSRegress = true
+    when '--asm-bench'
+      resetBenchOptionsIfNecessary
+      $includeAsmBench = true
+    when '--dsp'
+      resetBenchOptionsIfNecessary
+      $includeDSPJS = true
+    when '--browsermark-js'
+      resetBenchOptionsIfNecessary
+      $includeBrowsermarkJS = true
+    when '--browsermark-dom'
+      resetBenchOptionsIfNecessary
+      $includeBrowsermarkDOM = true
+    when '--octane'
+      resetBenchOptionsIfNecessary
+      $includeOctane = true
+    when '--compression-bench'
+      resetBenchOptionsIfNecessary
+      $includeCompressionBench = true
+    when '--benchmarks'
+      $benchmarkPattern = Regexp.new(arg)
+    when '--measure-gc'
+      if arg == ''
+        $measureGC = true
+      else
+        $measureGC = arg
+      end
+    when '--verbose'
+      $verbosity += 1
+    when '--brief'
+      $brief = true
+    when '--silent'
+      $silent = true
+    when '--remote'
+      $remoteHosts += arg.split(',')
+      $needToCopyVMs = true
+    when '--ssh-options'
+      $sshOptions &lt;&lt; arg
+    when '--local'
+      $alsoLocal = true
+    when '--prepare-only'
+      $run = false
+    when '--analyze'
+      $prepare = false
+      $run = false
+      $analyze &lt;&lt; arg
+    when '--output-name'
+      $outputName = arg
+    when '--vms'
+      JSON::parse(IO::read(arg)).each {
+        | vmDescription |
+        path = Pathname.new(vmDescription[&quot;path&quot;]).realpath
+        if vmDescription[&quot;name&quot;]
+          name = vmDescription[&quot;name&quot;]
+          nameKind = :given
+        else
+          name = &quot;Conf\##{$vms.length+1}&quot;
+          nameKind = :auto
+        end
+        vm = VM.new(path, name, nameKind, nil)
+        if vmDescription[&quot;env&quot;]
+          vmDescription[&quot;env&quot;].each_pair {
+            | key, val |
+            vm.addExtraEnv(key, val)
+          }
+        end
+        $vms &lt;&lt; vm
+      }
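+      # For illustration, the --vms file is parsed as an array of objects with
+      # &quot;path&quot;, optional &quot;name&quot;, and optional &quot;env&quot;; a hypothetical example:
+      #   [ { &quot;path&quot;: &quot;/path/to/jsc&quot;, &quot;name&quot;: &quot;MyVM&quot;,
+      #       &quot;env&quot;: { &quot;DYLD_FRAMEWORK_PATH&quot;: &quot;/path/to/build&quot; } } ]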
+    when '--environment'
+      $environment = JSON::parse(IO::read(arg))
+    when '--config'
+      $configPath = Pathname.new(arg)
+    when '--help'
+      usage
+    else
+      raise &quot;bad option: #{opt}&quot;
+    end
+  }
+  
+  # Figure out the configuration
+  if $configPath.file?
+    config = JSON::parse(IO::read($configPath.to_s))
+  else
+    config = {}
+  end
+  OCTANE_PATH = config[&quot;OctanePath&quot;]
+  BROWSERMARK_PATH = config[&quot;BrowserMarkPath&quot;]
+  BROWSERMARK_JS_PATH = config[&quot;BrowserMarkJSPath&quot;]
+  BROWSERMARK_DOM_PATH = config[&quot;BrowserMarkDOMPath&quot;]
+  ASMBENCH_PATH = config[&quot;AsmBenchPath&quot;]
+  COMPRESSIONBENCH_PATH = config[&quot;CompressionBenchPath&quot;]
+  DSPJS_FILTRR_PATH = config[&quot;DSPJSFiltrrPath&quot;]
+  DSPJS_ROUTE9_PATH = config[&quot;DSPJSRoute9Path&quot;]
+  DSPJS_STARFIELD_PATH = config[&quot;DSPJSStarfieldPath&quot;]
+  DSPJS_QUAKE3_PATH = config[&quot;DSPJSQuake3Path&quot;]
+  DSPJS_MANDELBROT_PATH = config[&quot;DSPJSMandelbrotPath&quot;]
+  DSPJS_JSLINUX_PATH = config[&quot;DSPJSLinuxPath&quot;]
+  DSPJS_AMMOJS_ASMJS_PATH = config[&quot;DSPJSAmmoJSAsmJSPath&quot;]
+  DSPJS_AMMOJS_REGULAR_PATH = config[&quot;DSPJSAmmoJSRegularPath&quot;]
+  JSBENCH_PATH = config[&quot;JSBenchPath&quot;]
+  KRAKEN_PATH = config[&quot;KrakenPath&quot;]
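+  # For illustration, the JSON config named by --config maps the keys above to
+  # benchmark checkout locations; a hypothetical minimal example:
+  #   { &quot;OctanePath&quot;: &quot;/path/to/octane&quot;, &quot;KrakenPath&quot;: &quot;/path/to/kraken&quot; }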
+  
+  # If the --dont-copy-vms option was passed, it overrides the --force-vm-copy option.
+  if $dontCopyVMs
+    $needToCopyVMs = false
+  end
+  
+  ARGV.each {
+    | vm |
+    if vm =~ /([a-zA-Z0-9_ ]+):/
+      name = $1
+      nameKind = :given
+      vm = $~.post_match
+    else
+      name = &quot;Conf\##{$vms.length+1}&quot;
+      nameKind = :auto
+    end
+    envs = []
+    while vm =~ /([a-zA-Z0-9_]+)=([a-zA-Z0-9_:]+):/
+      envs &lt;&lt; [$1, $2]
+      vm = $~.post_match
+    end
+    $stderr.puts &quot;#{name}: #{vm}&quot; if $verbosity &gt;= 1
+    vm = VM.new(Pathname.new(vm).realpath, name, nameKind, nil)
+    envs.each {
+      | pair |
+      vm.addExtraEnv(pair[0], pair[1])
+    }
+    $vms &lt;&lt; vm
+  }
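+  # For illustration (names and paths hypothetical), each remaining command-line
+  # argument is parsed into one of these forms:
+  #   /path/to/jsc                          auto-named Conf#1, Conf#2, ...
+  #   MyVM:/path/to/jsc                     explicitly named
+  #   MyVM:JSC_useJIT=false:/path/to/jsc    named, with one extra environment variable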
+  
+  if $vms.empty?
+    quickFail(&quot;Please specify at least one configuration on the command line.&quot;,
+              &quot;Insufficient arguments&quot;)
+  end
+  
+  $vms.each {
+    | vm |
+    if vm.vmType == :jsc
+      $allDRT = false
+    end
+  }
+  
+  SUNSPIDER = BenchmarkSuite.new(&quot;SunSpider&quot;, :arithmeticMean, 0)
+  WARMUP = BenchmarkSuite.new(&quot;WARMUP&quot;, :arithmeticMean, 0)
+  [&quot;3d-cube&quot;, &quot;3d-morph&quot;, &quot;3d-raytrace&quot;, &quot;access-binary-trees&quot;,
+   &quot;access-fannkuch&quot;, &quot;access-nbody&quot;, &quot;access-nsieve&quot;,
+   &quot;bitops-3bit-bits-in-byte&quot;, &quot;bitops-bits-in-byte&quot;, &quot;bitops-bitwise-and&quot;,
+   &quot;bitops-nsieve-bits&quot;, &quot;controlflow-recursive&quot;, &quot;crypto-aes&quot;,
+   &quot;crypto-md5&quot;, &quot;crypto-sha1&quot;, &quot;date-format-tofte&quot;, &quot;date-format-xparb&quot;,
+   &quot;math-cordic&quot;, &quot;math-partial-sums&quot;, &quot;math-spectral-norm&quot;, &quot;regexp-dna&quot;,
+   &quot;string-base64&quot;, &quot;string-fasta&quot;, &quot;string-tagcloud&quot;,
+   &quot;string-unpack-code&quot;, &quot;string-validate-input&quot;].each {
+    | name |
+    SUNSPIDER.add SunSpiderBenchmark.new(name)
+    WARMUP.addIgnoringPattern SunSpiderBenchmark.new(name)
+  }
+
+  LONGSPIDER = BenchmarkSuite.new(&quot;LongSpider&quot;, :geometricMean, 0)
+  [&quot;3d-cube&quot;, &quot;3d-morph&quot;, &quot;3d-raytrace&quot;, &quot;access-binary-trees&quot;,
+   &quot;access-fannkuch&quot;, &quot;access-nbody&quot;, &quot;access-nsieve&quot;,
+   &quot;bitops-3bit-bits-in-byte&quot;, &quot;bitops-bits-in-byte&quot;, &quot;bitops-nsieve-bits&quot;,
+   &quot;controlflow-recursive&quot;, &quot;crypto-aes&quot;, &quot;crypto-md5&quot;, &quot;crypto-sha1&quot;,
+   &quot;date-format-tofte&quot;, &quot;date-format-xparb&quot;, &quot;math-cordic&quot;,
+   &quot;math-partial-sums&quot;, &quot;math-spectral-norm&quot;, &quot;string-base64&quot;,
+   &quot;string-fasta&quot;, &quot;string-tagcloud&quot;].each {
+    | name |
+    LONGSPIDER.add LongSpiderBenchmark.new(name)
+  }
+
+  V8 = BenchmarkSuite.new(&quot;V8Spider&quot;, :geometricMean, 0)
+  [&quot;crypto&quot;, &quot;deltablue&quot;, &quot;earley-boyer&quot;, &quot;raytrace&quot;,
+   &quot;regexp&quot;, &quot;richards&quot;, &quot;splay&quot;].each {
+    | name |
+    V8.add V8Benchmark.new(name)
+  }
+
+  OCTANE = BenchmarkSuite.new(&quot;Octane&quot;, :geometricMean, 1)
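+  # Each entry below is assumed to be [sourceFiles, name, weight, doWarmup,
+  # deterministic, minimumIterations] as taken by OctaneBenchmark; e.g. gbemu
+  # loads two source files and runs with weight 2 and no warm-up.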
+  [[[&quot;crypto&quot;], &quot;encrypt&quot;, 1, true, false, 32],
+   [[&quot;crypto&quot;], &quot;decrypt&quot;, 1, true, false, 32],
+   [[&quot;deltablue&quot;], &quot;deltablue&quot;, 2, true, false, 32],
+   [[&quot;earley-boyer&quot;], &quot;earley&quot;, 1, true, false, 32],
+   [[&quot;earley-boyer&quot;], &quot;boyer&quot;, 1, true, false, 32],
+   [[&quot;navier-stokes&quot;], &quot;navier-stokes&quot;, 2, true, false, 16],
+   [[&quot;raytrace&quot;], &quot;raytrace&quot;, 2, true, false, 32],
+   [[&quot;richards&quot;], &quot;richards&quot;, 2, true, false, 32],
+   [[&quot;splay&quot;], &quot;splay&quot;, 2, true, false, 32],
+   [[&quot;regexp&quot;], &quot;regexp&quot;, 2, false, false, 16],
+   [[&quot;pdfjs&quot;], &quot;pdfjs&quot;, 2, false, false, 4],
+   [[&quot;mandreel&quot;], &quot;mandreel&quot;, 2, false, false, 4],
+   [[&quot;gbemu-part1&quot;, &quot;gbemu-part2&quot;], &quot;gbemu&quot;, 2, false, false, 4],
+   [[&quot;code-load&quot;], &quot;closure&quot;, 1, false, false, 16],
+   [[&quot;code-load&quot;], &quot;jquery&quot;, 1, false, false, 16],
+   [[&quot;box2d&quot;], &quot;box2d&quot;, 2, false, false, 8],
+   [[&quot;zlib&quot;, &quot;zlib-data&quot;], &quot;zlib&quot;, 2, false, true, 3],
+   [[&quot;typescript&quot;, &quot;typescript-input&quot;, &quot;typescript-compiler&quot;], &quot;typescript&quot;, 2, false, true, 1]].each {
+    | args |
+    OCTANE.add OctaneBenchmark.new(*args)
+  }
+
+  KRAKEN = BenchmarkSuite.new(&quot;Kraken&quot;, :arithmeticMean, -1)
+  [&quot;ai-astar&quot;, &quot;audio-beat-detection&quot;, &quot;audio-dft&quot;, &quot;audio-fft&quot;,
+   &quot;audio-oscillator&quot;, &quot;imaging-darkroom&quot;, &quot;imaging-desaturate&quot;,
+   &quot;imaging-gaussian-blur&quot;, &quot;json-parse-financial&quot;,
+   &quot;json-stringify-tinderbox&quot;, &quot;stanford-crypto-aes&quot;,
+   &quot;stanford-crypto-ccm&quot;, &quot;stanford-crypto-pbkdf2&quot;,
+   &quot;stanford-crypto-sha256-iterative&quot;].each {
+    | name |
+    KRAKEN.add KrakenBenchmark.new(name)
+  }
+  
+  JSBENCH = BenchmarkSuite.new(&quot;JSBench&quot;, :arithmeticMean, 0)
+  [[&quot;amazon&quot;, &quot;urm&quot;], [&quot;facebook&quot;, &quot;urem&quot;], [&quot;google&quot;, &quot;urem&quot;], [&quot;twitter&quot;, &quot;urem&quot;],
+   [&quot;yahoo&quot;, &quot;urem&quot;]].each {
+    | nameAndMode |
+    JSBENCH.add JSBenchBenchmark.new(*nameAndMode)
+  }
+  
+  JSREGRESS = BenchmarkSuite.new(&quot;JSRegress&quot;, :geometricMean, 0)
+  Dir.foreach(JSREGRESS_PATH) {
+    | filename |
+    if filename =~ /\.js$/
+      name = $~.pre_match
+      JSREGRESS.add JSRegressBenchmark.new(name)
+    end
+  }
+  
+  ASMBENCH = BenchmarkSuite.new(&quot;AsmBench&quot;, :geometricMean, 0)
+  if ASMBENCH_PATH
+    Dir.foreach(ASMBENCH_PATH) {
+      | filename |
+      if filename =~ /\.js$/
+        name = $~.pre_match
+        ASMBENCH.add AsmBenchBenchmark.new(name)
+      end
+    }
+  end
+
+  COMPRESSIONBENCH = BenchmarkSuite.new(&quot;CompressionBench&quot;, :geometricMean, 0)
+  [[[&quot;huffman&quot;, &quot;compression-data&quot;], &quot;huffman&quot;, &quot;&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Simple&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Precise&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Complex Precise&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Precise Order 0&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Precise Order 1&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Precise Order 2&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Simple Order 1&quot;],
+   [[&quot;arithmetic&quot;, &quot;compression-data&quot;], &quot;arithmetic&quot;, &quot;Simple Order 2&quot;],
+   [[&quot;lz-string&quot;, &quot;compression-data&quot;], &quot;lz-string&quot;, &quot;&quot;]
+  ].each {
+    | args |
+    COMPRESSIONBENCH.add CompressionBenchBenchmark.new(*args)
+  }
+
+  DSPJS = BenchmarkSuite.new(&quot;DSP&quot;, :geometricMean, 0)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-posterize-tint&quot;, &quot;e2&quot;)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-tint-contrast-sat-bright&quot;, &quot;e5&quot;)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-tint-sat-adj-contr-mult&quot;, &quot;e7&quot;)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-blur-overlay-sat-contr&quot;, &quot;e8&quot;)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-sat-blur-mult-sharpen-contr&quot;, &quot;e9&quot;)
+  DSPJS.add DSPJSFiltrrBenchmark.new(&quot;filtrr-sepia-bias&quot;, &quot;e10&quot;)
+  DSPJS.add DSPJSVP8Benchmark.new
+  DSPJS.add DSPStarfieldBenchmark.new
+  DSPJS.add DSPJSJSLinuxBenchmark.new
+  DSPJS.add DSPJSQuake3Benchmark.new
+  DSPJS.add DSPJSMandelbrotBenchmark.new
+  DSPJS.add DSPJSAmmoJSASMBenchmark.new
+  DSPJS.add DSPJSAmmoJSRegularBenchmark.new
+
+  BROWSERMARK_JS = BenchmarkSuite.new(&quot;BrowsermarkJS&quot;, :geometricMean, 1)
+  [&quot;array_blur&quot;, &quot;array_weighted&quot;, &quot;string_chat&quot;, &quot;string_filter&quot;, &quot;string_weighted&quot;].each {
+    | name |
+      BROWSERMARK_JS.add BrowsermarkJSBenchmark.new(name)
+  }
+  
+  BROWSERMARK_DOM = BenchmarkSuite.new(&quot;BrowsermarkDOM&quot;, :geometricMean, 1)
+  [&quot;advanced_search&quot;, &quot;create_source&quot;, &quot;dynamic_create&quot;, &quot;search&quot;].each {
+    | name |
+      BROWSERMARK_DOM.add BrowsermarkDOMBenchmark.new(name)
+  }
+
+  $suites = []
+  
+  if $includeSunSpider and not SUNSPIDER.empty?
+    $suites &lt;&lt; SUNSPIDER
+  end
+  
+  if $includeLongSpider and not LONGSPIDER.empty?
+    $suites &lt;&lt; LONGSPIDER
+  end
+  
+  if $includeV8 and not V8.empty?
+    $suites &lt;&lt; V8
+  end
+  
+  if $includeOctane and not OCTANE.empty?
+    if OCTANE_PATH
+      $suites &lt;&lt; OCTANE
+    else
+      $stderr.puts &quot;Warning: refusing to run Octane because \&quot;OctanePath\&quot; isn't set in #{$configPath}.&quot;
+    end
+  end
+  
+  if $includeKraken and not KRAKEN.empty?
+    if KRAKEN_PATH
+      $suites &lt;&lt; KRAKEN
+    else
+      $stderr.puts &quot;Warning: refusing to run Kraken because \&quot;KrakenPath\&quot; isn't set in #{$configPath}.&quot;
+    end
+  end
+  
+  if $includeJSBench and not JSBENCH.empty?
+    if $allDRT
+      if JSBENCH_PATH
+        $suites &lt;&lt; JSBENCH
+      else
+        $stderr.puts &quot;Warning: refusing to run JSBench because \&quot;JSBenchPath\&quot; isn't set in #{$configPath}.&quot;
+      end
+    else
+      $stderr.puts &quot;Warning: refusing to run JSBench because not all VMs are DumpRenderTree or WebKitTestRunner.&quot;
+    end
+  end
+  
+  if $includeJSRegress and not JSREGRESS.empty?
+    $suites &lt;&lt; JSREGRESS
+  end
+  
+  if $includeAsmBench and not ASMBENCH.empty?
+    if ASMBENCH_PATH
+      $suites &lt;&lt; ASMBENCH
+    else
+      $stderr.puts &quot;Warning: refusing to run AsmBench because \&quot;AsmBenchPath\&quot; isn't set in #{$configPath}.&quot;
+    end
+  end
+  
+  if $includeDSPJS and not DSPJS.empty?
+    if $allDRT
+      if DSPJS_FILTRR_PATH and DSPJS_ROUTE9_PATH and DSPJS_STARFIELD_PATH and DSPJS_QUAKE3_PATH and DSPJS_MANDELBROT_PATH and DSPJS_JSLINUX_PATH and DSPJS_AMMOJS_ASMJS_PATH and DSPJS_AMMOJS_REGULAR_PATH
+        $suites &lt;&lt; DSPJS
+      else
+        $stderr.puts &quot;Warning: refusing to run DSPJS because one of the following isn't set in #{$configPath}: \&quot;DSPJSFiltrrPath\&quot;, \&quot;DSPJSRoute9Path\&quot;, \&quot;DSPJSStarfieldPath\&quot;, \&quot;DSPJSQuake3Path\&quot;, \&quot;DSPJSMandelbrotPath\&quot;, \&quot;DSPJSLinuxPath\&quot;, \&quot;DSPJSAmmoJSAsmJSPath\&quot;, \&quot;DSPJSAmmoJSRegularPath\&quot;.&quot;
+      end
+    else
+      $stderr.puts &quot;Warning: refusing to run DSPJS because not all VMs are DumpRenderTree or WebKitTestRunner.&quot;
+    end
+  end
+  
+  if $includeBrowsermarkJS and not BROWSERMARK_JS.empty?
+    if BROWSERMARK_PATH and BROWSERMARK_JS_PATH
+      $suites &lt;&lt; BROWSERMARK_JS
+    else
+      $stderr.puts &quot;Warning: refusing to run Browsermark-JS because one of the following isn't set in #{$configPath}: \&quot;BrowserMarkPath\&quot; or \&quot;BrowserMarkJSPath\&quot;.&quot;
+    end
+  end
+
+  if $includeBrowsermarkDOM and not BROWSERMARK_DOM.empty?
+    if $allDRT
+      if BROWSERMARK_PATH and BROWSERMARK_JS_PATH and BROWSERMARK_DOM_PATH
+        $suites &lt;&lt; BROWSERMARK_DOM
+      else
+        $stderr.puts &quot;Warning: refusing to run Browsermark-DOM because one of the following isn't set in #{$configPath}: \&quot;BrowserMarkPath\&quot;, \&quot;BrowserMarkJSPath\&quot;, or \&quot;BrowserMarkDOMPath\&quot;.&quot;
+      end
+    else
+      $stderr.puts &quot;Warning: refusing to run Browsermark-DOM because not all VMs are DumpRenderTree or WebKitTestRunner.&quot;
+    end
+  end
+
+  if $includeCompressionBench and not COMPRESSIONBENCH.empty?
+    if COMPRESSIONBENCH_PATH
+      $suites &lt;&lt; COMPRESSIONBENCH
+    else
+      $stderr.puts &quot;Warning: refusing to run CompressionBench because \&quot;CompressionBenchPath\&quot; isn't set in #{$configPath}.&quot;
+    end
+  end
+
+  $allSuites = $suites.map{|v| v.suites}.flatten(1)
+  
+  $benchmarks = []
+  $suites.each {
+    | suite |
+    $benchmarks += suite.benchmarks
+  }
+  
+  if $suites.empty? or $benchmarks.empty?
+    $stderr.puts &quot;No benchmarks found.  Bailing out.&quot;
+    exit 1
+  end
+  
+  if $outer*$inner == 1
+    $stderr.puts &quot;Warning: will only collect one sample per benchmark/VM.  Confidence interval calculation will fail.&quot;
+  end
+  
+  $stderr.puts &quot;Using timeMode = #{$timeMode}.&quot; if $verbosity &gt;= 1
+  
+  $runPlans = []
+  $vms.each {
+    | vm |
+    $benchmarks.each {
+      | benchmark |
+      $outer.times {
+        | iteration |
+        $runPlans &lt;&lt; BenchRunPlan.new(benchmark, vm, iteration)
+      }
+    }
+  }
+  
+  $runPlans.shuffle!
+  
+  if $sunSpiderWarmup
+    warmupPlans = []
+    $vms.each {
+      | vm |
+      WARMUP.benchmarks.each {
+        | benchmark |
+        warmupPlans &lt;&lt; BenchRunPlan.new(benchmark, vm, 0)
+      }
+    }
+    
+    $runPlans = warmupPlans.shuffle + $runPlans
+  end
+  
+  $suitepad = $suites.collect {
+    | suite |
+    suite.to_s.size
+  }.max + 1
+  
+  $planpad = $runPlans.collect {
+    | plan |
+    plan.to_s.size
+  }.max + 1
+  
+  maxBenchNameLength =
+    ($benchmarks + [&quot;&lt;arithmetic&gt; *&quot;, &quot;&lt;geometric&gt; *&quot;, &quot;&lt;harmonic&gt; *&quot;]).collect {
+    | benchmark |
+    if benchmark.respond_to? :name
+      benchmark.name.size
+    else
+      benchmark.size
+    end
+  }.max
+  $benchNameClip = 40
+  $benchpad = [maxBenchNameLength, $benchNameClip].min + 1
+  
+  $weightpad = $benchmarks.collect {
+    | benchmark |
+    benchmark.weightString.size
+  }.max
+
+  $vmpad = $vms.collect {
+    | vm |
+    vm.to_s.size
+  }.max + 1
+  
+  $analyze.each_with_index {
+    | filename, index |
+    if index &gt;= 1
+      puts
+    end
+    parseAndDisplayResults(IO::read(filename))
+  }
+  
+  if not $prepare and not $run
+    exit 0
+  end
+  
+  if FileTest.exist? BENCH_DATA_PATH
+    cmd = &quot;rm -rf #{BENCH_DATA_PATH}&quot;
+    $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity &gt;= 2
+    raise unless system cmd
+  end
+  
+  Dir.mkdir BENCH_DATA_PATH
+  
+  if $needToCopyVMs
+    canCopyIntoBenchPath = true
+    $vms.each {
+      | vm |
+      canCopyIntoBenchPath = false unless vm.canCopyIntoBenchPath
+    }
+    
+    if canCopyIntoBenchPath
+      $vms.each {
+        | vm |
+        $stderr.puts &quot;Copying #{vm} into #{BENCH_DATA_PATH}...&quot;
+        vm.copyIntoBenchPath
+      }
+      $stderr.puts &quot;All VMs are in place.&quot;
+    else
+      $stderr.puts &quot;Warning: don't know how to copy some VMs into #{BENCH_DATA_PATH}, so I won't do it.&quot;
+    end
+  end
+  
+  if $measureGC and $measureGC != true
+    found = false
+    $vms.each {
+      | vm |
+      if vm.name == $measureGC
+        found = true
+      end
+    }
+    unless found
+      $stderr.puts &quot;Warning: --measure-gc option ignored because no VM is named #{$measureGC}&quot;
+    end
+  end
+  
+  if $prepare
+    File.open(&quot;#{BENCH_DATA_PATH}/runscript&quot;, &quot;w&quot;) {
+      | file |
+      file.puts &quot;echo \&quot;HOSTNAME:\\c\&quot;&quot;
+      file.puts &quot;hostname&quot;
+      file.puts &quot;echo&quot;
+      file.puts &quot;echo \&quot;HARDWARE:\\c\&quot;&quot;
+      file.puts &quot;/usr/sbin/sysctl hw.model&quot;
+      file.puts &quot;echo&quot;
+      file.puts &quot;set -e&quot;
+      $script = file
+      $runPlans.each_with_index {
+        | plan, idx |
+        if $verbosity == 0 and not $silent
+          text1 = lpad(idx.to_s,$runPlans.size.to_s.size)+&quot;/&quot;+$runPlans.size.to_s
+          text2 = plan.to_s
+          file.puts(&quot;echo &quot; + Shellwords.shellescape(&quot;\r#{text1} #{rpad(text2,$planpad)}&quot;) + &quot;\&quot;\\c\&quot; 1&gt;&amp;2&quot;)
+          file.puts(&quot;echo &quot; + Shellwords.shellescape(&quot;\r#{text1} #{text2}&quot;) + &quot;\&quot;\\c\&quot; 1&gt;&amp;2&quot;)
+        end
+        plan.emitRunCode
+      }
+      if $verbosity == 0 and not $silent
+        file.puts(&quot;echo &quot; + Shellwords.shellescape(&quot;\r#{$runPlans.size}/#{$runPlans.size} #{' '*($suitepad+1+$benchpad+1+$vmpad)}&quot;) + &quot;\&quot;\\c\&quot; 1&gt;&amp;2&quot;)
+        file.puts(&quot;echo &quot; + Shellwords.shellescape(&quot;\r#{$runPlans.size}/#{$runPlans.size}&quot;) + &quot; 1&gt;&amp;2&quot;)
+      end
+    }
+  end
+  
+  if $run
+    unless $remoteHosts.empty?
+      $stderr.puts &quot;Packaging benchmarking directory for remote hosts...&quot; if $verbosity==0
+      Dir.chdir(TEMP_PATH) {
+        cmd = &quot;tar -czf payload.tar.gz benchdata&quot;
+        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
+        raise unless system(cmd)
+      }
+      
+      def grokHost(host)
+        if host =~ /:([0-9]+)$/
+          &quot;-p &quot; + $1 + &quot; &quot; + Shellwords.shellescape($~.pre_match)
+        else
+          Shellwords.shellescape(host)
+        end
+      end
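+      # For example (host names invented): grokHost('bot') yields 'bot', while
+      # grokHost('bot:2222') yields '-p 2222 bot', turning a port suffix into an
+      # ssh -p option.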
+      
+      def sshRead(host, command)
+        cmd = &quot;ssh #{$sshOptions.collect{|x| Shellwords.shellescape(x)}.join(' ')} #{grokHost(host)} #{Shellwords.shellescape(command)}&quot;
+        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
+        result = &quot;&quot;
+        IO.popen(cmd, &quot;r&quot;) {
+          | inp |
+          inp.each_line {
+            | line |
+            $stderr.puts &quot;#{host}: #{line}&quot; if $verbosity&gt;=2
+            result += line
+          }
+        }
+        raise &quot;#{$?}&quot; unless $?.success?
+        result
+      end
+      
+      def sshWrite(host, command, data)
+        cmd = &quot;ssh #{$sshOptions.collect{|x| Shellwords.shellescape(x)}.join(' ')} #{grokHost(host)} #{Shellwords.shellescape(command)}&quot;
+        $stderr.puts &quot;&gt;&gt; #{cmd}&quot; if $verbosity&gt;=2
+        IO.popen(cmd, &quot;w&quot;) {
+          | outp |
+          outp.write(data)
+        }
+        raise &quot;#{$?}&quot; unless $?.success?
+      end
+      
+      $remoteHosts.each {
+        | host |
+        $stderr.puts &quot;Sending benchmark payload to #{host}...&quot; if $verbosity==0
+        
+        remoteTempPath = JSON::parse(sshRead(host, &quot;cat ~/.bencher&quot;))[&quot;tempPath&quot;]
+        raise unless remoteTempPath
+        
+        sshWrite(host, &quot;cd #{Shellwords.shellescape(remoteTempPath)} &amp;&amp; rm -rf benchdata &amp;&amp; tar -xz&quot;, IO::read(&quot;#{TEMP_PATH}/payload.tar.gz&quot;))
+        
+        $stderr.puts &quot;Running on #{host}...&quot; if $verbosity==0
+        
+        parseAndDisplayResults(sshRead(host, &quot;cd #{Shellwords.shellescape(remoteTempPath + '/benchdata')} &amp;&amp; sh runscript&quot;))
+      }
+    end
+    
+    if not $remoteHosts.empty? and $alsoLocal
+      $stderr.puts &quot;Running locally...&quot;
+    end
+    
+    if $remoteHosts.empty? or $alsoLocal
+      parseAndDisplayResults(runAndGetResults)
+    end
+  end
+  
+  if $prepare and not $run and $analyze.empty?
+    puts wrap(&quot;Benchmarking script and data are in #{BENCH_DATA_PATH}. You can run &quot;+
+              &quot;the benchmarks and get the results by doing:&quot;, 78)
+    puts
+    puts &quot;cd #{BENCH_DATA_PATH}&quot;
+    puts &quot;sh runscript &gt; results.txt&quot;
+    puts
+    puts wrap(&quot;Then you can analyze the results by running bencher with the same arguments &quot;+
+              &quot;as now, but replacing --prepare-only with --analyze results.txt.&quot;, 78)
+  end
+rescue =&gt; e
+  fail(e)
+end
+  
</ins></span></pre>
</div>
</div>

</body>
</html>