[webkit-dev] Iterating SunSpider

Peter Kasting pkasting at google.com
Tue Jul 7 16:19:57 PDT 2009


I'm more verbose than Mike, but it seems like people are talking past each
other.

On Tue, Jul 7, 2009 at 3:25 PM, Oliver Hunt <oliver at apple.com> wrote:

> If we see one section of the test taking dramatically longer than another
> then we can assume that we have not been paying enough attention to
> performance in that area,
>

It depends on what your performance goal is.  If the goal is to balance
optimizations such that operation A always consumes the same time as
operation B, you are correct.  But is this always best?  The current design
says "yes".  The open question is whether that is the best possible design.

On Tue, Jul 7, 2009 at 3:58 PM, Geoffrey Garen <ggaren at apple.com> wrote:
>
> I also don't buy your conclusion -- that if regular expressions account for
> 1% of JavaScript time on the Internet overall, they need not be optimized.


I didn't see Mike say that regexes "did not need to be optimized".

Given an operation that occurs 20% of the time and another that occurs 1%
of the time, I certainly think it _might_ be appropriate to spend more
engineering effort on optimizing the first operation.  Knowing for sure
depends on how much you value the rarer cases, for reasons such as the ones
you give next:

> Second, it's important for all web apps to be fast in WebKit -- not just the
> ones that do what's common overall. Third, we want to enable not only the
> web applications of today, but also the web applications of tomorrow.


I strongly agree with these principles, but I don't see why the current
design necessarily does a better job of preserving them than all other
designs.  For example, suppose that at the time SunSpider was created (when
all subtests were roughly equal-weighted), one of the subtests exercised a
horribly slow operation whose substantial improvement would greatly benefit
future web apps.  Unfortunately, the original equal-weighting enshrines the
slowness of this operation relative to the others being tested: as soon as
you begin to make it faster, the subtests become unbalanced, and you
conclude that no further work on it is needed for the time being.  This is
a suboptimal outcome.
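
To make that dynamic concrete, here is a minimal sketch (in TypeScript,
with made-up, purely illustrative timings -- not real SunSpider data) of
how a summed score gives diminishing credit for repeatedly speeding up the
same subtest:

    // Hypothetical per-subtest times in ms.
    const sum = (times: number[]) => times.reduce((a, b) => a + b, 0);

    const original  = [100, 100, 100];  // roughly equal-weighted at creation
    const firstWin  = [50, 100, 100];   // the slow operation doubled in speed
    const secondWin = [25, 100, 100];   // ...and doubled again

    console.log(sum(original));   // 300
    console.log(sum(firstWin));   // 250 -- the first doubling buys 50 ms
    console.log(sum(secondWin));  // 225 -- the second buys only 25 ms

Each further doubling buys half as much score, so the metric steers effort
elsewhere even if real-world apps would keep benefiting.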

So in general, the question is: when some operation is slower than others,
what criteria can we use to make the best decisions about where to spend
developer effort?  Surely our greatest cost here is opportunity cost.

I accept Maciej's statement that the current design was intentional.  I also
accept that sums and geomeans each have drawbacks in guiding
decision-making.  I simply want to focus on finding the best possible design
for the framework.

For example, the framework could compute both sums _and_ geomeans, if people
thought both were valuable.  We could agree on a way of benchmarking a
representative sample of current sites to get an idea of how widespread
certain operations currently are.  We could talk with the maintainers of
jQuery, Dojo, etc. to see what sorts of operations they think it would be
most helpful to future apps to speed up.  We could instrument browsers to do
some sort of (opt-in) sampling of real-world workloads.  And so on.  Surely
together we can come up with ways to make SunSpider even better, while
keeping its current strengths in mind.
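
As one concrete illustration of why reporting both could be valuable, here
is a minimal sketch (again TypeScript, with made-up timings) in which the
two metrics rank the same pair of runs differently:

    const sum = (times: number[]) => times.reduce((a, b) => a + b, 0);
    const geomean = (times: number[]) =>
      Math.exp(times.reduce((a, b) => a + Math.log(b), 0) / times.length);

    // Hypothetical engines: A is fast everywhere except one slow subtest;
    // B is uniformly mediocre.  Times in ms, purely illustrative.
    const engineA = [10, 10, 10, 400];
    const engineB = [100, 100, 100, 100];

    console.log(sum(engineA), sum(engineB));          // 430 vs. 400: B wins
    console.log(geomean(engineA), geomean(engineB));  // ~25 vs. 100: A wins

The sum rewards fixing the single worst case; the geomean rewards
proportional speedups everywhere.  Reporting both would surface that
tension rather than hide it.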

PK