[webkit-dev] Iterating SunSpider

Mike Belshe mike at belshe.com
Tue Jul 7 16:22:39 PDT 2009


On Tue, Jul 7, 2009 at 3:58 PM, Geoffrey Garen <ggaren at apple.com> wrote:

>> Are you saying that you did see Regex as being such a high percentage of
>> javascript code?  If so, we're using very different mixes of content for our
>> tests.
>>
>
> I'm saying that I don't buy your claim that regular expression performance
> should only count as 1% of a JavaScript benchmark.


I never said that.


> I don't buy your premise -- that regular expressions are only 1% of
> JavaScript execution time on the web -- because I think your sample size is
> small, and, anecdotally, I've seen significant web applications that make
> heavy use of regular expressions for parsing and validation.


If you've got data, please post it - that would be fantastic.


> I also don't buy your conclusion -- that if regular expressions account for
> 1% of JavaScript time on the Internet overall, they need not be optimized.


I never said that.


> First, generally, I think it's dubious to say that there's any major
> feature of JavaScript that need not be optimized. Second, it's important for
> all web apps to be fast in WebKit -- not just the ones that do what's common
> overall. Third, we want to enable not only the web applications of today,
> but also the web applications of tomorrow.


I never said that either.

I'm talking about how a score is computed.  You're throwing a lot of red
herrings in here, and I don't know why.  Is this turning into an "I'm mad at
Google vs. WebKit" thing?  I'm trying to have an engineering discussion about
the merits of two different scoring mechanisms.


> To some extent, I think you must agree with me, since v8 copied
> JavaScriptCore in implementing a regular expression JIT, and the v8
> benchmark copied SunSpider in including regular expressions as a test
> component.


> I like SunSpider because of its balance. I think SunSpider, unlike some
> other benchmarks, tends to encourage broad thinking about all the different
> parts of the JavaScript language, and design tradeoffs between them, while
> discouraging tunnel vision. You can't just implement fast integer math, or
> fast property access, and call it a day on SunSpider. Instead, you need to
> consider many different language features, and do them all well.


You're completely missing the point.

Of course benchmarks should cover a broad set of features; that is not being
debated here.  I'd love to see the tests *expanded*.  With a sum, though, it
is very difficult to add "many different language features" without making
the benchmark's scoring misleading.  (See my other reply.)

The only thing I'm debating here is how the score should be computed.  There
is the summation method, whose implicit weighting of the tests shifts over
time as the suite and the engines change, and there is the geometric mean,
which weights every test equally.
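
To make that concrete, here's a quick sketch in TypeScript with made-up
per-test times (nothing to do with the real SunSpider suite), showing how the
two methods react to the same 2x speedup:

    // Hypothetical per-test times in ms; purely illustrative numbers.
    const times: Record<string, number> = {
      math: 20,
      strings: 25,
      regexp: 5,
      bigNewTest: 400,  // a new test that happens to run long
    };

    function sumScore(t: Record<string, number>): number {
      return Object.values(t).reduce((a, b) => a + b, 0);
    }

    function geomeanScore(t: Record<string, number>): number {
      const v = Object.values(t);
      return Math.exp(v.reduce((a, b) => a + Math.log(b), 0) / v.length);
    }

    // Double the speed of the regexp test (5 ms -> 2.5 ms).
    const improved = { ...times, regexp: 2.5 };

    // Sum: 450 -> 447.5, about a 0.6% change; the 400 ms test drowns it out.
    console.log(sumScore(times), sumScore(improved));

    // Geomean: 31.6 -> 26.6, a fixed 2^(1/4) (~19%) improvement, no matter
    // how large the other tests happen to be.
    console.log(geomeanScore(times), geomeanScore(improved));

With the sum, the value of optimizing a test depends entirely on how long the
other tests happen to take; with the geometric mean, a 2x win on any test is
worth the same amount.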

To think of this another way - how is JavaScript benchmarking different from
other types of benchmarks?  If we could answer this question, that would
help me a lot.

What do you think of spec.org?  They've been using the geometric mean for
scoring for years:  http://www.spec.org/spec/glossary/#geometricmean  Other
benchmarks seem to choose geometric means as well; Peacekeeper uses one
too:  http://service.futuremark.com/peacekeeper/faq.action  I'm actually
hard-pressed to find any benchmark that uses summation.  Are there any
non-SunSpider benchmarks using sums?
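
As I understand the SPEC approach (paraphrasing their docs, so treat the
details here as my assumption), each test is first normalized against a
reference system and the ratios are then combined with a geometric mean,
roughly:

    // Sketch of SPEC-style scoring: normalize each test against a reference
    // system, then take the geometric mean of the ratios, so no single test
    // can dominate the composite score.
    function specStyleScore(
      measured: Record<string, number>,   // ms per test on the system under test
      reference: Record<string, number>,  // ms per test on the reference system
    ): number {
      const names = Object.keys(reference);
      const ratios = names.map((name) => reference[name] / measured[name]);
      return Math.exp(ratios.reduce((a, r) => a + Math.log(r), 0) / ratios.length);
    }

    // Hypothetical numbers: a 2x win on any single test raises the composite
    // by the same factor (2^(1/3) here), whether that test is big or small.
    const ref = { math: 40, strings: 25, regexp: 10 };
    console.log(specStyleScore({ math: 20, strings: 12.5, regexp: 5 }, ref));   // 2.0
    console.log(specStyleScore({ math: 20, strings: 12.5, regexp: 2.5 }, ref)); // ~2.52

That's the property I'm after: adding a new test, or a test's absolute running
time changing, doesn't silently re-weight the rest of the suite.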

Thanks,
Mike