[webkit-dev] Iterating SunSpider
mike at belshe.com
Tue Jul 7 16:22:39 PDT 2009
On Tue, Jul 7, 2009 at 3:58 PM, Geoffrey Garen <ggaren at apple.com> wrote:
> Are you saying that you did see Regex as being such a high percentage of
> I'm saying that I don't buy your claim that regular expression performance
I never said that.
> I don't buy your premise -- that regular expressions are only 1% of
> small, and, anecdotally, I've seen significant web applications that make
> heavy use of regular expressions for parsing and validation.
If you've got data, please post it - that would be fantastic.
> I also don't buy your conclusion -- that if regular expressions account for
I never said that.
> First, generally, I think it's dubious to say that there's any major
> all web apps to be fast in WebKit -- not just the ones that do what's common
> overall. Third, we want to enable not only the web applications of today,
> but also the web applications of tomorrow.
I never said that either.
I'm talking about how a score is computed. You're throwing a lot of red
herrings in here, and I don't know why. Is this turning into an "I'm mad at
Google" vs. WebKit thing? I'm trying to have an engineering discussion about
the merits of two different scoring mechanisms.
> To some extent, I think you must agree with me, since the v8
> benchmark copied SunSpider in including regular expressions as a test
> I like SunSpider because of its balance. I think SunSpider, unlike some
> other benchmarks, tends to encourage broad thinking about all the different
> discouraging tunnel vision. You can't just implement fast integer math, or
> fast property access, and call it a day on SunSpider. Instead, you need to
> consider many different language features, and do them all well.
You're completely missing the point.
Of course benchmarks should cover a broad set of features; that is not being
debated here. I'd love to see the tests *expanded*. But with a sum, it is
very difficult to add "many different language features" without making the
benchmark's scoring misleading. (See my other reply.)
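To make the re-weighting concrete, here is a small sketch with made-up
per-test times (not real SunSpider numbers): adding one long-running test
to a sum-based score shrinks the weight of every existing test.

```python
# Hypothetical per-test times in ms; not real SunSpider data.
old_suite = [100, 100, 10]      # three existing tests
new_suite = old_suite + [500]   # suite after adding one heavyweight test

# Under summation, each test's weight is its share of the total time.
# The 10 ms test drops from ~4.8% of the score to ~1.4% merely because
# an unrelated test was added.
print(10 / sum(old_suite))   # ≈ 0.048
print(10 / sum(new_suite))   # ≈ 0.014
```

No engine got faster or slower, yet the relative importance of each test
changed just by growing the suite.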
The only thing I'm debating here is how the score should be computed: the
summation method, whose weighting shifts over time, or the geometric mean,
which stays balanced.
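As a minimal sketch of that difference (hypothetical timings, not real
SunSpider results), consider how the two scoring methods treat the same
2x optimization:

```python
import math

# Hypothetical per-test times in ms for two engines; not real data.
baseline = {"math": 100, "string": 100, "regexp": 10}
improved = {"math": 100, "string": 100, "regexp": 5}  # 2x faster regexp

def summed(times):
    return sum(times.values())

def geomean_speedup(times, ref):
    # Geometric mean of per-test speedups relative to the reference.
    ratios = [ref[t] / times[t] for t in times]
    return math.exp(sum(map(math.log, ratios)) / len(ratios))

# Summation: the 2x regexp win barely moves the total (210 -> 205),
# because regexp happens to be a small slice of the sum.
print(summed(baseline), summed(improved))            # 210 205
# Geometric mean: the same 2x win counts the same no matter how long
# the test runs -- a uniform 2**(1/3) factor across three tests.
print(round(geomean_speedup(improved, baseline), 3))  # 1.26
```

Under the sum, a test's influence is whatever its running time happens to
be; under the geometric mean, every test carries equal weight.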
> other types of benchmarks? If we could answer this question, that would
> help me a lot.
What do you think of spec.org? They've been using the geometric mean for
scoring for years: http://www.spec.org/spec/glossary/#geometricmean Other
benchmarks all seem to choose geometric means as well. Peacekeeper also uses
a geometric mean: http://service.futuremark.com/peacekeeper/faq.action I'm
actually hard pressed to find any benchmark which uses summation. Are there
any non-SunSpider benchmarks using sums?