[webkit-dev] setTimeout as browser speed throttle

Mike Belshe mike at belshe.com
Tue Sep 30 07:34:36 PDT 2008


Hi, Maciej,
Thanks for the response.

Comments below

On Mon, Sep 29, 2008 at 8:06 PM, Maciej Stachowiak <mjs at apple.com> wrote:

>
> On Sep 29, 2008, at 7:26 PM, Mike Belshe wrote:
>
> Hi,
> One of the differences between Chrome and Safari is that Chrome sets the
> setTimeout clamp to 1ms as opposed to 10ms.  This means that if the
> application writer requests a timer of less than 10ms, Chrome will allow it,
> whereas Safari will clamp the minimum timeout to 10ms.  The reason we did
> this was to minimize browser delays when running graphical javascript
> applications.
>
> This has been a concern for some, so I wanted to bring it up here and get
> an open discussion going.  My hope is to lower or remove the clamp over
> time.
>
> To demonstrate the benefit, here is one test case which benefits from
> removing the setTimeout clamp.  Chrome gets about a ~4x performance boost by
> reducing the setTimeout clamp.  This programming pattern in javascript is
> very common.
>
>    http://www.belshe.com/test/sort/sort.html
>
> One counter argument brought up is a claim that all other browsers use a
> 10ms clamp, and this might cause incompatibilities.  However, it turns out
> that browsers already use widely varying values.
>
>
> I believe all major browsers (besides Chrome) have a minimum of either 10ms
> or 15.6ms. I don't think this is "widely varying".
>

I mean no disrespect, but this is a 56% variance.  So yes, I do think this
is widely varying, and it has a very non-trivial (~35%) impact on the sort
example above.


>  We also really haven't seen any incompatibilities due to this change.  It
> is true that having a lower clamp can provide an easy way for web developers
> to accidentally spin the CPU, and we have seen one high-profile instance of
> this.  But of course spinning the CPU can be done in javascript all by
> itself :-)
>
>
> The kinds of problems we are concerned about are of four forms:
>
> 1) Animations that run faster than intended by the author (it's true that
> 10ms vs 16ms floors will give slight differences in speed, but not nearly as
> much so as 10ms vs no delay).
>

I'm not aware of any of these; they probably exist, but does anyone have
one?
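
To be concrete about the kind of page that would be affected, it would have
to look something like this (a hypothetical sketch - I haven't seen a real
page written this way, and the element id is made up):

    // Animation whose speed is tied to how often the timer fires: with a
    // 10ms clamp it moves ~200px/sec; with no clamp it runs much faster.
    var x = 0;
    function step() {
        x += 2;                                     // fixed distance per callback
        var box = document.getElementById('box');   // assumed: an absolutely positioned element
        if (box) box.style.left = x + 'px';
        setTimeout(step, 0);                        // requested 0ms; actual rate = the clamp
    }
    step();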


>
> 2) Burning CPU and battery on pages where the author did not expect this to
> happen, and had not seen it on the browsers he or she has tested with.
>

I consider the few websites that behave this way to be buggy.  If the app
requested a 2ms wait, and the browser actually woke it up in 2ms, and this
caused the application to spin the CPU, that's a bug in the application,
right?  So you are applying rules for the unexpected case and prohibiting
the expected one, while I am applying them in reverse.

There also isn't an application incompatibility here; the website still
works just fine.  Using more CPU is bad, of course, but do we really want to
slow down all apps to work around what is a bug in a few apps?  Honestly,
there aren't many apps that appear to be spinning the CPU here, and yet
we've got multiple examples of apps that can go much, much faster.



> 3) Possibly slowing things down if a page is using a 0-delay timer to poll
> for completion of network activity. The popular JavaScript library jQuery
> does this to detect when all stylesheets have loaded. Lack of clamping could
> actually slow down the loading it is intended to wait for.
>

Sure, requesting a 0ms timeout to wait for network activity is probably more
aggressive than the author intended.  Couldn't this be fixed with a
one-character bug fix? :-)  Should we slow down all apps, or just fix the bug?
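
For reference, the polling pattern in question looks something like this (a
rough sketch of the idea, not jQuery's actual source; expectedSheetCount is
an assumed page-defined value):

    // Poll until the expected number of stylesheets has loaded, then run
    // the callback.  Written this way, the poll rate is whatever the
    // browser's clamp happens to be.
    var expectedSheetCount = 3;   // assumed: how many sheets the page is waiting for

    function waitForStylesheets(callback) {
        if (document.styleSheets.length < expectedSheetCount) {
            setTimeout(function () { waitForStylesheets(callback); }, 0);
        } else {
            callback();
        }
    }

The fix is simply to ask for the polling interval the author actually wants
(say, setTimeout(..., 10)) instead of relying on the clamp to supply it.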


>
> 4) Future content that is authored in one of Safari or Chrome that depends
> on timing of 0-delay timers will have different behavior in the other. Thus,
> we get less compatibility benefit for WebKit-based browsers through
> cross-testing.
>

WebKit-based browsers already have varied timeouts.  Safari for Mac uses
10ms, and Safari for Windows uses 15.6ms.  So even today, application
developers cannot rely on WebKit using one true value.


>
> The fact that you say you have seen one high-profile instance doesn't sound
> to me like there are no incompatibilities. It sounds like there are some,
> and you have encountered at least one of them. Points 1 and 2 are what made
> us add the timer minimum in the first place, as documented in WebKit's SVN
> history and ChangeLogs. We originally did not have one, and added it for
> compatibility with other browsers.
>

Again, to be clear - the site still works fine; it's not like the site
rendered incorrectly.  But it is true that the CPU was spinning, which is
bad.  And I agree - I am sure there are more.  But there are also more sites
that benefit from this change.


> Currently Chrome gets an advantage on some benchmarks by accepting this
> compatibility risk. This leads to misleading performance comparisons, in
> much the same way as firing the "load" event before images are loaded would.
>

If the benchmark is improperly affected by this timeout, then the benchmark
is flawed under all circumstances.  Surely you agree that IE, with its
15.6ms timeout, would perform worse than WebKit with a 10ms timeout on such
a benchmark?  We could keep all apps going slow with the current setTimeout
clamp, but it wouldn't change the fact that the benchmark is not measuring
what it intends to measure; unless...

Maybe the benchmark actually is valid?  I don't know which benchmark you are
referring to.  But I do think the sort application I provided is legit.
It's a very common programming model, and the JavaScript developers I have
spoken to understand the implications of this well.



>
>  Here is a summary of the minimum timeout for existing browsers (you can
> test your browser with this page: http://www.belshe.com/test/timers.html )
> Safari for Mac:       10ms
> Safari for Windows:   15.6ms
> Firefox:              10ms or 15.6ms, depending on whether or not Flash
>                       is running on the system
> IE:                   15.6ms
> Chrome:               1ms (future - remove the clamp?)
>
> So here are a couple of options:
>    1) Remove or lower the clamp so that javascript apps can run
> substantially faster.
>    2) Keep the clamp and let them run slowly :-)
>
> Thoughts?  It would be great to see Safari and Chrome use the same clamping
> values.
>
>
> Or there is option 3:
>
> 3) Restore the clamp for setTimeout and setInterval to 10ms for
> compatibility, and add a new setHighResTimer API that does not have any
> lower bound.
>
> This would let aware Web applications get the same benefit, but without any
> of the compatibility risk to legacy Web content. The main argument against
> doing things this way is that it would add API surface area. But that seems
> like a small price to pay for removing the compatibility risk, and turning
> the change into something other browsers would be willing to adopt.
>
> I would like to propose an API along these lines for HTML5 but I would
> prefer if we can achieve consensus in the WebKit community first.
>

OK - we could add a new API; I could potentially live with that.  I'd want
to be really sure that the existing API didn't fit, and that the new API
offered more than just a new name.  Nobody wants gratuitous APIs, right?
I'd like to exhaust all possibilities for the existing API before adding a
new one.  (By the way, would we need two new APIs - setHiResTimeout and
setHiResInterval?  Would WebKit introduce clamps on those too?  What if the
underlying clock is the same for both APIs?)
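
Just to make the shape of the proposal concrete, I imagine content would end
up feature-detecting it along these lines (purely hypothetical - neither the
name setHiResTimeout nor its clamping behavior is specified anywhere yet):

    // Prefer the proposed unclamped API when present; fall back to the
    // existing clamped setTimeout otherwise.
    function scheduleSoon(fn) {
        if (window.setHiResTimeout) {
            window.setHiResTimeout(fn, 0);   // proposed: no lower bound
        } else {
            window.setTimeout(fn, 0);        // today: clamped to 10ms / 15.6ms
        }
    }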

One question I have is: why is 10ms the right value?  If some browsers can
do 15.6ms and some 10ms, would 5ms also be acceptable?  How about 9ms?  3ms?
We've already recognized that browsers differ by as much as 56%, so how do
we determine what the limit of variance should be?
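
The effective clamp is also easy to measure; the timers.html page linked
above does something along these lines (a rough sketch, not its actual code):

    // Fire a 0ms interval many times and divide the elapsed time by the
    // number of ticks to estimate the browser's effective minimum timeout.
    var ticks = 0;
    var start = new Date().getTime();
    var id = setInterval(function () {
        ticks++;
        if (ticks === 100) {
            clearInterval(id);
            var perTick = (new Date().getTime() - start) / ticks;
            document.title = 'approx. minimum timeout: ' + perTick + 'ms';
        }
    }, 0);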

Overall, I think the evidence provided here is pretty compelling that there
is a very real, plausible, and useful speedup for web applications from
removing or reducing the clamp.  The pushback is about compatibility, but
the examples of incompatibility are mostly theoretical.  The only real
example is a relatively rare CPU spinner, which isn't an incorrectly
rendered webpage at all and is always due to improper coding in the
website's JavaScript.  Is this really worthy of a new API for all browser
vendors to implement, document, and distribute, and then for all web
developers to learn and adopt?

Ten years from now, when there are two APIs, will we need both?  10ms will
be a lot of processor time then.  Who will want this legacy API?  It's not
spec'd to 10ms now; do we really want to retroactively spec it to be slow?

Thanks,
Mike




>
> Regards,
> Maciej
>
>

