[webkit-dev] support for navigator.cores or navigator.hardwareConcurrency

Ryosuke Niwa rniwa at webkit.org
Wed May 7 21:32:12 PDT 2014


On Wed, May 7, 2014 at 9:08 PM, Rik Cabanier <cabanier at gmail.com> wrote:

> On Wed, May 7, 2014 at 7:58 PM, Brady Eidson <beidson at apple.com> wrote:
>
>>
>> On May 7, 2014, at 5:38 PM, Rik Cabanier <cabanier at gmail.com> wrote:
>>
>> On Wed, May 7, 2014 at 5:07 PM, Benjamin Poulain <benjamin at webkit.org>wrote:
>>
>>> On 5/7/14, 4:13 PM, Benjamin Poulain wrote:
>>>
>>>> On 5/7/14, 3:52 PM, Filip Pizlo wrote:
>>>>
>>>>> Exactly. Ben, Oliver, and others have made arguments against web
>>>>> workers. Rik is not proposing web workers.  We already support them.
>>>>> The point is to give an API to let developers opt into behaving nicely
>>>>> if they are already using web workers.
>>>>>
>>>>
>>>> I have nothing against Web Workers. They are useful to dispatch
>>>> background tasks.
>>>>
>>>> They are basically the Web equivalent of GCD's dispatch_async(), which is
>>>> already a very useful tool.
>>>>
>>>> What you are suggesting is useful for making Web Workers the tool for
>>>> high-performance multi-threaded computation.
>>>> I don't think Web Workers are a great tool for that job at the moment. I
>>>> would prefer something along the lines of TBB or GCD.
>>>>
>>>>
>>>> For high performance computation, I think a more useful API would be
>>>> something like TBB parallel_for with automatic chunking.
>>>> It is actually hard to do faster than that with the number of cores
>>>> unless you know your task very, very well.
>>>>
>>>> It would be a little more work for us, but a huge convenience for the
>>>> users of Web Workers.
>>>>
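As a rough illustration (not an existing API, and not part of Benjamin's
proposal): a parallel_for-style helper with naive automatic chunking on top
of today's Web Workers might look like the sketch below. The worker script
name and the chunking heuristic are made up for the example.

    // Sketch only: split [begin, end) into chunks and farm them out to a
    // small pool of workers.  'chunk-worker.js' is a hypothetical worker
    // script that processes the {from, to} range it receives and posts one
    // message back when it is done.  Result collection and completion
    // detection are omitted.
    function parallelFor(begin, end, poolSize) {
      var chunk = Math.max(1, Math.ceil((end - begin) / (poolSize * 4)));
      var next  = begin;

      function dispatch(worker) {
        if (next >= end) {                 // no work left for this worker
          worker.terminate();
          return;
        }
        worker.postMessage({ from: next, to: Math.min(next + chunk, end) });
        next += chunk;
      }

      for (var i = 0; i < poolSize; i++) {
        var worker = new Worker('chunk-worker.js');
        worker.onmessage = function (event) {
          dispatch(event.target);          // idle worker gets the next chunk
        };
        dispatch(worker);
      }
    }

    // e.g. parallelFor(0, pixelCount, navigator.hardwareConcurrency || 2);
    // where hardwareConcurrency is the attribute being discussed here.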
>>>
>>> After chatting with Filip, it seems such a model is unlikely to happen
>>> anytime soon for JavaScript.
>>>
>>> In the absence of any tasks/kernels model, I am in favor of exposing a
>>> "good number of threads" API. It is definitely better than nothing.
>>
>>
>> Do we know what this number would be? My guess would be the number of
>> cores for "regular" systems...
>>
>>
>> Define "regular" systems:
>>
>
> "regular" systems are those were all running CPU's are of the same type.
> There are some exotic systems where some CPU's are much faster than others.
> I'm unsure what we should return there.
>
>
>>  As Ryosuke mentioned, for systems that run on battery power (read: a
>> vast majority of systems), keeping cores asleep to preserve battery life is
>> often preferable to the user over waking up all available hardware and
>> building up heat.
>>
>
> Actually, spinning up more cores while on battery power might be more
> efficient.
>

This depends on the kinds of workloads.

> I'm having a hard time finding good data, but take this chart for instance:
> http://www.anandtech.com/show/7903/samsung-galaxy-s-5-review/5
> Let's say you have a task that would take 1 core 4 seconds. This would
> mean 4 s x 2612 mW = 10448 mW·s of energy.
> Now if you can divide it over 4 cores: display = 854 mW (AMOLED), CPU core
> (simplified) = 2612 - 854 = 1758 mW -> 1 s x (854 + 4 x 1758 mW) = 7886 mW·s
>

Even if your app weren't doing anything, one of the CPUs is bound to be
woken up by other daemons and the kernel itself, so keeping one of the cores
awake is relatively cheap.  You can't just add or subtract wattage like that.
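To make the arithmetic above explicit (same rough numbers from the linked
Galaxy S5 chart, expressed as energy rather than power; how much of the
"display plus one awake core" baseline gets charged to the task is exactly
the point in dispute):

    var total   = 2612;              // mW, platform power with one core busy
    var display = 854;               // mW, AMOLED display alone
    var perCore = total - display;   // 1758 mW attributed to one busy core

    var oneCoreEnergy  = 4 * total;                    // 4 s -> 10448 mW·s
    var fourCoreEnergy = 1 * (display + 4 * perCore);  // 1 s ->  7886 mW·s
    // The display stays on and the kernel wakes a core either way, so part
    // of each figure is paid regardless of the task; shifting that baseline
    // changes how the two numbers compare.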

> In the desktop world, Intel Turbo Boost [1] boosts single-thread
> performance, but at the expense of making the CPU run much hotter.
>

Precisely because of this technology, doing all the work in a single thread
might be faster in some cases.

> Putting an even load on the cores will reduce power usage, so the ratio of
> operations per watt will improve. There's a paper from NVIDIA that also
> describes this [2].
>
> Just because you can break up the work doesn't mean that you do MORE work.
>
>
>> Also, what type of cores?  Physical cores, or logical cores?
>>
>
> It would be logical cores. I think we're all compiling and running WebKit
> with Hyper-Threading turned on, so it seems to work fine for parallel
> processing these days.
>

That's because compiling C code happens to fall within a set of workloads
that could benefit from running concurrently with SMT/HyperThreading.

OS X's scheduler, for example, appears to have been very carefully tuned
not to schedule two high-priority jobs onto the two logical cores of a single
physical core when other physical cores are available.
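As a rough sketch of how a page might consume such a value, assuming the
attribute ends up as navigator.hardwareConcurrency reporting logical cores
(neither of which is settled here), with an arbitrary cap and a hypothetical
worker script:

    // Size a worker pool from the reported core count, falling back to a
    // small constant where the attribute is unavailable, and capping it so
    // a single page does not try to saturate every logical core.
    var reported = navigator.hardwareConcurrency || 2;  // proposed attribute
    var poolSize = Math.min(reported, 8);               // arbitrary cap
    var workers  = [];
    for (var i = 0; i < poolSize; i++)
      workers.push(new Worker('compute-worker.js'));    // hypothetical script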

- R. Niwa