<html><head><meta http-equiv="Content-Type" content="text/html charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><br><div><div>On May 7, 2014, at 9:08 PM, Rik Cabanier <<a href="mailto:cabanier@gmail.com">cabanier@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, May 7, 2014 at 7:58 PM, Brady Eidson <span dir="ltr"><<a href="mailto:beidson@apple.com" target="_blank">beidson@apple.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div style="word-wrap:break-word"><div><br><div><div>
On May 7, 2014, at 5:38 PM, Rik Cabanier <<a href="mailto:cabanier@gmail.com" target="_blank">cabanier@gmail.com</a>> wrote:</div>
<br><blockquote type="cite"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, May 7, 2014 at 5:07 PM, Benjamin Poulain <span dir="ltr"><<a href="mailto:benjamin@webkit.org" target="_blank">benjamin@webkit.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div>On 5/7/14, 4:13 PM, Benjamin Poulain wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
On 5/7/14, 3:52 PM, Filip Pizlo wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Exactly. Ben, Oliver, and others have made arguments against web<br>
workers. Rik is not proposing web workers. We already support them. The<br>
point is to give an API that lets developers opt into behaving nicely if they<br>
are already using web workers.<br>
</blockquote>
<br>
I have nothing against Web Workers. They are useful for dispatching<br>
background tasks.<br>
<br>
They are basically the Web equivalent of GCD's dispatch_async(), which is<br>
already a very useful tool.<br>
<br>
What you are suggesting is useful for making Web Workers the tool to do<br>
high-performance multi-threaded computation.<br>
I don't think Web Workers are a great tool for that job at the moment. I<br>
would prefer something along the lines of TBB or GCD.<br>
<br>
<br>
For high performance computation, I think a more useful API would be<br>
something like TBB parallel_for with automatic chunking.<br>
It is actually hard to do better than that, whatever the number of cores,<br>
unless you know your task very, very well.<br>
<br>
It would be a little more work for us, but a huge convenience for the<br>
users of Web Workers.<br>
</blockquote>
<br></div>
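<div><br></div><div>To make the parallel_for idea concrete, here is a rough sketch in plain JavaScript. The names are hypothetical, nothing below is an existing WebKit or Web Worker API; it only shows the chunking half of such a model: split an index range into roughly equal pieces, each of which could then be posted to a pooled worker.</div>

```javascript
// Hypothetical sketch (not a real API): split [begin, end) into `parts`
// roughly equal chunks, the way a parallel_for with automatic chunking
// would, so each chunk can be dispatched to one Web Worker.
function chunkRange(begin, end, parts) {
  const total = end - begin;
  const chunks = [];
  for (let i = 0; i < parts; i++) {
    const lo = begin + Math.floor((total * i) / parts);
    const hi = begin + Math.floor((total * (i + 1)) / parts);
    if (lo < hi) chunks.push([lo, hi]); // skip empty chunks when parts > total
  }
  return chunks;
}

// Each [lo, hi) pair would then go to a pooled worker, e.g.:
//   pool[i].postMessage({ kernel: "sum", lo, hi });
```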
After chatting with Filip, it seems such a model is unlikely to happen anytime soon for JavaScript.<br>
<br>
In the absence of any tasks/kernels model, I am in favor of exposing a "good number of threads" API. It is definitely better than nothing.</blockquote><div><br></div><div>Do we know what this number would be? My guess would be the number of cores for "regular" systems...</div>
</div></div></div></blockquote><div><br></div></div></div><div>Define “regular” systems: </div></div></blockquote><div><br></div><div>"regular" systems are those where all running CPUs are of the same type. There are some exotic systems where some CPUs are much faster than others. I'm unsure what we should return there.</div>
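<div><br></div><div>For illustration, a sketch of how a page might turn whatever number such an API reports into a worker-pool size. The helper name and the clamping policy are made up for this example, and I'm assuming the value surfaces as something like navigator.hardwareConcurrency (the name used by the patch in [3]):</div>

```javascript
// Hypothetical helper (not part of any spec): pick a worker-pool size
// from the core count an API like navigator.hardwareConcurrency reports.
// Clamp it so exotic many-core machines don't oversubscribe, and fall
// back to a single worker when the value is missing or nonsensical.
function workerPoolSize(reportedCores, maxWorkers = 8) {
  if (!Number.isInteger(reportedCores) || reportedCores < 1) return 1;
  // Leave one core for the main thread / UI, but always use at least one.
  return Math.min(maxWorkers, Math.max(1, reportedCores - 1));
}
```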
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div style="word-wrap:break-word"><div> As Ryosuke mentioned, for systems that run on battery power (read: a vast majority of systems), keeping cores asleep to preserve battery life is often preferable for the user to waking up all available hardware and building up heat.</div>
</div></blockquote><div><br></div><div>Actually, spinning up more cores while on battery power might be more efficient.</div></div></div></div></blockquote></div><div><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div>I'm having a hard time finding good data, but take this chart for instance: <a href="http://www.anandtech.com/show/7903/samsung-galaxy-s-5-review/5" target="_blank">http://www.anandtech.com/show/7903/samsung-galaxy-s-5-review/5</a></div>
<div>Let's say you have a task that would take 1 core 4 seconds. This would mean 4 s x 2612 mW = 10448 mW·s</div><div>Now if you can divide it over 4 cores: display = 854 mW (AMOLED), one CPU core (simplified) = 2612 - 854 = 1758 mW -> 1 s x (854 + 4 x 1758) mW = 7886 mW·s</div>
<div><br></div><div>In the desktop world, Intel Turbo Boost [1] boosts single-thread performance, but at the expense of making the CPU run much hotter. Spreading the load evenly will reduce power usage, so the operations-per-watt ratio will improve.</div>
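<div><br></div><div>Restating the back-of-the-envelope numbers above as code (figures taken from the chart cited earlier; mW x s is used loosely as the energy unit, and lower is better):</div>

```javascript
// Back-of-the-envelope energy comparison from the figures above:
// 854 mW for the AMOLED display, 2612 mW total with one core busy.
const displayMw = 854;                      // display alone
const oneCoreTotalMw = 2612;                // display + one busy core
const coreMw = oneCoreTotalMw - displayMw;  // ~1758 mW per core (simplified)

// A 4-second task run serially on one core:
const serialEnergy = 4 * oneCoreTotalMw;             // mW·s

// The same task split across 4 cores, finishing in 1 second:
const parallelEnergy = 1 * (displayMw + 4 * coreMw); // mW·s
```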
<div>There's a paper from NVIDIA that also describes this [2].</div><div><br></div><div>Just because you can break up the work doesn't mean that you do MORE work.</div></div></div></div></blockquote><div><br></div>I’m not arguing that more cores over less time can never be more efficient. Sure it can be, on certain systems and under certain conditions.</div><div><br></div><div>But fewer cores over more time can also be more efficient in other circumstances. Waking cores can be expensive, both from a battery and time perspective. In fact, transitions through P-states, T-states, and C-states all incur a cost.</div><div><br></div><div>A lot of this discussion has been focused on “make long-running computational algorithms better”, meaning the types of threads that last minutes or even hours. Much more common are the many threads that transition in and out of existence over short periods of time, where attempting to minimize these transitions is important for battery life.</div><div><br><blockquote type="cite"><div dir="ltr"><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div class="gmail_extra"><div class="gmail_quote"><div>Maybe we can keep the current patch that returns the number of available CPUs for now. [3]</div></div></div></blockquote></div></blockquote><div><br></div>This conversation has devolved into a cost/benefit analysis of using all the cores versus taking it easy on the system, but it has completely ignored the fingerprinting problem.</div><div><br></div><div>Instead of passive-aggressively commenting on the patch, I’ve now actually r-’ed it until at least that concern is resolved.</div><div><br></div><div>~Brady</div><div><br></div><div><blockquote type="cite"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">
<div><br></div><div>1: <a href="http://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html?wapkw=turbo+boost" target="_blank">http://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html?wapkw=turbo+boost</a></div>
<div>2: page 12 of <a href="http://www.nvidia.com/content/PDF/tegra_white_papers/tegra-whitepaper-0911b.pdf" target="_blank">http://www.nvidia.com/content/PDF/tegra_white_papers/tegra-whitepaper-0911b.pdf</a></div><div>3: <a href="https://bugs.webkit.org/show_bug.cgi?id=132588" target="_blank">https://bugs.webkit.org/show_bug.cgi?id=132588</a></div>
<div><br></div></div></div></div>
</blockquote></div><br></body></html>