[webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

Adam Barth abarth at webkit.org
Mon Dec 5 11:32:38 PST 2011


On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin <cmarrin at apple.com> wrote:
> To be clear, it's not the difference between white and black pixels, it's
> the difference between pixels with transparency and those without.

Can you explain why the attack is limited to distinguishing between
black and transparent pixels?  My understanding is that these attacks
are capable of distinguishing arbitrary pixel values.

> And I've
> never seen a renderer that runs "1000x slower" when rendering a pixel with
> transparency. It may runs a few times slower, and maybe even 10x slower. But
> that's about it.

As I wrote above, I don't have a proof-of-concept, so I can't give you
exact figures on how many bits the attacker can extract per second.

> I'm still waiting to see an actual "compelling" attack. The one you mention
> here:
>
> http://www.contextis.co.uk/resources/blog/webgl/poc/index.html
>
> has never seemed very compelling to me. At the default "medium" quality
> setting the image still takes over a minute to be generated and it's barely
> recognizable. You can't read the text in the image or even really tell what
> the image is unless you had the reference next to it. For something big,
> like the WebGL logo, you can see the shape. But that's because it's a lot of
> big solid color. And of course the demo only captures black and white, so
> none of the colors in an image come through. If you turn it to its highest
> quality mode you can make out the blocks of text, but that takes well over
> 15 minutes to generate.

A few points:

1) Attacks only get better, never worse.  It's unlikely that this demo
is the best possible attack.  It just gives us a feel for what kinds
of attacks are within the realm of possibility.

2) That attack isn't optimized for extracting text.  For the attack
I'm worried about, the attacker is interested in computing a binary
predicate over each pixel, which is much easier than computing the
value of the pixel.  Moreover, the attacker doesn't need to extract
every pixel.  He or she just needs to extract enough information to
distinguish glyphs in a known typeface.

3) According to the data we gathered for this paper
<http://www.adambarth.com/papers/2007/jackson-barth-bortz-shao-boneh.pdf>,
an attacker can easily spend four or five minutes executing an attack
like this without losing too many users.

> And this exploit is using WebGL, where the author has a huge amount of
> control over the rendering. CSS Shaders (and other types of rendering on the
> page) give you much less control over when rendering occurs so it makes it
> much more difficult to time the operations. I stand behind the statement,
> "... it seems difficult to mount such an attack with CSS shaders because the
> means to measure the time taken by a cross-domain shader are limited.",
> which you dismissed as dubious in your missive. With WebGL you can render a
> single triangle, wait for it to finish, and time it. Even if you tuned a CSS
> attack to a given browser whose rendering behavior you understand, it would
> take many frame times to determine the value of a single pixel and even then
> I think the accuracy and repeatability would be very low. I'm happy to be
> proven wrong about this, but I've never seen a convincing demo of any CSS
> rendering exploit.
>
> This all begs the question. What is an "exploit"? If I can reproduce
> with 90% accuracy a 100x100 block of RGB pixels in 2 seconds, then I think
> we'd all agree that we have a pretty severe exploit. But what if I can
> determine the color of a single pixel on the page with 50% accuracy in 30
> seconds. Is that an exploit? Some would say yes, because that can give back
> information (half the time) about visited links. If that's the case, then
> our solution is very different than in the first case.

It's a matter of risk.  I'm not sure it's helpful to set a hard
cut-off for what bit rate constitues an exploit.  We'd never be able
to figure out the exact bit rate anyway.  Instead, we should view more
efficient attack vectors as being higher risk.

> I think we need to agree on the problem we're trying to solve and then prove
> that the problem actually exists before trying to solve it. In fact, I think
> that's a general rule I live my life by :-)

I think it's clear what problem we're trying to solve.  We do not want
to provide web attackers a mechanism for extracting sensitive data
from the browser.  Here are a couple examples of sensitive data:

1) The text of other web sites.
2) Whether the user has visited another site previously.

I presume that's all non-controversial.  The part we seem to disagree
about is whether CSS Shaders causes this problem.  Based on everything
we know today, it seems quite likely they do.

Adam


More information about the webkit-dev mailing list