[webkit-dev] parallel painting

Zoltan Herczeg zherczeg at inf.u-szeged.hu
Fri Jun 10 00:10:20 PDT 2011


Wow, your use case is really interesting; I had never thought about that.
Yes, that would be a simple extension to the parallel painting system:
you could replay the commands anywhere, any number of times. It is
basically a vector graphics representation of the screen, so you could
scale it up.

Actually, I am really happy that the techniques we have been working on
are eventually attracting others. We were the first to experiment with
parallelization and SIMD instruction sets, and their usage is growing in
WebKit now. Devices (and people) actually use them! It seems playing with
new approaches (even radical changes) is never a waste of time. We can
learn a lot from them.


> On 6/9/2011 8:24 PM, Pierre-Antoine LaFayette wrote:
>> Android uses a retained-mode rendering approach as well, where paint
>> operations are recorded on a WebCore thread and painting is actually
>> done on the UI thread. It isn't necessarily the best approach, but I
>> suppose it depends on the platform whether or not there is much to gain.
>> You still need to worry about synchronization.
> ...
>> On 6 April 2010 03:24, Eric Seidel <eric at webkit.org
>> <mailto:eric at webkit.org>> wrote:
>>     Parallel painting would only be useful if the graphics layer is
>>     incredibly slow.  In most WebKit ports we do not see very much time
> ...
>>     On Sat, Apr 3, 2010 at 10:32 PM, Zoltan Herczeg
>>     <zherczeg at inf.u-szeged.hu <mailto:zherczeg at inf.u-szeged.hu>> wrote:
>>     > Hi,
>>     >
>>     > I am working on a parallel painting feature for WebKit (bug id:
>>     36883).
>>     > Basically it records the painting commands on the main thread, and
>>     > replays them on a painting thread. The gain would be that the
>>     > recording operation
> Is this something that could be used to "duplicate" painting commands?
> I'm very interested in enabling secondary painting contexts, to enable a
> better representation of zoom and other common assistive techniques.
> Example: if the recording is replayed, prefixed with a scale and crop, a
> user could be presented with a crisp and clear magnification of a
> focused region or other sub-region. Such techniques could also be useful
> for remote viewing, via serialization, and for efficient screen dumps
> [assuming the render works, of course]. It'd be great if, at some point,
> secondary user agents, like the popular ZoomText Magnifier, were able to
> interact with WebKit and request regions to be painted at a higher
> resolution, so as to display the magnified image at native resolution.
> Does that make sense? Is that something that this technique might
> eventually provide? I suspect that screen mirroring and other forms of
> screen sharing will become more common as physical screens become more
> prevalent in our daily lives.
> -Charles
