[webkit-dev] Wrestling with Widgets
hyatt at apple.com
Fri Mar 14 14:03:25 PDT 2008
I am wrestling with how to handle transforms on Widgets (for features
like full page zoom) and am basically looking for some advice/feedback.
Widgets currently are:
(1) Scrollbars
(2) Scrollable views (used by frames)
(3) Plugins
On Mac, all three of these widget types are backed by NSViews. On
Windows, we hand-roll (1) and (2). (3) may be backed by an HWND.
Our cross-platform Widget abstraction is effectively a tree. Child
widgets have a frame geometry that is in the coordinate space of their
parents. In the case of scrollable views, the coordinates of the
child widget are in the scrolled document's coordinate space.
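To make that coordinate convention concrete, here is a minimal sketch (hypothetical types and names, much simpler than the real WebCore Widget/ScrollView classes) of how a point in a child widget maps up to window coordinates when a scrollable parent is involved:

```cpp
#include <cassert>

struct IntPoint { int x, y; };

// Hypothetical simplified Widget: the frame origin is expressed in the
// parent's coordinate space, and for scrollable parents the child's
// coordinates live in the scrolled document's coordinate space.
struct Widget {
    Widget* parent = nullptr;
    IntPoint frameOrigin = {0, 0};   // position in parent coordinates
    IntPoint scrollOffset = {0, 0};  // nonzero only for scroll views

    // Walk up the tree, accumulating frame offsets and undoing each
    // scrollable ancestor's scroll offset, to reach window coordinates.
    IntPoint convertToWindow(IntPoint local) const {
        IntPoint p = local;
        for (const Widget* w = this; w; w = w->parent) {
            p.x += w->frameOrigin.x;
            p.y += w->frameOrigin.y;
            if (w->parent) {  // child coords are in the scrolled space
                p.x -= w->parent->scrollOffset.x;
                p.y -= w->parent->scrollOffset.y;
            }
        }
        return p;
    }
};
```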
The question I'm struggling with is what to do with these coordinates
in the presence of transforms. It seems like windowed plugins simply
are not going to work with anything but scaling/translation transforms
on either Mac or Windows...
Here are some possibilities:
(1) Do nothing. Widgets would be positioned as though they aren't
zoomed at all. The coordinates would effectively be a more-or-less
useless lie that we'd work around whenever possible. Note that on Mac
the position of the NSView really only matters when the NSView paints.
With the current full page zoom implementation, we *do* paint the
widgets zoomed (even Flash)... we just don't do the right thing when
the widget is invalidating and repainting itself. In this scenario
we'd just attack the problems on an ad-hoc basis, e.g., force frames
into "slow scrolling mode" to stop blitting, hack windowed plugins to
position the NSView properly by hand. I think scrollers might paint
themselves too, and I'm not quite sure how to handle them.
(2) The render tree sets widgets to a transformed rectangle if
possible, e.g., if the transform consists only of translation/
scaling. The render tree will compute the transformed position and
place the widget into that position. Basically RenderWidget and
RenderLayer would be patched with this approach. Arbitrary transforms
would still not be reflected into the widget coordinate space, and
there would be a mismatch between widget coordinates and the render
tree coordinates that would now have to be dealt with. For example,
when hit testing and drilling down into child widgets, transforms
would actually have to be applied. However, underlying native widgets
(NSViews and HWNDs) would have the correct geometry without having to
hack specific subclasses.
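Approach (2) could be sketched roughly like this (hypothetical names; it assumes the row-vector affine matrix [a b c d e f], where b == c == 0 means no rotation or shear, so only translation + scale is reflected into the widget's frame):

```cpp
#include <cassert>

struct AffineTransform { double a, b, c, d, e, f; };
struct IntRect { int x, y, w, h; };

// A transform can be folded into widget geometry only when it is a pure
// translation + scale (no rotation or shear components).
bool isTranslationOrScale(const AffineTransform& t) {
    return t.b == 0 && t.c == 0;
}

// Returns true and writes the mapped frame when the transform is simple
// enough to reflect into widget coordinates; otherwise the widget keeps
// its untransformed frame, and the render-tree/widget coordinate mismatch
// has to be handled elsewhere (e.g., at hit-testing time).
bool mapFrameIfPossible(const IntRect& frame, const AffineTransform& t,
                        IntRect& out) {
    if (!isTranslationOrScale(t))
        return false;
    out.x = static_cast<int>(frame.x * t.a + t.e);
    out.y = static_cast<int>(frame.y * t.d + t.f);
    out.w = static_cast<int>(frame.w * t.a);
    out.h = static_cast<int>(frame.h * t.d);
    return true;
}
```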
(3) Add the notion of transforms to Widget. A widget would have an
AffineTransform that would be relative to its parent. The render tree
would be responsible for computing and setting transforms on widgets
and would then continue to use the same coordinates it does now
(untransformed) when moving/resizing widgets. It would then be up to
the underlying Widget code to use frame geometry + transform together
to compute the real native widget's position. Arbitrary transforms
would now *potentially* be able to be handled by the widget
abstraction on a platform that was smart enough (i.e., not OS X or
Windows). :) I think this approach looks the most elegant from an API
perspective, but in practice it could lead to more duplication of
effort in platform-specific Widget code.
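A rough sketch of approach (3), again with hypothetical names: the widget stores an AffineTransform relative to its parent, the render tree keeps setting untransformed frame coordinates, and platform code composes the transform chain to find where the native view really goes:

```cpp
#include <cassert>

struct AffineTransform {
    // Affine matrix [a c e; b d f; 0 0 1].
    double a = 1, b = 0, c = 0, d = 1, e = 0, f = 0;

    // Returns this * other (apply 'other' first, then this).
    AffineTransform multiplied(const AffineTransform& o) const {
        return { a * o.a + c * o.b, b * o.a + d * o.b,
                 a * o.c + c * o.d, b * o.c + d * o.d,
                 a * o.e + c * o.f + e, b * o.e + d * o.f + f };
    }
};

struct TransformedWidget {
    const TransformedWidget* parent = nullptr;
    AffineTransform transform;      // relative to the parent widget
    double frameX = 0, frameY = 0;  // untransformed render-tree coordinates

    // Compose transforms from the root down to this widget.
    AffineTransform totalTransform() const {
        AffineTransform t = transform;
        if (parent)
            t = parent->totalTransform().multiplied(t);
        return t;
    }

    // Map the untransformed frame origin into native (window) space;
    // this is the position the underlying NSView/HWND would be given.
    void nativeOrigin(double& outX, double& outY) const {
        AffineTransform t = totalTransform();
        outX = t.a * frameX + t.c * frameY + t.e;
        outY = t.b * frameX + t.d * frameY + t.f;
    }
};
```

On a platform whose native views can carry arbitrary transforms, totalTransform() could be handed to the view directly; on Mac and Windows, platform code would only be able to honor the translation/scale part.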
Anyone have any other ideas, or want to express an opinion about these
approaches?
(hyatt at apple.com)