[webkit-gtk] WebKit2GTK+ and compositing in the UIProcess
emanuele.aina at collabora.com
Wed Dec 9 03:45:00 PST 2015
zan at falconsigh.net wrote:
[CC'ing Daniel and Pekka who definitely know more about this stuff than me]
> > Unlike Žan's WebKitForWayland approach, where he just needs
> > to export the handle for the final surface buffer to the WebProcess
> > which would then use it as a GL render target, we need to be able
> > to use the layer buffers as GL render targets in the WebProcess and
> > as textures in the UIProcess.
> The rendering target is actually set up in the WebProcess, simply by
> creating a gbm_surface object and using that as the native window for
> the EGLSurface.
Ah, thanks for clearing up my confusion. :D
> After the rendering and the buffer swap call, the front buffer is
> locked and the information about the returned gbm_bo object is passed
> to the UIProcess.
> There the buffer object is imported and used as appropriate for the
> backend that's been chosen.
> > Exporting textures from the UIProcess as dmabufs and using them as
> > render targets in the WebProcess is not guaranteed to work in an
> > efficient manner, as the GL implementation may want to reallocate
> > the storage whenever it chooses, e.g. to gain parallelism, but the
> > export would prevent that.
> I guess this would still apply even with the previous clarification.
> Not familiar with this though, so I honestly can't give you a
> definitive answer whether the wk4wl approach falters here.
> > Also, using dmabufs directly means that we would need really low-
> > level per-platform knowledge, as they are created with driver-
> > specific ioctls.
> I don't agree. The GBM API abstracts this well enough.
I see. I've been told that directly using GBM buffers may still face
some subtle issues: for instance, since the display controller often
allocates out of a fixed pool, we might end up exhausting its quite
limited memory; and since the controller is usually more restrictive
than the GPU about the formats, compression, memory location and other
parameters it can accept, reusing GPU buffers may leave us with
suboptimal configurations that cause extra copies.
Some implementations may even fail to allocate GBM buffers if the DRM
device is not a master device, which I presume is the case for the
WebProcess.
Re-using the Wayland mechanisms would mean that we are guaranteed that
someone else has already dealt with all these subtly annoying,
hardware-specific details. :P
> This is how the nested compositor implementation worked before, even
> if the work was never fully finished. Compared to using the GBM
> platform for rendering in the WebProcess, this approach is proven to
> work for the GTK+ port. Whether it works efficiently is debatable (it
> certainly wasn't efficient a few years back).
What were the issues you encountered?
I'm sure that dealing with two IPC mechanisms will be a pain, but I
fear that going our own route may be even more painful.
Do you have a pointer to the old nested compositor implementation?
> Skimming the eglCreateWaylandBufferFromImageWL implementation in
> Mesa, it appears this would use a dma-buf buffer wherever available
> anyway. 
> > Do you see any issue with the described scenario, or do you believe
> > there is a simpler way to accomplish this?
> As I outlined above, personally I am not that supportive of the
> nested compositor approach.
What's the most annoying issue you can see with the nested compositor
approach? Is it "just" the introduction of another IPC mechanism?
> But beyond choosing the best way to implement this, the bigger
> problem is likely how to use the composition results with the
> GTK+ toolkit.
Wouldn't using a GdkGLContext canvas provide a satisfying answer to
that?
In the future the GTK+ scene graph would even provide the ability to
pass buffers straight through to the main compositor to e.g. have zero-
copy video playback without compromising widget stacking (just like AC
works in WebKit), but that's a relatively orthogonal development.
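For reference, hooking the composited result into the toolkit through a GdkGLContext could look roughly like this GtkGLArea sketch (GtkGLArea is available since GTK+ 3.16). The import_composited_texture() helper is hypothetical; it would wrap the dma-buf to EGLImage to GL texture import:

```c
/* Sketch: drawing the UIProcess composition result inside the GTK+
 * widget tree with a GtkGLArea.  import_composited_texture() is a
 * hypothetical helper wrapping the dma-buf import; only the clear is
 * actually performed here. */
#include <gtk/gtk.h>
#include <epoxy/gl.h>

static gboolean render(GtkGLArea *area, GdkGLContext *context, gpointer data)
{
    /* GLuint tex = import_composited_texture();  hypothetical import */
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw a textured quad with the imported texture ... */
    return TRUE;
}

int main(int argc, char **argv)
{
    if (!gtk_init_check(&argc, &argv)) {
        g_print("no display available\n");
        return 0;
    }
    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *area = gtk_gl_area_new();
    g_signal_connect(area, "render", G_CALLBACK(render), NULL);
    gtk_container_add(GTK_CONTAINER(window), area);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```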