[webkit-gtk] WebKit2GTK+ and compositing in the UIProcess
zan at falconsigh.net
Wed Dec 9 05:55:31 PST 2015
On Wed, Dec 9, 2015, at 12:45 PM, Emanuele Aina wrote:
> zan at falconsigh.net wrote:
>
> [CC'ing Daniel and Pekka who definitely know more about this stuff than
> me]
>
>
> > > Differently from Žan's WebKitForWayland approach, where he just needs
> > > to export the handle for the final surface buffer to the WebProcess,
> > > which would then use it as a GL render target, we need to be able
> > > to use the layer buffers as GL render targets in the WebProcess and
> > > as textures in the UIProcess.
> >
> > The rendering target is actually set up in the WebProcess, simply by
> > creating a gbm_surface object and using that as the native window for
> > the EGLSurface.[1]
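
(For reference, that WebProcess-side setup looks roughly like the
following. This is only a sketch: the render node path, the dimensions
and the config selection are placeholders, and all error handling is
omitted.)

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>

/* Open a render node and wrap it in a GBM device. */
int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
struct gbm_device* device = gbm_create_device(fd);

/* The gbm_surface acts as the native window for the EGLSurface. */
struct gbm_surface* surface = gbm_surface_create(device, 800, 600,
    GBM_FORMAT_ARGB8888, GBM_BO_USE_RENDERING);

EGLDisplay display = eglGetDisplay((EGLNativeDisplayType)device);
eglInitialize(display, NULL, NULL);

EGLConfig config;
/* ... pick a config matching GBM_FORMAT_ARGB8888 via eglChooseConfig ... */

EGLSurface eglSurface = eglCreateWindowSurface(display, config,
    (EGLNativeWindowType)surface, NULL);
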
>
> Ah, thanks for clearing up my confusion. :D
>
> > After the rendering and the buffer swap call, the front buffer is
> > locked and the information about the returned gbm_bo object is passed
> > to the UIProcess.
> > There the buffer object is imported and used as appropriate for the
> > backend that's been chosen.[2][3]
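
(In terms of API calls, that amounts to roughly the following. Again
only a sketch: sendBufferToUIProcess() is a stand-in for whatever IPC
actually carries the data, the import assumes a single plane, and the
KHR/OES entry points would normally be fetched through
eglGetProcAddress.)

#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* WebProcess: after the buffer swap, lock the front buffer and export
 * it as a dma-buf. */
eglSwapBuffers(display, eglSurface);

struct gbm_bo* bo = gbm_surface_lock_front_buffer(surface);
int dmabufFd = gbm_bo_get_fd(bo);
uint32_t width = gbm_bo_get_width(bo);
uint32_t height = gbm_bo_get_height(bo);
uint32_t stride = gbm_bo_get_stride(bo);
uint32_t format = gbm_bo_get_format(bo);

sendBufferToUIProcess(dmabufFd, width, height, stride, format);

/* Release the buffer back to the surface once the UIProcess is done
 * with it. */
gbm_surface_release_buffer(surface, bo);

/* UIProcess: import the dma-buf as an EGLImage and bind it to a
 * texture (EGL_EXT_image_dma_buf_import). uiDisplay is the UIProcess's
 * own EGLDisplay. */
EGLint attribs[] = {
    EGL_WIDTH, (EGLint)width,
    EGL_HEIGHT, (EGLint)height,
    EGL_LINUX_DRM_FOURCC_EXT, (EGLint)format,
    EGL_DMA_BUF_PLANE0_FD_EXT, dmabufFd,
    EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
    EGL_DMA_BUF_PLANE0_PITCH_EXT, (EGLint)stride,
    EGL_NONE
};
EGLImageKHR image = eglCreateImageKHR(uiDisplay, EGL_NO_CONTEXT,
    EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
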
> >
> > > Exporting textures from the UIProcess as dmabufs and using them as
> > > render targets in the WebProcess is not guaranteed to work in an
> > > efficient manner, as the GL implementation may want to reallocate
> > > the storage at any time, e.g. to gain parallelism, but exporting
> > > would prevent that.
> >
> > I guess this would still apply even with the previous clarification.
> > I'm not familiar with this though, so I honestly can't give you a
> > definitive answer on whether the wk4wl approach falters here.
> >
> > > Also, using dmabufs directly means that we would need really low-
> > > level per-platform knowledge, as they are created with driver-
> > > specific ioctls.
> >
> > I don't agree. The GBM API abstracts this well enough.
>
> I see. I've been told that directly using GBM buffers may still face
> some subtle issues: for instance, since the display controller often
> allocates out of a fixed pool, we might end up exhausting its quite
> limited memory, and since the controller is usually more restrictive
> than the GPU in terms of the formats it can accept, compression, memory
> location and other parameters, reusing GPU-allocated buffers may lead
> to suboptimal configurations that cause extra copies.
>
Are all of these actual problems today? How are they addressed in the
implementation of the Wayland platform in Mesa? I assume they could be
addressed in a similar way in libgbm.
> Some implementations may even fail to allocate GBM buffers if the DRM
> device is not a master device, which I presume is the case for the
> WebProcess.
>
I assume authentication would be required in that case, essentially an
equivalent of the wl_drm_authenticate() protocol call.
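
In libdrm terms that would be something like the following (a sketch
only; webProcessFd and masterFd are placeholders, and the IPC between
the two processes is reduced to comments):

#include <xf86drm.h>

/* WebProcess (non-master): obtain a magic token for its DRM fd. */
drm_magic_t magic;
drmGetMagic(webProcessFd, &magic);
/* ... send `magic` over IPC to whichever process holds DRM master ... */

/* UIProcess (or the compositor holding DRM master): authenticate it. */
drmAuthMagic(masterFd, magic);
/* ... after the reply, the WebProcess fd is authenticated and can be
 * used to allocate buffers ... */
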
> Re-using the Wayland mechanisms would mean that we are guaranteed that
> someone else already had to deal with all these subtly annoying,
> hardware-specific details. :P
>
>
> > This is how the nested compositor implementation worked before, even
> > if the work was never fully finished. Compared to using the GBM
> > platform for rendering in the WebProcess, this approach is proven to
> > work for the GTK+ port. Whether it works efficiently is debatable (it
> > certainly wasn't efficient a few years back).
>
> What were the issues you encountered?
> I'm sure that dealing with two IPC mechanisms will be a pain, but I
> fear that going our own route may be even more painful.
>
> Do you have a pointer to the old nested compositor implementation?
>
This is the meta bug:
https://bugs.webkit.org/show_bug.cgi?id=115803
> > Skimming the eglCreateWaylandBufferFromImageWL implementation in
> > Mesa, it appears this would use a dma-buf buffer wherever available
> > anyway. [4]
>
> I see.
>
> > > Do you see any issue with the described scenario, or do you believe
> > > there are simpler ways to accomplish this?
> >
> > As I outlined above, personally I am not that supportive of the
> > nested compositor approach.
>
> What's the most annoying issue you can see with the nested compositor
> approach?
>
> Is "just" the the introduction of another IPC mechanism?
>
To paraphrase, you end up relying on the whole tractor when you just
need the plow.
> > But beyond choosing the best way to implement this, the bigger
> > problem is likely how to use the composition results with the
> > GTK+ toolkit.
>
> Wouldn't using a GdkGLContext canvas provide a satisfying answer to
> that?
>
It would, but we would then have to jump through a bunch of hoops just
to present the composited results correctly.
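
For illustration, the presentation side could look roughly like this,
assuming the composited result has already been imported as a GL texture
into the GdkGLContext; getting it there (e.g. via an EGLImage) is
precisely where the hoops are:

#include <gtk/gtk.h>
#include <epoxy/gl.h>

/* Draw the composited texture whenever the GtkGLArea asks for a frame.
 * compositedTexture is a placeholder for the imported texture id. */
static gboolean
render_web_view(GtkGLArea* area, GdkGLContext* context, gpointer data)
{
    GLuint compositedTexture = GPOINTER_TO_UINT(data);

    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);

    /* ... bind compositedTexture and draw a widget-sized quad ... */
    (void)compositedTexture;
    return TRUE;
}

/* In the web view widget setup (compositedTexture being the placeholder
 * texture id mentioned above): */
GtkWidget* glArea = gtk_gl_area_new();
g_signal_connect(glArea, "render", G_CALLBACK(render_web_view),
                 GUINT_TO_POINTER(compositedTexture));
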
> In the future the GTK+ scene graph would even provide the ability to
> pass through buffers straight to the main compositor to e.g. have zero-
> copy video playback without compromising widget stacking (just like AC
> works in WebKit), but that's a relatively orthogonal development.
>
> > [1]
> > https://github.com/WebKitForWayland/webkit/blob/master/Source/WPE/Source/Graphics/GBM/RenderingBackendGBM.cpp
> > [2]
> > https://github.com/WebKitForWayland/webkit/blob/master/Source/WPE/Source/ViewBackend/DRM/ViewBackendDRM.cpp
> > [3]
> > https://github.com/WebKitForWayland/webkit/blob/master/Source/WPE/Source/ViewBackend/Wayland/ViewBackendWayland.cpp
> > [4]
> > http://cgit.freedesktop.org/mesa/mesa/tree/src/egl/drivers/dri2/platform_wayland.c#n765
>
> Thanks!
>
> --
> Emanuele Aina
> www.collabora.com