[Webkit-unassigned] [Bug 102994] [META][Qt] Implement Threaded Coordinated Graphics in WebKit1

bugzilla-daemon at webkit.org
Tue Dec 18 08:07:35 PST 2012


https://bugs.webkit.org/show_bug.cgi?id=102994

--- Comment #8 from Simon Hausmann <hausmann at webkit.org>  2012-12-18 08:09:50 PST ---
(In reply to comment #7)
> Wow, thank you for the explanation. Our team does not have a Qt graphics expert like you, so we need more information like this.
> 
> (In reply to comment #6)
> > > Our approach to separating threads would be similar to the current coordinated graphics approach. If the web view is backed by a QQuickItem, the rendering thread draws UI widgets as well as web contents, and synchronizes with the main thread only when updating state.
> > 
> > "_If_ webview is backed by qquickitem" - That is exactly the point. With WebKit1 we do not have that situation, there is currently no support for using WebKit1 with QQuickItem (QtQuick2).
> 
> As I read your comment, QtQuick2 is not an option for coordinated graphics on WK1, because we also need to support Qt 4.8.

Why is that? WebKit trunk does not support Qt 4.8 and instead requires Qt 5.0.

> > > I have not decided yet whether to use a QQuickItem like WK2 does in order to use the rendering thread, or to use something like WK2's RunLoop. We cannot just use a plain thread because we need to run a timer in the separate thread. If we use RunLoop, we draw contents to a texture (FBO) off the main thread and blit the texture to the web view's surface on the main thread. The latter way is harder to implement but is a platform-independent approach. We need to discuss which is proper.
> > 
> > Yeah, I think we do need to discuss this. In my opinion it boils down to the question of where we want to go with WebKit1 in the Qt port. Right now WebKit1 is used to integrate with QWidget and QGraphicsView and WebKit2 is used for the QtQuick2 integration.
> > 
> > Technically we have three rendering code paths to maintain right now:
> > 
> >     (1) The coordinated graphics way with WebKit2 and a tiny bit of private Qt API that allows us to integrate with the QtQuick2 scene graph. This implies the availability of OpenGL and rendering happens in a secondary thread (Qt's render thread that finishes off the frame with a swapbuffers call).
> > 
> >     (2) The QWidget (and regular QGraphicsView) way, where we know we're rendering in software into a given QPainter. On the WebKit side that's done using the software texture mapper and WebKit1. Rendering is required to happen in the main thread and ends up in the (image) backing store of the QWindow/QWidget.
> 
> Now I understand where TextureMapperImageBuffer is used.

Exactly :)
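
For reference, here is a minimal sketch of what path (2) looks like from the application side today, using only the public QtWebKit1 API; the software texture mapper does its work behind QWebFrame::render(). The QImage stands in for the widget's backing store, and a real application would wait for loadFinished() before painting rather than assuming the trivial page is ready:

    #include <QApplication>
    #include <QImage>
    #include <QPainter>
    #include <QWebFrame>
    #include <QWebPage>

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);

        QWebPage page;
        page.setViewportSize(QSize(800, 600));
        page.mainFrame()->setHtml("<h1>Software path</h1>");

        // Everything here runs on the main thread: WebKit1 paints
        // through the software texture mapper into the QPainter's
        // target, a QImage standing in for the QWidget backing store.
        QImage backingStore(page.viewportSize(),
                            QImage::Format_ARGB32_Premultiplied);
        QPainter painter(&backingStore);
        page.mainFrame()->render(&painter);
        painter.end();

        backingStore.save("page.png");
        return 0;
    }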

> >     (3) The way when using QGraphicsView with a QGLWidget as viewport, where we're again required to render in the main thread but we can use OpenGL and therefore use the GL texture mapper - with WebKit1.
> > 
> > 
> > We consciously promote option (1) over the others and would like to keep the effort required to maintain (2) and (3) at a minimum. The way (2) and (3) are tied to the main thread is indeed one of the main reasons for favoring (1).
> > 
> > This bug is effectively suggesting to add a fourth way of rendering that is similar to option (1) but is based on WebKit1 and would potentially be based on a QtQuick2 integration. I would like to see some compelling arguments why we should add this fourth way to the list of supported and maintained rendering code paths for the Qt port.
> > 
> > In other words: Why would somebody choose a potential WebKit1 based QtQuick2 integration over the existing WebKit2 based one?
> 
> As I read your comment, it is obvious that we need to use something like RunLoop to run AC on the rendering thread.
> This plan would not create a fourth option; it improves options (2) and (3).
> 
> In option (2), we will composite content into an image buffer on the rendering thread using TextureMapperImageBuffer, and the QWidget will paint the result into a given QPainter.
> 
> In option (3), we will composite content into a texture on the rendering thread using TextureMapperGL, and the QGraphicsView will paint the resulting texture onto the QGLWidget's surface.

While I agree that this would work, I'm not really sure what the benefit is, apart from code sharing.

The heavy lifting of populating the layers happens in the main thread, and the main application rendering happens there too. Only composing a few layers would happen in the thread. Is that worth it? Especially in terms of OpenGL it means that you have to do swapbuffers twice: once for the FBO rendering in the thread (layer composition) and then again as part of the main rendering cycle.
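
To make the "something like RunLoop" idea concrete, here is a minimal sketch, not actual WebKit code: the Compositor class name and the 16 ms interval are made up for illustration. The point is that moving an object to a QThread whose event loop is running lets its timer fire off the main thread, which is the Qt analogue of driving composition from a RunLoop on the rendering thread. The FBO composition and the final blit are only indicated by comments:

    #include <QCoreApplication>
    #include <QDebug>
    #include <QThread>
    #include <QTimer>

    // Hypothetical compositor object. Because it is moved to the
    // render thread, its timer fires from that thread's event loop.
    // Requires moc (Q_OBJECT).
    class Compositor : public QObject {
        Q_OBJECT
    public slots:
        void start()
        {
            // Created in the render thread, so the timer has render
            // thread affinity.
            QTimer* flushTimer = new QTimer(this);
            connect(flushTimer, SIGNAL(timeout()), this, SLOT(flushLayers()));
            flushTimer->start(16); // roughly one flush per frame
        }
        void flushLayers()
        {
            qDebug() << "compositing on" << QThread::currentThread();
            // ... composite the layer tree into an FBO or image buffer
            //     here; the main thread would later blit the result to
            //     the web view's surface ...
        }
    };

    int main(int argc, char** argv)
    {
        QCoreApplication app(argc, argv);

        QThread renderThread;
        Compositor compositor;
        compositor.moveToThread(&renderThread);
        QObject::connect(&renderThread, SIGNAL(started()),
                         &compositor, SLOT(start()));
        renderThread.start(); // QThread::exec() runs the event loop

        return app.exec();
    }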

And just to make it clear: I am indeed very interested in making (2) and (3) easier to maintain, provided we don't regress in performance.
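
And for completeness, here is how an application sets up path (3) today, again with public QtWebKit1 API: switching the QGraphicsView viewport to a QGLWidget is what enables the GL texture mapper, while rendering stays on the main thread:

    #include <QApplication>
    #include <QGLWidget>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QGraphicsWebView>
    #include <QUrl>
    #include <QWebPage>
    #include <QWebSettings>

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);

        QGraphicsScene scene;
        QGraphicsWebView* webView = new QGraphicsWebView;
        webView->page()->settings()->setAttribute(
            QWebSettings::AcceleratedCompositingEnabled, true);
        webView->load(QUrl("http://www.webkit.org/"));
        webView->resize(800, 600);
        scene.addItem(webView);

        QGraphicsView view(&scene);
        // A QGLWidget viewport gives us an OpenGL surface, so the GL
        // texture mapper can be used; rendering is still required to
        // happen on the main thread.
        view.setViewport(new QGLWidget);
        view.setViewportUpdateMode(QGraphicsView::FullViewportUpdate);
        view.resize(800, 600);
        view.show();

        return app.exec();
    }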

> You may ask why we would make big changes to options (2) and (3) even though we have decided to support them only minimally.
> 
> There are two reasons.
> 1. We will not write a new implementation; we will just reuse coordinated graphics in WK1. It is refactoring rather than new implementation.
> 
> 2. It makes it possible to support options (2) and (3) with minimal effort.
> We maintain some legacy code that only options (2) and (3) use.
> For example, only (2) and (3) use TextureMapperTiledBackingStore. It causes Bug 105158, which we cannot fix with minimal effort.
> The gap between (1) and (2)/(3) is so big that it is hard to maintain options (2) and (3). Whenever we add a new feature to (1), it is a burden to consider how to handle (2) and (3).
> 
> This project makes (2) and (3) reuse the coordinated graphics code from (1).
> After it is complete, we will maintain only coordinated graphics; all legacy code will be removed.
> 
> Although the refactoring takes some effort, we can then actually support (2) and (3) with minimal effort.

I do see the advantage of getting rid of TextureMapperTiledBackingStore, but I do wonder whether the result ends up slower and perhaps more complex. No'am, what is your take on all this?

> In addition, this project is not only about improving (2) and (3) in Qt.
> After that, we will apply threaded coordinated graphics to GTK WK1 and WK2.

How do you plan to integrate with their application rendering? Also through an intermediate surface where just the layer composition happens in a separate thread?
