[Webkit-unassigned] [Bug 44926] Multiple accelerated 2D canvases should be able to use the same GraphicsContext3D

bugzilla-daemon at webkit.org
Thu Sep 2 13:15:21 PDT 2010


https://bugs.webkit.org/show_bug.cgi?id=44926

--- Comment #28 from James Robinson <jamesr at chromium.org>  2010-09-02 13:15:20 PST ---
(In reply to comment #24)
> (In reply to comment #23)
> >...
> > > How are you creating the PlatformLayer? Doesn't the SharedContext need to vend that as well? Did I miss the call that does that?
> > 
> > The PlatformLayer is created by the platform-specific implementation of GraphicsLayer::setContentsToCanvas2D().  It's made aware of the DrawingBuffer at creation time and whenever the compositing layer tree is regenerated.  PlatformLayer creation and initialization is inherently platform-specific (as the name of the type would imply).
> 
> I see that. That seems less than ideal though because it is very specific to Canvas 2D. It also has different ownership than WebGL, where the GraphicsContext3D owns the PlatformLayer. Seems like it would be better if SharedContext3D just had a factory for PlatformLayers that you could then send along to GraphicsLayer::setContentsToCanvas(). I don't see any logic in setContentsToCanvas2D that makes it necessary for GraphicsLayer to know it's a 2D Canvas, or to have any knowledge at all about DrawingBuffer. Committing the rendered image to the PlatformLayer will still require a call to publishToPlatformLayer(), and that all happens up in the Canvas logic.
> 
> I think the closer we can keep this to the WebGL model the better. We of course need to deal with the optimization of a single shared context, but that can all be done up in the SharedContext3D like you have in your latest patch (which looks good).
> 
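
If I'm reading the suggestion right, the shape would be roughly the following (a rough sketch with hypothetical signatures, just to make sure we're talking about the same thing):

    // Hypothetical sketch of the proposed factory model; only
    // createPlatformLayer() is new here, and its signature is illustrative.
    class PlatformLayer;

    class SharedContext3D {
    public:
        // Vend a compositing layer for a 2D canvas, analogous to the way
        // GraphicsContext3D owns/creates the layer for WebGL.
        PlatformLayer* createPlatformLayer();
    };

    // Canvas setup would then look something like:
    //   PlatformLayer* layer = sharedContext->createPlatformLayer();
    //   graphicsLayer->setContentsToCanvas(layer);
    // and committing a rendered frame would still go through:
    //   drawingBuffer->publishToPlatformLayer(layer);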

I think this could work.  I don't think it would be an improvement, however.  The WebGL model is pretty inflexible since it ties the compositing logic for the element to the behavior of the element.  I think it's cleaner for the canvas-specific logic (e.g. SharedContext3D, CanvasRenderingContext2D, etc) to be concerned with taking commands from JavaScript and producing rendering results in a DrawingBuffer, without worrying about compositing at all.  When the layer is composited, the compositor only needs to know where the rendering results are; it doesn't need to be aware of the rest of the canvas stack.  The DrawingBuffer has to be aware of both, of course, so that it can take the rendered results and put them in a layer.
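
To make the separation I'm describing concrete, here is a minimal sketch (the interfaces are hypothetical and only meant to show who needs to know about what):

    // Hypothetical sketch; method names are illustrative, not the actual API.
    class PlatformLayer;

    class DrawingBuffer {
    public:
        // Canvas side: SharedContext3D / the 2D rendering context render into
        // the buffer's backing texture and never see the compositor.
        unsigned colorBufferTextureId() const;

        // Compositor side: copy the rendered results into whatever layer the
        // compositor chose to create for this element.  This is the only point
        // of contact between the two halves.
        void publishToPlatformLayer(PlatformLayer*);
    };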

One concrete problem with the WebGL model is that, since the GraphicsContext3D owns the PlatformLayer directly, the compositor can't decide when to create or destroy the PlatformLayer itself.  If a page is loaded in a background tab, it would be really handy for the compositor to ditch all the PlatformLayers and their associated backing stores and just let each canvas continue rendering to its DrawingBuffer.
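
As a sketch of what that would allow if the compositor owned the layer lifetime (hypothetical code, just to illustrate the ownership direction):

    // Hypothetical compositor-side logic; class and method names are made up.
    class HypotheticalCompositor {
    public:
        void setVisible(bool visible)
        {
            if (!visible) {
                // Background tab: drop the layers and their backing stores.
                // Each canvas keeps rendering into its DrawingBuffer.
                destroyAllCanvasLayers();
                return;
            }
            // Foreground again: recreate the layers and republish the latest
            // DrawingBuffer contents into them.
            recreateCanvasLayersAndRepublish();
        }
    private:
        void destroyAllCanvasLayers();
        void recreateCanvasLayersAndRepublish();
    };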

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
