[webkit-reviews] review denied: [Bug 109579] [CoordGfx] Speed up web process rendering with double buffering : [Attachment 187859] Patch

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Tue Feb 12 07:14:51 PST 2013


Noam Rosenthal <noam at webkit.org> has denied Balazs Kelemen
<kbalazs at webkit.org>'s request for review:
Bug 109579: [CoordGfx] Speed up web process rendering with double buffering
https://bugs.webkit.org/show_bug.cgi?id=109579

Attachment 187859: Patch
https://bugs.webkit.org/attachment.cgi?id=187859&action=review

------- Additional Comments from Noam Rosenthal <noam at webkit.org>
View in context: https://bugs.webkit.org/attachment.cgi?id=187859&action=review


This is a very promising optimization, but it needs some more work. Maybe we
can do some of it in stages?
My main issue is that this relies too much on heuristics; maybe we can devise a
more reliable algorithm that is capped by memory rather than triggered by
slowness.

> Source/WebCore/ChangeLog:25
> +	   suspendPainting on the client accordint to the active state.

according

> Source/WebCore/platform/graphics/texmap/coordinated/CoordinatedGraphicsScene.cpp:599
> +    for (size_t i = 0; i < m_committableRenderTasks.size(); ++i)

This is not thread-safe: m_committable*** can be accessed from both
syncRemoteContent and didRenderFrame.
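A minimal sketch of one way to address this, assuming the usual fix of guarding the vector with a mutex held by both accessors (the class shape, mutex name, and task type below are illustrative assumptions, not from the patch):

```cpp
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// Sketch: both the producer path (analogous to syncRemoteContent) and the
// consumer path (analogous to didRenderFrame) take the same lock before
// touching the shared task vector.
class Scene {
public:
    void appendCommittableTask(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(m_taskMutex);
        m_committableRenderTasks.push_back(std::move(task));
    }

    void commitPendingTasks()
    {
        // Move the tasks out under the lock, then run them unlocked so a
        // long-running task cannot stall the other thread.
        std::vector<std::function<void()>> tasks;
        {
            std::lock_guard<std::mutex> lock(m_taskMutex);
            tasks.swap(m_committableRenderTasks);
        }
        for (auto& task : tasks)
            task();
    }

private:
    std::mutex m_taskMutex;
    std::vector<std::function<void()>> m_committableRenderTasks;
};
```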

> Source/WebCore/platform/graphics/texmap/coordinated/CoordinatedGraphicsScene.cpp:676
> +    m_renderQueue.swap(m_committableRenderTasks);

If the web process happens to be quicker than the UI process, we'll lose
messages here.
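One possible shape of a fix, sketched under the assumption that the problem is tasks still pending in the destination vector being displaced by the swap (the function name and task type are hypothetical, not from the patch): append the newly queued tasks instead of swapping.

```cpp
#include <iterator>
#include <vector>

// Sketch: move the newly queued tasks onto the end of the committable list,
// preserving any tasks still waiting there from an earlier, faster frame,
// rather than exchanging the two vectors wholesale.
template<typename Task>
void moveQueuedTasks(std::vector<Task>& committableTasks, std::vector<Task>& renderQueue)
{
    committableTasks.insert(committableTasks.end(),
                            std::make_move_iterator(renderQueue.begin()),
                            std::make_move_iterator(renderQueue.end()));
    renderQueue.clear();
}
```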

> Source/WebKit2/WebProcess/WebPage/CoordinatedGraphics/CoordinatedLayerTreeHost.cpp:372
> +    unsigned elapsed = currentTimeInMilliSeconds() - m_performFlushStartTime;
> +    bool isSlow = elapsed > SlowFrameThresholdMS;
> +    bool shouldUpdate = !((!m_numSlowFrames && !isSlow) || ((m_numSlowFrames == NumberOfSlowFramesThreshold) && isSlow));
> +    if (shouldUpdate) {
> +	   m_numSlowFrames += isSlow ? 1 : -1;
> +	   ASSERT(m_numSlowFrames <= NumberOfSlowFramesThreshold);
> +	   if (!m_useDoubleBuffering && m_numSlowFrames == NumberOfSlowFramesThreshold) {
> +	       m_useDoubleBuffering = true;
> +	       if (!m_layerFlushTimer.isActive())
> +		   m_layerFlushTimer.startOneShot(0);
> +	   } else if (m_useDoubleBuffering && !m_numSlowFrames) {
> +	       m_useDoubleBuffering = false;
> +	       if (canPurgeFrontAtlases)
> +		   m_frontUpdateAtlases.clear();
> +	       else
> +		   m_shouldPurgeFrontAtlases = true;
> +	   }
> +    }

Not sure if this is the right heuristic.
Maybe we should always allow multiple frames, but limit the amount of memory
that can be used by atlases. Think of it like a buffered pipe where we block if
we run out of buffer space. I'm wary of relying on too many heuristics here.
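The buffered-pipe idea could be sketched roughly as follows, assuming a byte budget shared between the two sides; the class, its name, and its API are illustrative assumptions about the suggestion, not code from WebKit:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Sketch: cap total atlas memory instead of using a slowness heuristic.
// The producer (web process side) blocks when the budget is exhausted and
// resumes once the consumer (UI process side) releases an atlas, like a
// bounded pipe that applies back-pressure.
class AtlasBudget {
public:
    explicit AtlasBudget(size_t capacityBytes)
        : m_capacity(capacityBytes) { }

    // Producer side: blocks until enough budget is free.
    void acquire(size_t bytes)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [&] { return m_used + bytes <= m_capacity; });
        m_used += bytes;
    }

    // Consumer side: frees budget and wakes a blocked producer.
    void release(size_t bytes)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_used -= bytes;
        }
        m_condition.notify_all();
    }

    size_t used() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_used;
    }

private:
    mutable std::mutex m_mutex;
    std::condition_variable m_condition;
    size_t m_capacity;
    size_t m_used { 0 };
};
```

With a scheme like this, the number of in-flight frames is bounded by memory rather than by a frame-time threshold, which avoids tuning constants such as SlowFrameThresholdMS.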

