[Webkit-unassigned] [Bug 45069] VideoLayerChromium uses GPU YUV to RGB color conversion

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Mon Sep 20 12:42:22 PDT 2010


https://bugs.webkit.org/show_bug.cgi?id=45069





--- Comment #9 from Vangelis Kokkevis <vangelis at chromium.org>  2010-09-20 12:42:22 PST ---
(From update of attachment 67456)
View in context: https://bugs.webkit.org/attachment.cgi?id=67456&action=prettypatch

Looks pretty good. Just a few inline comments.

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:103
> +        "  gl_FragColor = vec4(rgb, alpha);                   \n"

The compositor assumes that layers use premultiplied alpha, so all the color channels need to be multiplied by the alpha value. You can look at ContentLayerChromium for an example.
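Something like this should do it (sketch only, mirroring the premultiply done in ContentLayerChromium's shader):

        "  gl_FragColor = vec4(rgb * alpha, alpha);           \n"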

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:113
> +        "  gl_FragColor = vec4(texColor.x, texColor.y, texColor.z, texColor.w); \n"

Shouldn't the alpha uniform be applied here as well?
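I.e., perhaps something like:

        "  gl_FragColor = texColor * alpha;                   \n"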

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:184
> +    if (m_yuvTextures)

Here you should check that each individual texture is non-zero before trying to delete it.
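Something along these lines (a sketch, assuming m_yuvTextures holds the three GLuint ids):

    for (unsigned plane = 0; plane < 3; ++plane) {
        if (m_yuvTextures[plane]) { // an id of 0 means the texture was never created
            GLC(glDeleteTextures(1, &m_yuvTextures[plane]));
            m_yuvTextures[plane] = 0;
        }
    }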

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:209
> +    IntSize requiredTextureSize(frame->stride(VideoFrameChromium::cYPlane), frame->height());

What about the case of an RGBA frame? Will this give you the right stride?

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:248
> +            GLC(glDeleteTextures(3, static_cast<const GLuint*>(m_yuvTextures)));

Same as before: make sure that each individual texture ID is non-zero before deleting it.

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:267
> +    IntSize uvDimensions(frame->stride(VideoFrameChromium::cUPlane), frame->height() / 2);

Does the integer division by 2 work ok for odd heights?
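For example, with a height of 5, height / 2 gives 2, but the chroma planes of a 4:2:0 frame cover ceil(5 / 2) = 3 rows, so you may need something like (frame->height() + 1) / 2 here.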

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:288
> +    ASSERT(frame->stride(VideoFrameChromium::cYPlane) == m_allocatedTextureSize.width());

Should this be VideoFrameChromium::cRGBPlane?

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:294
> +            glDeleteTextures(1, &m_rgbaTexture);

If you already have a texture ID, you can just call allocateTexture() with it and adjust it to its new size. Is there a reason why you delete and re-create it?

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:295
> +        m_rgbaTexture = layerRenderer()->createLayerTexture();

I wonder if the logic here could be simplified by making allocateTexture() do only the allocation (passing a null pointer for the data) and then always calling updateTexture(). That way, if updateTexture() does something different in the future, you won't also have to change how allocateTexture() sends its bits to the texture.
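Roughly (just a sketch -- the parameter lists below are placeholders, not the real signatures):

    // Create the texture id once, then (re)allocate storage with no data.
    if (!m_rgbaTexture)
        m_rgbaTexture = layerRenderer()->createLayerTexture();
    allocateTexture(m_rgbaTexture, requiredTextureSize, 0 /* data: allocate only */);
    // All pixel uploads then go through the single updateTexture() path.
    updateTexture(m_rgbaTexture, frame);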

> WebCore/platform/graphics/chromium/VideoLayerChromium.cpp:317
> +        ASSERT(glGetError() == GL_NO_ERROR);

Any reason why you're not using the GLC() macro for these two GL calls, rather than calling glGetError() explicitly?
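I.e., GLC(glSomeCall(...)) -- with glSomeCall standing in for whichever calls these are -- since the macro already performs the error check.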

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.


