[Webkit-unassigned] [Bug 70099] OpenCL implementation of W3C Filter Effects Master Bug

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Tue Oct 18 09:33:01 PDT 2011


https://bugs.webkit.org/show_bug.cgi?id=70099





--- Comment #13 from Dirk Schulze <krit at webkit.org>  2011-10-18 09:33:00 PST ---
(In reply to comment #12)
> (In reply to comment #11)
> > (In reply to comment #10)
> > > It sounds like what's needed is an abstraction layer at the platform/graphics level.  Then we could have OpenCL, GLES, GraphicsContext3D or other implementations of the filters.
> > We already started that with the ARM implementation; I'd just continue that way.
> 
> OK, so I'm guessing your plan is to essentially put the OpenCL-specific declarations directly in the header file (e.g., FEGaussianBlur.h), with separate concrete implementations in separate files (e.g., FEGaussianBlurOpenCL.cpp or whatever), but as non-virtual member functions, not as subclasses?  That is consistent with the GraphicsContext approach.  It's fairly lightweight in terms of lines of code/files, but it does have the disadvantage of not being able to do runtime fallbacks in a clean way, since there's no virtual interface to do runtime dispatch on (the specific fallback path would have to be decided at compile time).  I think that's probably ok for now, though.
> 
> Unlike the Neon case which just takes ByteArrays, presumably the platformApply() functions would need to take some OpenCL-specific datatype, so this code would have to be #ifdef'ed.  Either that, or some abstraction of the images should be used.  Would it be possible or make sense to use Image and/or ImageBuffer here?  Maybe even for the CPU path?  In Chrome we already have code to back these with GPU resources as sources and sinks, so it would really make life easier for us.  Failing that, we could do that in a different flavour of platformApply() which uses Image and ImageBuffer.
> 
> > > This could either be by new methods on GraphicsContext, or a new interface entirely.  A new interface would have the advantage of being more modular, so ports could choose a filter backend independently of the choice of GraphicsContext backend.
> > We would just add an apply method and various platformApply functions that get called. However, we can't divide the individual ports easily, since we might want to fall back to other implementations (and, in the end, to software rendering).
> 
> Not sure I completely understand this.  There's already a virtual apply method; would we need to subclass here?  Or #ifdef different versions of it to handle the different fallback paths?
> 
> > > It might also be possible to refactor the existing FilterEffect hierarchy to have multiple implementations.  That would be a bit tricky, though, since there are some dependencies on ByteArrays, and other CPU-specific details even in the base class which would have to be abstracted away.
> > That's point one on my list and doesn't need a lot of refactoring. We just need to make the apply function independent of the pixel buffers/imageBuffers. Not a big deal. Also it would be the first step for every implementation: OpenCL, OpenGL (WebGL), CI. That's why I'm starting here, independent of the further discussion. I'll upload my basic idea of the OpenCL implementation to this bug in a couple of days, just to demonstrate how the different implementations can interact with filters.
> 
> That would be great.  I'm definitely interested in doing an alternate implementation, so having your approach as a reference would be helpful.

I'm not happy publishing something in such an early state. The patch that I'll upload is just a proof of concept: it won't compile or work at the moment, is not the final design, and might contain errors. I only prototyped it to answer your questions.

In general we have a function called apply() in FilterEffect. This function doesn't need to be virtual anymore. For the prototype I take FEOffset as an example:

void FEOffset::apply()
{
    if (hasResult())
        return;
    FilterEffect* in = inputEffect(0);
    in->apply();
    if (!in->hasResult())
        return;
    determineAbsolutePaintRect();

    // systemSupportsHWAcceleration():
    // * must be set once before calling FilterEffect::apply()
    // * can be static
    // * is an enumeration of the HW acceleration back ends the system supports.
#if USE(OPENCL)
    if (systemSupportsHWAcceleration() == OpenCLSupport) {
        platformApplyOpenCL();
        return;
    }
#endif
#if USE(SOMETHING_ELSE_LIKE_OPENGL)
    if (systemSupportsHWAcceleration() == /* other HW acceleration back end */) {
        ...
        return;
    }
#endif
    platformApplyGeneric();
}

As you can see, there is an enumeration that contains all HW acceleration implementations supported by the system. This needs to be checked once before applying the filter and can be static. The OpenCL implementation does not use the intermediate ImageBuffers or ByteArrays. Instead I introduced a FilterOpenCL object that manages all memory allocations on the device. The platform-specific apply functions need to be virtual and contain the implementation-aware code (which can be moved to external files like FEOffsetOpenCL.cpp). For OpenCL, the function either creates the kernels (as for FEColorMatrix) or copies data from one mem object on the device to a new mem object on the same device. No data is copied over the CPU if the GPU was chosen. For OpenGL, platformApplyOpenGL would create a shader and might use FilterOpenGL for data and shader management.

If no HW acceleration is supported, we fall back to software rendering with platformApplyGeneric().

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.
