[Webkit-unassigned] [Bug 26467] Cross-platform image system architecture cleanup metabug

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Wed Jun 17 11:40:18 PDT 2009


https://bugs.webkit.org/show_bug.cgi?id=26467





------- Comment #11 from pkasting at google.com  2009-06-17 11:40 PDT -------
(In reply to comment #10)
> Making it a runtime choice would allow us to pick the format that is
> most suitable for the target device at run time. But I'm afraid that
> could affect performance, unless we take care to structure all the
> pixel-walking code like this:
> case format A:
>    for each pixel {
>    }
> case format B:
>    for each pixel {
>    }
> ...

I'm skeptical; we're already doing nontrivial work on every pixel, so
it's not clear to me that adding an if (or perhaps writing some very
clever algorithmic code) would have a measurable perf impact.
Obviously, testing would say for sure.
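
For concreteness, here's the kind of thing I mean -- just a sketch with
made-up names (PixelFormat, processRGBA32Pixel, and so on), not actual
WebKit code: either the format switch is hoisted out of the loop, or a
well-predicted branch runs once per pixel.

#include <stddef.h>
#include <stdint.h>

// Hypothetical sketch, not actual WebKit code; all names are made up.
enum PixelFormat { FormatRGBA32, FormatRGB565 };

// Stand-ins for whatever per-pixel work the decoder really does.
static inline void processRGBA32Pixel(uint32_t& p) { p |= 0xFF000000u; }
static inline void processRGB565Pixel(uint32_t& p) { p &= 0x0000FFFFu; }

// (a) Per-format loops: the switch runs once, each loop body is
// specialized for one format.
void processHoisted(PixelFormat format, uint32_t* pixels, size_t count)
{
    switch (format) {
    case FormatRGBA32:
        for (size_t i = 0; i < count; ++i)
            processRGBA32Pixel(pixels[i]);
        break;
    case FormatRGB565:
        for (size_t i = 0; i < count; ++i)
            processRGB565Pixel(pixels[i]);
        break;
    }
}

// (b) Per-pixel branch: one loop, one perfectly predictable if per
// pixel.  Whether that branch is measurable next to the real decoding
// work is a profiling question.
void processPerPixel(PixelFormat format, uint32_t* pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        if (format == FormatRGBA32)
            processRGBA32Pixel(pixels[i]);
        else
            processRGB565Pixel(pixels[i]);
    }
}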

> > > 6. We also did some work on resource caching. We've implemented
> > > SharedBuffer in our own way, so that the buffer doesn't need to be a
> > > flat Vector. Instead, it can be a series of segments. See the
> > > definition of OptimizedBuffer. We also made all image decoders (and
> > > other modules that use text decoders) work with a segmented source.
> > > This is currently only for WINCE & TORCHMOBILE, but we think it can
> > > be helpful for other platforms, too.
> > 
> > This is the most confusing bit to me.  There's already an image cache
> > in WebKit that can hang onto or discard decoded image data.  And
> > there's also code that throws away unneeded decoded data from large
> > animated GIFs.  Is this work you've done separate from that, or
> > integrated with it?  I worry about having multiple isolated caching
> > systems, since a single system seems like it would make better
> > decisions.  I will have to look at precisely how you've implemented
> > this.
> > 
> SharedBuffer is used for raw resource data (encoded data). The reason
> we allow a segmented buffer is that the memory situation on mobile
> devices is very bad, and allocating one big block of memory can fail.
> Also, it takes CPU time to resize the Vector. I guess
> JSC::SegmentedVector would be a good utility class for implementing
> OptimizedBuffer easily.

OK, so basically you're trying to decode images where perhaps not even
one frame can fit into memory at once?  Although if that were the case,
I'm not sure how the full image would get drawn unless the drawing loop
were changed... or maybe what you're saying is that allocating the
memory as a single chunk fails (due to fragmentation) but would succeed
when broken into smaller pieces?
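
To make sure I understand the segmented idea, here's a rough sketch of
what I imagine it looks like -- made-up class and names, since I haven't
looked at your OptimizedBuffer implementation yet.  Appending into
fixed-size segments means no single allocation is ever large, and
growing never copies existing bytes:

#include <string.h>
#include <vector>

// Purely illustrative sketch of a segmented buffer; not the actual
// OptimizedBuffer code.  (Copying is not supported -- it would
// double-delete the segments -- and is omitted for brevity.)
class SegmentedBuffer {
public:
    static const size_t kSegmentSize = 4096;

    SegmentedBuffer() : m_size(0) { }

    ~SegmentedBuffer()
    {
        for (size_t i = 0; i < m_segments.size(); ++i)
            delete[] m_segments[i];
    }

    // Append into the current segment, allocating a new fixed-size
    // segment whenever the previous one fills up.
    void append(const char* data, size_t length)
    {
        while (length) {
            size_t offsetInSegment = m_size % kSegmentSize;
            if (!offsetInSegment)
                m_segments.push_back(new char[kSegmentSize]);
            size_t bytesToCopy = kSegmentSize - offsetInSegment;
            if (bytesToCopy > length)
                bytesToCopy = length;
            memcpy(m_segments.back() + offsetInSegment, data, bytesToCopy);
            data += bytesToCopy;
            length -= bytesToCopy;
            m_size += bytesToCopy;
        }
    }

    // Consumers (e.g. an image decoder) walk the data segment by
    // segment instead of expecting one flat pointer: returns how many
    // contiguous bytes are readable at 'position'.
    size_t getSomeData(const char*& data, size_t position) const
    {
        if (position >= m_size)
            return 0;
        size_t segment = position / kSegmentSize;
        size_t offset = position % kSegmentSize;
        data = m_segments[segment] + offset;
        size_t available = kSegmentSize - offset;
        size_t remaining = m_size - position;
        return available < remaining ? available : remaining;
    }

    size_t size() const { return m_size; }

private:
    std::vector<char*> m_segments;
    size_t m_size;
};

If the allocation-failure explanation is the right one, that's exactly
what a scheme like this buys: each allocation is only kSegmentSize
bytes, so address-space fragmentation matters much less.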


-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.


