[Webkit-unassigned] [Bug 33178] Segmented SharedBuffer: SharedBuffer doesn't have to be a flat memory block

bugzilla-daemon at webkit.org
Mon Jan 4 14:27:05 PST 2010


https://bugs.webkit.org/show_bug.cgi?id=33178

--- Comment #8 from Yong Li <yong.li.webkit at gmail.com>  2010-01-04 14:27:05 PST ---
(In reply to comment #7)
> (In reply to comment #6)
> > (In reply to comment #3)
> > > (From update of attachment 45821 [details])
> > > Looks OK. But with no clients using the getSomeData function, it seems
> > > like it's scaffolding for some future work, and functions like getSomeData
> > > are not tested. And this change will make things less efficient until the
> > > major clients adopt it.
> > 
> > I have 3 more patches that make all the script decoders, the JPEG decoder,
> > and the PNG decoder support decoding from a segmented SharedBuffer. I will
> > post them soon.
> 
> I think that means that after your work, image decoding on Mac OS X, iPhone,
> and some other platforms will be less efficient than it is today, because the
> image decoder in question is not a script decoder, JPEG decoder, or PNG
> decoder.
> 
I'm still not sure why it would be less efficient. Even if we only merge the
segments into one flat, contiguous buffer at the very end, we still save
memory copies along the way. Also, it shouldn't be difficult to make image
decoding work on segments: the resource arrives from the network in fragments
anyway, and a browser must already be able to decode an incomplete image
source and resume the unfinished decoding job when more data comes in.
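
For illustration, here is a rough sketch of how a decoder could walk the
segments through getSomeData instead of calling data() (which forces the flat
merge). consumeSegment() is a hypothetical decoder hook, not WebKit API;
getSomeData is the accessor this patch adds:

    #include "SharedBuffer.h"

    void consumeSegment(const char*, unsigned); // hypothetical decode step

    // Sketch only: feed each segment of a segmented SharedBuffer to an
    // incremental decoder without merging into one flat allocation.
    static void decodeFromSegments(const WebCore::SharedBuffer& buffer)
    {
        const char* segment;
        unsigned position = 0;
        // getSomeData returns the length of the segment starting at
        // `position`, or 0 once we have run past the end of the buffer.
        while (unsigned length = buffer.getSomeData(segment, position)) {
            consumeSegment(segment, length);
            position += length;
        }
    }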

> > > Do you know what adding this temporary extra work does to page loading time? I
> > > don't want to get stuck with that.
> > 
> > Probably it's because it doesn't need extra memory copying. But I'm not
> > sure. I did the test just because I wanted to make sure it doesn't slow
> > down desktop browsers.
> 
> Sorry, I don't understand. This patch will allocate separate segments and
> then merge them later, which will make things slower for clients that use it
> the old way, and will require allocating two copies of everything at one
> point.
> 
> You say "did the test". What test did you do?
> 
Even with the old solution, when the buffer grows it has to allocate two
copies of everything and copy all the bytes from one to the other.
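
To make that concrete, here is a hypothetical sketch of flat-buffer growth
(illustration only, not the actual Vector implementation): the old and the
new block coexist while every byte received so far is copied across:

    #include <cstddef>
    #include <cstring>

    // Sketch of growing a flat buffer past its capacity.
    void appendFlat(char*& data, std::size_t& size, std::size_t& capacity,
                    const char* bytes, std::size_t length)
    {
        if (size + length > capacity) {
            capacity = (size + length) * 2;   // geometric growth
            char* grown = new char[capacity]; // second copy exists now
            std::memcpy(grown, data, size);   // copy everything so far
            delete[] data;                    // old block freed only here
            data = grown;
        }
        std::memcpy(data + size, bytes, length);
        size += length;
    }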

The test I did was with the other patches applied, the ones that make many
decoders support segmented decoding. In more detail:
1. Run the HTML loading test in iBench once and ignore the result (this warms
the cache).
2. Run the HTML loading test in iBench three times in a row and record the
results.
I know the resources are actually loaded from the disk cache, but done this
way the results are very stable. When I did "Reset Safari" before every
iBench run, I got very unstable, useless results.

> > > This patch also changes behavior when running out of memory. The new code will
> > > abort when running out of memory, whereas the old code would drop data. I think
> > > that probably should be changed in a separate patch rather than as a side
> > > effect of this one.
> > 
> > The current patch calls fastMalloc, and I remember fastMalloc already
> > calls abort() on OOM. In the future, the allocation of segments could be
> > done in other ways for better performance. We could directly allocate a
> > page of virtual memory, although I'm not sure whether that helps.
> 
> Yes, that's just the problem. The old code calls Vector::append, which does not
> call abort. There is tryFastMalloc, which does not call abort. You could use
> that and keep the semantics the same. Or you could change the semantics if you
> think there is a good reason.

But Vector doesn't call tryFastMalloc; instead, it calls fastMalloc in
allocateBuffer(). Have I missed anything?
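
For reference, a minimal sketch of the non-aborting path being suggested. It
assumes the WTF::tryFastMalloc variant whose result must be unpacked with
getValue(); the exact signature may differ between revisions, and
allocateSegment() is a hypothetical helper, not part of this patch:

    #include <wtf/FastMalloc.h>

    // Hypothetical helper sketching the old "drop data" semantics:
    // report allocation failure to the caller instead of aborting,
    // which is what fastMalloc would do on OOM.
    static bool allocateSegment(size_t segmentSize, char*& segment)
    {
        void* memory;
        // Assumed tryFastMalloc API: the returned value is false on
        // failure and yields the pointer through getValue().
        if (!WTF::tryFastMalloc(segmentSize).getValue(memory))
            return false; // out of memory: caller can drop the data
        segment = static_cast<char*>(memory);
        return true;
    }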

-- 
Configure bugmail: https://bugs.webkit.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.