<html>
<head>
<base href="https://bugs.webkit.org/" />
</head>
<body>
<p>
<div>
<b><a class="bz_bug_link
bz_status_NEW "
title="NEW - [SOUP] Add initial implementation of NetworkProcess disk cache"
href="https://bugs.webkit.org/show_bug.cgi?id=143872#c19">Comment # 19</a>
on <a class="bz_bug_link
bz_status_NEW "
title="NEW - [SOUP] Add initial implementation of NetworkProcess disk cache"
href="https://bugs.webkit.org/show_bug.cgi?id=143872">bug 143872</a>
from <span class="vcard"><a class="email" href="mailto:cgarcia@igalia.com" title="Carlos Garcia Campos <cgarcia@igalia.com>"> <span class="fn">Carlos Garcia Campos</span></a>
</span></b>
<pre>(In reply to <a href="show_bug.cgi?id=143872#c18">comment #18</a>)
<span class="quote">> Comment on <span class=""><a href="attachment.cgi?id=251543&action=diff" name="attach_251543" title="Updated to use fast malloc/free">attachment 251543</a> <a href="attachment.cgi?id=251543&action=edit" title="Updated to use fast malloc/free">[details]</a></span>
> Updated to use fast malloc/free
>
> View in context:
> <a href="https://bugs.webkit.org/attachment.cgi?id=251543&action=review">https://bugs.webkit.org/attachment.cgi?id=251543&action=review</a>
>
> > Source/WebKit2/NetworkProcess/cache/NetworkCacheIOChannelSoup.cpp:129
> > + GRefPtr<SoupBuffer> buffer = adoptGRef(soup_buffer_new(SOUP_MEMORY_TAKE, static_cast<char*>(g_malloc(bufferSize)), bufferSize));
>
> Here it might be better to use fastMalloc as well.</span >
Right, I forgot we were also allocating memory here.
<span class="quote">> > Source/WebKit2/NetworkProcess/cache/NetworkCacheIOChannelSoup.cpp:142
> > + GRefPtr<SoupBuffer> readBuffer = adoptGRef(soup_buffer_new(SOUP_MEMORY_TAKE, static_cast<char*>(g_malloc(bufferSize)), bufferSize));
>
> Ditto.</span >
Right.
<span class="quote">> > Source/WebKit2/NetworkProcess/cache/NetworkCacheIOChannelSoup.cpp:162
> > + size_t bytesToRead = bufferSize;
> > + do {
> > + // FIXME: implement offset.
> > + gssize bytesRead = g_input_stream_read(m_inputStream.get(), const_cast<char*>(readBuffer->data), bytesToRead, nullptr, nullptr);
> > + if (bytesRead == -1) {
> > + completionHandler(data, -1);
> > + return;
> > + }
> > +
> > + if (!bytesRead)
> > + break;
> > +
> > + ASSERT(bytesRead > 0);
> > + fillDataFromReadBuffer(readBuffer.get(), static_cast<size_t>(bytesRead), data);
> > +
> > + pendingBytesToRead = size - data.size();
> > + bytesToRead = std::min(pendingBytesToRead, readBuffer->length);
> > + } while (pendingBytesToRead);
>
> Sorry I missed this in the first review. I have a quick question about the
> synchronous read case. Why is it better to read chunks of data and
> constantly fill up small buffers? Why not create a buffer the full size of
> the data and only create a single Data object? Couldn't you avoid all of the
> concatenation that way?</span >
Because we don't know the size: NetworkCacheStorage passes std::numeric_limits&lt;size_t&gt;::max() to ensure we read until the end of the stream. Note that this readSync method is only used for debugging, when the user requests that a dump.json file be created; it's not used when retrieving cache resources for loading.
<span class="quote">> Otherwise, I suggest a small rename here for the sake of clarity:
> pendingBytesToRead -> bytesToRead, bytesToRead -> bytesToReadForCurrentChunk.</span ></pre>
</div>
</p>
</body>
</html>