<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[182954] trunk/Source/WebKit2</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/182954">182954</a></dd>
<dt>Author</dt> <dd>antti@apple.com</dd>
<dt>Date</dt> <dd>2015-04-17 10:07:18 -0700 (Fri, 17 Apr 2015)</dd>
</dl>

<h3>Log Message</h3>
<pre>Network Cache: Read resource record and body in parallel
https://bugs.webkit.org/show_bug.cgi?id=143879

Reviewed by Chris Dumez.

We currently first fetch the record file and then fetch the body blob if needed.
We can do both operations in parallel to reduce latency.

* NetworkProcess/cache/NetworkCacheFileSystemPosix.h:
(WebKit::NetworkCache::traverseCacheFiles):

    Do all validation in the client.

* NetworkProcess/cache/NetworkCacheStorage.cpp:
(WebKit::NetworkCache::Storage::synchronize):

    Maintain a Bloom filter of the body blob hashes to avoid unnecessary IO attempts.
    Delete any unknown files in the cache directory.

(WebKit::NetworkCache::Storage::addToRecordFilter):

    A more informative name for the record filter.

(WebKit::NetworkCache::Storage::mayContain):
(WebKit::NetworkCache::Storage::readRecord):
(WebKit::NetworkCache::Storage::storeBodyAsBlob):
(WebKit::NetworkCache::Storage::dispatchReadOperation):

    Start record read IO and body blob read IO in parallel.

(WebKit::NetworkCache::Storage::finishReadOperation):

    The read is finished when we have both the record and the blob.

(WebKit::NetworkCache::Storage::dispatchWriteOperation):
(WebKit::NetworkCache::Storage::retrieve):
(WebKit::NetworkCache::Storage::store):
(WebKit::NetworkCache::Storage::traverse):
(WebKit::NetworkCache::Storage::clear):
(WebKit::NetworkCache::Storage::shrink):
(WebKit::NetworkCache::Storage::addToContentsFilter): Deleted.
(WebKit::NetworkCache::Storage::decodeRecord): Deleted.
* NetworkProcess/cache/NetworkCacheStorage.h:
(WebKit::NetworkCache::Storage::ReadOperation::ReadOperation):

    ReadOperation is now mutable and gathers the read result.</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceWebKit2ChangeLog">trunk/Source/WebKit2/ChangeLog</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheFileSystemPosixh">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheStoragecpp">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheStorageh">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceWebKit2ChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/ChangeLog (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/ChangeLog        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/ChangeLog        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -1,3 +1,52 @@
</span><ins>+2015-04-17  Antti Koivisto  &lt;antti@apple.com&gt;
+
+        Network Cache: Read resource record and body in parallel
+        https://bugs.webkit.org/show_bug.cgi?id=143879
+
+        Reviewed by Chris Dumez.
+
+        We currently first fetch the record file and then fetch the body blob if needed.
+        We can do both operations in parallel to reduce latency.
+
+        * NetworkProcess/cache/NetworkCacheFileSystemPosix.h:
+        (WebKit::NetworkCache::traverseCacheFiles):
+
+            Do all validation in the client.
+
+        * NetworkProcess/cache/NetworkCacheStorage.cpp:
+        (WebKit::NetworkCache::Storage::synchronize):
+
+            Maintain a bloom filter that contains the body blobs to avoid unnecessary IO attempts.
+            Delete any unknown file in cache directory.
+
+        (WebKit::NetworkCache::Storage::addToRecordFilter):
+
+            More informative name for record filter.
+
+        (WebKit::NetworkCache::Storage::mayContain):
+        (WebKit::NetworkCache::Storage::readRecord):
+        (WebKit::NetworkCache::Storage::storeBodyAsBlob):
+        (WebKit::NetworkCache::Storage::dispatchReadOperation):
+
+            Start record read IO and body blob read IO in parallel.
+
+        (WebKit::NetworkCache::Storage::finishReadOperation):
+
+            The read is finished when we have both the record and the blob.
+
+        (WebKit::NetworkCache::Storage::dispatchWriteOperation):
+        (WebKit::NetworkCache::Storage::retrieve):
+        (WebKit::NetworkCache::Storage::store):
+        (WebKit::NetworkCache::Storage::traverse):
+        (WebKit::NetworkCache::Storage::clear):
+        (WebKit::NetworkCache::Storage::shrink):
+        (WebKit::NetworkCache::Storage::addToContentsFilter): Deleted.
+        (WebKit::NetworkCache::Storage::decodeRecord): Deleted.
+        * NetworkProcess/cache/NetworkCacheStorage.h:
+        (WebKit::NetworkCache::Storage::ReadOperation::ReadOperation):
+
+            ReadOperation is now mutable and gathers the read result.
+
</ins><span class="cx"> 2015-04-16  Anders Carlsson  &lt;andersca@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         Stop installing WebKit2.framework
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheFileSystemPosixh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -28,7 +28,6 @@
</span><span class="cx"> 
</span><span class="cx"> #if ENABLE(NETWORK_CACHE)
</span><span class="cx"> 
</span><del>-#include &quot;NetworkCacheKey.h&quot;
</del><span class="cx"> #include &lt;WebCore/FileSystem.h&gt;
</span><span class="cx"> #include &lt;dirent.h&gt;
</span><span class="cx"> #include &lt;sys/stat.h&gt;
</span><span class="lines">@@ -62,8 +61,6 @@
</span><span class="cx">     traverseDirectory(cachePath, DT_DIR, [&amp;cachePath, &amp;function](const String&amp; subdirName) {
</span><span class="cx">         String partitionPath = WebCore::pathByAppendingComponent(cachePath, subdirName);
</span><span class="cx">         traverseDirectory(partitionPath, DT_REG, [&amp;function, &amp;partitionPath](const String&amp; fileName) {
</span><del>-            if (fileName.length() != Key::hashStringLength())
-                return;
</del><span class="cx">             function(fileName, partitionPath);
</span><span class="cx">         });
</span><span class="cx">     });
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheStoragecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -105,7 +105,7 @@
</span><span class="cx"> 
</span><span class="cx"> size_t Storage::approximateSize() const
</span><span class="cx"> {
</span><del>-    return m_approximateSize + m_blobStorage.approximateSize();
</del><ins>+    return m_approximateRecordsSize + m_blobStorage.approximateSize();
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void Storage::synchronize()
</span><span class="lines">@@ -119,58 +119,77 @@
</span><span class="cx">     LOG(NetworkCacheStorage, &quot;(NetworkProcess) synchronizing cache&quot;);
</span><span class="cx"> 
</span><span class="cx">     backgroundIOQueue().dispatch([this] {
</span><del>-        auto filter = std::make_unique&lt;ContentsFilter&gt;();
-        size_t size = 0;
</del><ins>+        auto recordFilter = std::make_unique&lt;ContentsFilter&gt;();
+        auto bodyFilter = std::make_unique&lt;ContentsFilter&gt;();
+        size_t recordsSize = 0;
</ins><span class="cx">         unsigned count = 0;
</span><del>-        traverseCacheFiles(recordsPath(), [&amp;filter, &amp;size, &amp;count](const String&amp; fileName, const String&amp; partitionPath) {
</del><ins>+        traverseCacheFiles(recordsPath(), [&amp;recordFilter, &amp;bodyFilter, &amp;recordsSize, &amp;count](const String&amp; fileName, const String&amp; partitionPath) {
+            auto filePath = WebCore::pathByAppendingComponent(partitionPath, fileName);
+
+            bool isBody = fileName.endsWith(bodyPostfix);
+            String hashString = isBody ? fileName.substring(0, Key::hashStringLength()) : fileName;
</ins><span class="cx">             Key::HashType hash;
</span><del>-            if (!Key::stringToHash(fileName, hash))
</del><ins>+            if (!Key::stringToHash(hashString, hash)) {
+                WebCore::deleteFile(filePath);
</ins><span class="cx">                 return;
</span><del>-            auto filePath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</del><ins>+            }
</ins><span class="cx">             long long fileSize = 0;
</span><span class="cx">             WebCore::getFileSize(filePath, fileSize);
</span><del>-            if (!fileSize)
</del><ins>+            if (!fileSize) {
+                WebCore::deleteFile(filePath);
</ins><span class="cx">                 return;
</span><del>-            filter-&gt;add(hash);
-            size += fileSize;
</del><ins>+            }
+            if (isBody) {
+                bodyFilter-&gt;add(hash);
+                return;
+            }
+            recordFilter-&gt;add(hash);
+            recordsSize += fileSize;
</ins><span class="cx">             ++count;
</span><span class="cx">         });
</span><span class="cx"> 
</span><del>-        auto* filterPtr = filter.release();
-        RunLoop::main().dispatch([this, filterPtr, size] {
-            auto filter = std::unique_ptr&lt;ContentsFilter&gt;(filterPtr);
</del><ins>+        auto* recordFilterPtr = recordFilter.release();
+        auto* bodyFilterPtr = bodyFilter.release();
+        RunLoop::main().dispatch([this, recordFilterPtr, bodyFilterPtr, recordsSize] {
+            auto recordFilter = std::unique_ptr&lt;ContentsFilter&gt;(recordFilterPtr);
+            auto bodyFilter = std::unique_ptr&lt;ContentsFilter&gt;(bodyFilterPtr);
</ins><span class="cx"> 
</span><del>-            for (auto hash : m_contentsFilterHashesAddedDuringSynchronization)
-                filter-&gt;add(hash);
-            m_contentsFilterHashesAddedDuringSynchronization.clear();
</del><ins>+            for (auto hash : m_recordFilterHashesAddedDuringSynchronization)
+                recordFilter-&gt;add(hash);
+            m_recordFilterHashesAddedDuringSynchronization.clear();
</ins><span class="cx"> 
</span><del>-            m_contentsFilter = WTF::move(filter);
-            m_approximateSize = size;
</del><ins>+            for (auto hash : m_bodyFilterHashesAddedDuringSynchronization)
+                bodyFilter-&gt;add(hash);
+            m_bodyFilterHashesAddedDuringSynchronization.clear();
+
+            m_recordFilter = WTF::move(recordFilter);
+            m_bodyFilter = WTF::move(bodyFilter);
+            m_approximateRecordsSize = recordsSize;
</ins><span class="cx">             m_synchronizationInProgress = false;
</span><span class="cx">         });
</span><span class="cx"> 
</span><span class="cx">         m_blobStorage.synchronize();
</span><span class="cx"> 
</span><del>-        LOG(NetworkCacheStorage, &quot;(NetworkProcess) cache synchronization completed size=%zu count=%d&quot;, size, count);
</del><ins>+        LOG(NetworkCacheStorage, &quot;(NetworkProcess) cache synchronization completed size=%zu count=%d&quot;, recordsSize, count);
</ins><span class="cx">     });
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Storage::addToContentsFilter(const Key&amp; key)
</del><ins>+void Storage::addToRecordFilter(const Key&amp; key)
</ins><span class="cx"> {
</span><span class="cx">     ASSERT(RunLoop::isMain());
</span><span class="cx"> 
</span><del>-    if (m_contentsFilter)
-        m_contentsFilter-&gt;add(key.hash());
</del><ins>+    if (m_recordFilter)
+        m_recordFilter-&gt;add(key.hash());
</ins><span class="cx"> 
</span><span class="cx">     // If we get new entries during filter synchronization take care to add them to the new filter as well.
</span><span class="cx">     if (m_synchronizationInProgress)
</span><del>-        m_contentsFilterHashesAddedDuringSynchronization.append(key.hash());
</del><ins>+        m_recordFilterHashesAddedDuringSynchronization.append(key.hash());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> bool Storage::mayContain(const Key&amp; key) const
</span><span class="cx"> {
</span><span class="cx">     ASSERT(RunLoop::isMain());
</span><del>-    return !m_contentsFilter || m_contentsFilter-&gt;mayContain(key.hash());
</del><ins>+    return !m_recordFilter || m_recordFilter-&gt;mayContain(key.hash());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> static String partitionPathForKey(const Key&amp; key, const String&amp; cachePath)
</span><span class="lines">@@ -278,42 +297,35 @@
</span><span class="cx">     return true;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-std::unique_ptr&lt;Storage::Record&gt; Storage::decodeRecord(const Data&amp; recordData, const Key&amp; key)
</del><ins>+void Storage::readRecord(ReadOperation&amp; readOperation, const Data&amp; recordData)
</ins><span class="cx"> {
</span><span class="cx">     ASSERT(!RunLoop::isMain());
</span><span class="cx"> 
</span><span class="cx">     RecordMetaData metaData;
</span><span class="cx">     Data headerData;
</span><span class="cx">     if (!decodeRecordHeader(recordData, metaData, headerData))
</span><del>-        return nullptr;
</del><ins>+        return;
</ins><span class="cx"> 
</span><del>-    if (metaData.key != key)
-        return nullptr;
</del><ins>+    if (metaData.key != readOperation.key)
+        return;
</ins><span class="cx"> 
</span><span class="cx">     // Sanity check against time stamps in future.
</span><span class="cx">     auto timeStamp = std::chrono::system_clock::time_point(metaData.epochRelativeTimeStamp);
</span><span class="cx">     if (timeStamp &gt; std::chrono::system_clock::now())
</span><del>-        return nullptr;
</del><ins>+        return;
</ins><span class="cx"> 
</span><span class="cx">     Data bodyData;
</span><span class="cx">     if (metaData.isBodyInline) {
</span><span class="cx">         size_t bodyOffset = metaData.headerOffset + headerData.size();
</span><span class="cx">         if (bodyOffset + metaData.bodySize != recordData.size())
</span><del>-            return nullptr;
</del><ins>+            return;
</ins><span class="cx">         bodyData = recordData.subrange(bodyOffset, metaData.bodySize);
</span><span class="cx">         if (metaData.bodyHash != computeSHA1(bodyData))
</span><del>-            return nullptr;
-    } else {
-        auto bodyPath = bodyPathForKey(key, recordsPath());
-        auto bodyBlob = m_blobStorage.get(bodyPath);
-        if (metaData.bodySize != bodyBlob.data.size())
-            return nullptr;
-        if (metaData.bodyHash != bodyBlob.hash)
-            return nullptr;
-        bodyData = bodyBlob.data;
</del><ins>+            return;
</ins><span class="cx">     }
</span><span class="cx"> 
</span><del>-    return std::make_unique&lt;Storage::Record&gt;(Storage::Record {
</del><ins>+    readOperation.expectedBodyHash = metaData.bodyHash;
+    readOperation.resultRecord = std::make_unique&lt;Storage::Record&gt;(Storage::Record {
</ins><span class="cx">         metaData.key,
</span><span class="cx">         timeStamp,
</span><span class="cx">         headerData,
</span><span class="lines">@@ -348,12 +360,17 @@
</span><span class="cx">     if (blob.data.isNull())
</span><span class="cx">         return { };
</span><span class="cx"> 
</span><del>-    // Tell the client we now have a disk-backed map for this data.
-    if (mappedBodyHandler) {
-        RunLoop::main().dispatch([blob, mappedBodyHandler] {
</del><ins>+    auto hash = record.key.hash();
+    RunLoop::main().dispatch([this, blob, hash, mappedBodyHandler] {
+        if (m_bodyFilter)
+            m_bodyFilter-&gt;add(hash);
+        if (m_synchronizationInProgress)
+            m_bodyFilterHashesAddedDuringSynchronization.append(hash);
+
+        if (mappedBodyHandler)
</ins><span class="cx">             mappedBodyHandler(blob.data);
</span><del>-        });
-    }
</del><ins>+
+    });
</ins><span class="cx">     return blob;
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -401,40 +418,60 @@
</span><span class="cx">     });
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Storage::dispatchReadOperation(const ReadOperation&amp; read)
</del><ins>+void Storage::dispatchReadOperation(ReadOperation&amp; readOperation)
</ins><span class="cx"> {
</span><span class="cx">     ASSERT(RunLoop::isMain());
</span><del>-    ASSERT(m_activeReadOperations.contains(&amp;read));
</del><ins>+    ASSERT(m_activeReadOperations.contains(&amp;readOperation));
</ins><span class="cx"> 
</span><del>-    auto recordsPath = this-&gt;recordsPath();
-    auto recordPath = recordPathForKey(read.key, recordsPath);
</del><ins>+    auto recordPath = recordPathForKey(readOperation.key, recordsPath());
</ins><span class="cx"> 
</span><span class="cx">     RefPtr&lt;IOChannel&gt; channel = IOChannel::open(recordPath, IOChannel::Type::Read);
</span><del>-    channel-&gt;read(0, std::numeric_limits&lt;size_t&gt;::max(), &amp;ioQueue(), [this, &amp;read](const Data&amp; fileData, int error) {
-        auto record = error ? nullptr : decodeRecord(fileData, read.key);
</del><ins>+    channel-&gt;read(0, std::numeric_limits&lt;size_t&gt;::max(), &amp;ioQueue(), [this, &amp;readOperation](const Data&amp; fileData, int error) {
+        if (!error)
+            readRecord(readOperation, fileData);
+        finishReadOperation(readOperation);
+    });
</ins><span class="cx"> 
</span><del>-        auto* recordPtr = record.release();
-        RunLoop::main().dispatch([this, &amp;read, recordPtr] {
-            auto record = std::unique_ptr&lt;Record&gt;(recordPtr);
-            finishReadOperation(read, WTF::move(record));
-        });
</del><ins>+    bool shouldGetBodyBlob = !m_bodyFilter || m_bodyFilter-&gt;mayContain(readOperation.key.hash());
+    if (!shouldGetBodyBlob) {
+        finishReadOperation(readOperation);
+        return;
+    }
+
+    // Read the body blob in parallel with the record read.
+    ioQueue().dispatch([this, &amp;readOperation] {
+        auto bodyPath = bodyPathForKey(readOperation.key, this-&gt;recordsPath());
+        readOperation.resultBodyBlob = m_blobStorage.get(bodyPath);
+        finishReadOperation(readOperation);
</ins><span class="cx">     });
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Storage::finishReadOperation(const ReadOperation&amp; read, std::unique_ptr&lt;Record&gt; record)
</del><ins>+void Storage::finishReadOperation(ReadOperation&amp; readOperation)
</ins><span class="cx"> {
</span><del>-    ASSERT(RunLoop::isMain());
</del><ins>+    // Record and body blob reads must finish.
+    bool isComplete = ++readOperation.finishedCount == 2;
+    if (!isComplete)
+        return;
</ins><span class="cx"> 
</span><del>-    bool success = read.completionHandler(WTF::move(record));
-    if (success)
-        updateFileModificationTime(recordPathForKey(read.key, recordsPath()));
-    else
-        remove(read.key);
-    ASSERT(m_activeReadOperations.contains(&amp;read));
-    m_activeReadOperations.remove(&amp;read);
-    dispatchPendingReadOperations();
</del><ins>+    RunLoop::main().dispatch([this, &amp;readOperation] {
+        if (readOperation.resultRecord &amp;&amp; readOperation.resultRecord-&gt;body.isNull()) {
+            if (readOperation.resultBodyBlob.hash == readOperation.expectedBodyHash)
+                readOperation.resultRecord-&gt;body = readOperation.resultBodyBlob.data;
+            else
+                readOperation.resultRecord = nullptr;
+        }
</ins><span class="cx"> 
</span><del>-    LOG(NetworkCacheStorage, &quot;(NetworkProcess) read complete success=%d&quot;, success);
</del><ins>+        bool success = readOperation.completionHandler(WTF::move(readOperation.resultRecord));
+        if (success)
+            updateFileModificationTime(recordPathForKey(readOperation.key, recordsPath()));
+        else
+            remove(readOperation.key);
+        ASSERT(m_activeReadOperations.contains(&amp;readOperation));
+        m_activeReadOperations.remove(&amp;readOperation);
+        dispatchPendingReadOperations();
+
+        LOG(NetworkCacheStorage, &quot;(NetworkProcess) read complete success=%d&quot;, success);
+    });
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void Storage::dispatchPendingReadOperations()
</span><span class="lines">@@ -498,42 +535,42 @@
</span><span class="cx">     return bodyData.size() &gt; maximumInlineBodySize;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Storage::dispatchWriteOperation(const WriteOperation&amp; write)
</del><ins>+void Storage::dispatchWriteOperation(const WriteOperation&amp; writeOperation)
</ins><span class="cx"> {
</span><span class="cx">     ASSERT(RunLoop::isMain());
</span><del>-    ASSERT(m_activeWriteOperations.contains(&amp;write));
</del><ins>+    ASSERT(m_activeWriteOperations.contains(&amp;writeOperation));
</ins><span class="cx"> 
</span><span class="cx">     // This was added already when starting the store but filter might have been wiped.
</span><del>-    addToContentsFilter(write.record.key);
</del><ins>+    addToRecordFilter(writeOperation.record.key);
</ins><span class="cx"> 
</span><del>-    backgroundIOQueue().dispatch([this, &amp;write] {
</del><ins>+    backgroundIOQueue().dispatch([this, &amp;writeOperation] {
</ins><span class="cx">         auto recordsPath = this-&gt;recordsPath();
</span><del>-        auto partitionPath = partitionPathForKey(write.record.key, recordsPath);
-        auto recordPath = recordPathForKey(write.record.key, recordsPath);
</del><ins>+        auto partitionPath = partitionPathForKey(writeOperation.record.key, recordsPath);
+        auto recordPath = recordPathForKey(writeOperation.record.key, recordsPath);
</ins><span class="cx"> 
</span><span class="cx">         WebCore::makeAllDirectories(partitionPath);
</span><span class="cx"> 
</span><del>-        bool shouldStoreAsBlob = shouldStoreBodyAsBlob(write.record.body);
-        auto bodyBlob = shouldStoreAsBlob ? storeBodyAsBlob(write.record, write.mappedBodyHandler) : Nullopt;
</del><ins>+        bool shouldStoreAsBlob = shouldStoreBodyAsBlob(writeOperation.record.body);
+        auto bodyBlob = shouldStoreAsBlob ? storeBodyAsBlob(writeOperation.record, writeOperation.mappedBodyHandler) : Nullopt;
</ins><span class="cx"> 
</span><del>-        auto recordData = encodeRecord(write.record, bodyBlob);
</del><ins>+        auto recordData = encodeRecord(writeOperation.record, bodyBlob);
</ins><span class="cx"> 
</span><span class="cx">         auto channel = IOChannel::open(recordPath, IOChannel::Type::Create);
</span><span class="cx">         size_t recordSize = recordData.size();
</span><del>-        channel-&gt;write(0, recordData, nullptr, [this, &amp;write, recordSize](int error) {
</del><ins>+        channel-&gt;write(0, recordData, nullptr, [this, &amp;writeOperation, recordSize](int error) {
</ins><span class="cx">             // On error the entry still stays in the contents filter until next synchronization.
</span><del>-            m_approximateSize += recordSize;
-            finishWriteOperation(write);
</del><ins>+            m_approximateRecordsSize += recordSize;
+            finishWriteOperation(writeOperation);
</ins><span class="cx"> 
</span><span class="cx">             LOG(NetworkCacheStorage, &quot;(NetworkProcess) write complete error=%d&quot;, error);
</span><span class="cx">         });
</span><span class="cx">     });
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Storage::finishWriteOperation(const WriteOperation&amp; write)
</del><ins>+void Storage::finishWriteOperation(const WriteOperation&amp; writeOperation)
</ins><span class="cx"> {
</span><del>-    ASSERT(m_activeWriteOperations.contains(&amp;write));
-    m_activeWriteOperations.remove(&amp;write);
</del><ins>+    ASSERT(m_activeWriteOperations.contains(&amp;writeOperation));
+    m_activeWriteOperations.remove(&amp;writeOperation);
</ins><span class="cx">     dispatchPendingWriteOperations();
</span><span class="cx"> 
</span><span class="cx">     shrinkIfNeeded();
</span><span class="lines">@@ -560,7 +597,7 @@
</span><span class="cx">     if (retrieveFromMemory(m_activeWriteOperations, key, completionHandler))
</span><span class="cx">         return;
</span><span class="cx"> 
</span><del>-    m_pendingReadOperationsByPriority[priority].append(new ReadOperation { key, WTF::move(completionHandler) });
</del><ins>+    m_pendingReadOperationsByPriority[priority].append(new ReadOperation { key, WTF::move(completionHandler) } );
</ins><span class="cx">     dispatchPendingReadOperations();
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -575,7 +612,7 @@
</span><span class="cx">     m_pendingWriteOperations.append(new WriteOperation { record, WTF::move(mappedBodyHandler) });
</span><span class="cx"> 
</span><span class="cx">     // Add key to the filter already here as we do lookups from the pending operations too.
</span><del>-    addToContentsFilter(record.key);
</del><ins>+    addToRecordFilter(record.key);
</ins><span class="cx"> 
</span><span class="cx">     dispatchPendingWriteOperations();
</span><span class="cx"> }
</span><span class="lines">@@ -584,6 +621,8 @@
</span><span class="cx"> {
</span><span class="cx">     ioQueue().dispatch([this, flags, traverseHandler] {
</span><span class="cx">         traverseCacheFiles(recordsPath(), [this, flags, &amp;traverseHandler](const String&amp; fileName, const String&amp; partitionPath) {
</span><ins>+            if (fileName.length() != Key::hashStringLength())
+                return;
</ins><span class="cx">             auto recordPath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</span><span class="cx"> 
</span><span class="cx">             RecordInfo info;
</span><span class="lines">@@ -634,9 +673,11 @@
</span><span class="cx">     ASSERT(RunLoop::isMain());
</span><span class="cx">     LOG(NetworkCacheStorage, &quot;(NetworkProcess) clearing cache&quot;);
</span><span class="cx"> 
</span><del>-    if (m_contentsFilter)
-        m_contentsFilter-&gt;clear();
-    m_approximateSize = 0;
</del><ins>+    if (m_recordFilter)
+        m_recordFilter-&gt;clear();
+    if (m_bodyFilter)
+        m_bodyFilter-&gt;clear();
+    m_approximateRecordsSize = 0;
</ins><span class="cx"> 
</span><span class="cx">     ioQueue().dispatch([this] {
</span><span class="cx">         auto recordsPath = this-&gt;recordsPath();
</span><span class="lines">@@ -708,6 +749,8 @@
</span><span class="cx">     backgroundIOQueue().dispatch([this] {
</span><span class="cx">         auto recordsPath = this-&gt;recordsPath();
</span><span class="cx">         traverseCacheFiles(recordsPath, [this](const String&amp; fileName, const String&amp; partitionPath) {
</span><ins>+            if (fileName.length() != Key::hashStringLength())
+                return;
</ins><span class="cx">             auto recordPath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</span><span class="cx">             auto bodyPath = bodyPathForRecordPath(recordPath);
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheStorageh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -96,12 +96,22 @@
</span><span class="cx">     void shrink();
</span><span class="cx"> 
</span><span class="cx">     struct ReadOperation {
</span><del>-        Key key;
-        RetrieveCompletionHandler completionHandler;
</del><ins>+        ReadOperation(const Key&amp; key, const RetrieveCompletionHandler&amp; completionHandler)
+            : key(key)
+            , completionHandler(completionHandler)
+        { }
+
+        const Key key;
+        const RetrieveCompletionHandler completionHandler;
+        
+        std::unique_ptr&lt;Record&gt; resultRecord;
+        SHA1::Digest expectedBodyHash;
+        BlobStorage::Blob resultBodyBlob;
+        std::atomic&lt;unsigned&gt; finishedCount { 0 };
</ins><span class="cx">     };
</span><del>-    void dispatchReadOperation(const ReadOperation&amp;);
</del><ins>+    void dispatchReadOperation(ReadOperation&amp;);
</ins><span class="cx">     void dispatchPendingReadOperations();
</span><del>-    void finishReadOperation(const ReadOperation&amp;, std::unique_ptr&lt;Record&gt;);
</del><ins>+    void finishReadOperation(ReadOperation&amp;);
</ins><span class="cx"> 
</span><span class="cx">     struct WriteOperation {
</span><span class="cx">         Record record;
</span><span class="lines">@@ -113,7 +123,7 @@
</span><span class="cx"> 
</span><span class="cx">     Optional&lt;BlobStorage::Blob&gt; storeBodyAsBlob(const Record&amp;, const MappedBodyHandler&amp;);
</span><span class="cx">     Data encodeRecord(const Record&amp;, Optional&lt;BlobStorage::Blob&gt;);
</span><del>-    std::unique_ptr&lt;Record&gt; decodeRecord(const Data&amp;, const Key&amp;);
</del><ins>+    void readRecord(ReadOperation&amp;, const Data&amp;);
</ins><span class="cx"> 
</span><span class="cx">     void updateFileModificationTime(const String&amp; path);
</span><span class="cx"> 
</span><span class="lines">@@ -123,26 +133,28 @@
</span><span class="cx"> 
</span><span class="cx">     bool mayContain(const Key&amp;) const;
</span><span class="cx"> 
</span><del>-    void addToContentsFilter(const Key&amp;);
</del><ins>+    void addToRecordFilter(const Key&amp;);
</ins><span class="cx"> 
</span><span class="cx">     const String m_basePath;
</span><span class="cx">     const String m_recordsPath;
</span><span class="cx"> 
</span><span class="cx">     size_t m_capacity { std::numeric_limits&lt;size_t&gt;::max() };
</span><del>-    size_t m_approximateSize { 0 };
</del><ins>+    size_t m_approximateRecordsSize { 0 };
</ins><span class="cx"> 
</span><span class="cx">     // 2^18 bit filter can support up to 26000 entries with false positive rate &lt; 1%.
</span><span class="cx">     using ContentsFilter = BloomFilter&lt;18&gt;;
</span><del>-    std::unique_ptr&lt;ContentsFilter&gt; m_contentsFilter;
</del><ins>+    std::unique_ptr&lt;ContentsFilter&gt; m_recordFilter;
+    std::unique_ptr&lt;ContentsFilter&gt; m_bodyFilter;
</ins><span class="cx"> 
</span><span class="cx">     bool m_synchronizationInProgress { false };
</span><span class="cx">     bool m_shrinkInProgress { false };
</span><span class="cx"> 
</span><del>-    Vector&lt;Key::HashType&gt; m_contentsFilterHashesAddedDuringSynchronization;
</del><ins>+    Vector&lt;Key::HashType&gt; m_recordFilterHashesAddedDuringSynchronization;
+    Vector&lt;Key::HashType&gt; m_bodyFilterHashesAddedDuringSynchronization;
</ins><span class="cx"> 
</span><span class="cx">     static const int maximumRetrievePriority = 4;
</span><del>-    Deque&lt;std::unique_ptr&lt;const ReadOperation&gt;&gt; m_pendingReadOperationsByPriority[maximumRetrievePriority + 1];
-    HashSet&lt;std::unique_ptr&lt;const ReadOperation&gt;&gt; m_activeReadOperations;
</del><ins>+    Deque&lt;std::unique_ptr&lt;ReadOperation&gt;&gt; m_pendingReadOperationsByPriority[maximumRetrievePriority + 1];
+    HashSet&lt;std::unique_ptr&lt;ReadOperation&gt;&gt; m_activeReadOperations;
</ins><span class="cx"> 
</span><span class="cx">     Deque&lt;std::unique_ptr&lt;const WriteOperation&gt;&gt; m_pendingWriteOperations;
</span><span class="cx">     HashSet&lt;std::unique_ptr&lt;const WriteOperation&gt;&gt; m_activeWriteOperations;
</span></span></pre>
</div>
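A quick aside on the sizing comment carried over in the header diff above ("2^18 bit filter can support up to 26000 entries with false positive rate &lt; 1%"): that claim can be sanity-checked with the standard Bloom-filter false-positive approximation p ≈ (1 − e^(−kn/m))^k. The sketch below assumes a near-optimal number of hash functions k (the actual k used by WebKit's `BloomFilter` template is not shown in this diff).

```python
import math

def bloom_fpr(m_bits, n_entries, k_hashes):
    """Standard Bloom filter false-positive approximation:
    p ~= (1 - e^(-k*n/m))^k."""
    return (1.0 - math.exp(-k_hashes * n_entries / m_bits)) ** k_hashes

m = 2 ** 18   # filter size from the comment in NetworkCacheStorage.h
n = 26000     # entry count the comment claims is supported

# Near-optimal hash count k = (m/n) * ln 2 (an assumption, ~7 here);
# with it the false-positive rate lands just under the claimed 1%.
k = round(m / n * math.log(2))
print(k, bloom_fpr(m, n, k))
```

With fewer hash functions (e.g. k = 2) the same formula gives a rate above 3%, so the "&lt; 1%" figure implicitly relies on a hash count close to the optimum.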
</div>

</body>
</html>