<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[182954] trunk/Source/WebKit2</title>
</head>
<body>
<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; }
#msg dl a { font-weight: bold}
#msg dl a:link { color:#fc3; }
#msg dl a:active { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/182954">182954</a></dd>
<dt>Author</dt> <dd>antti@apple.com</dd>
<dt>Date</dt> <dd>2015-04-17 10:07:18 -0700 (Fri, 17 Apr 2015)</dd>
</dl>
<h3>Log Message</h3>
<pre>Network Cache: Read resource record and body in parallel
https://bugs.webkit.org/show_bug.cgi?id=143879
Reviewed by Chris Dumez.
We currently fetch the record file first and then fetch the body blob if needed.
We can do both operations in parallel to reduce latency.
* NetworkProcess/cache/NetworkCacheFileSystemPosix.h:
(WebKit::NetworkCache::traverseCacheFiles):
Do all validation in the client.
* NetworkProcess/cache/NetworkCacheStorage.cpp:
(WebKit::NetworkCache::Storage::synchronize):
Maintain a bloom filter of the body blob hashes to avoid unnecessary IO attempts.
Delete any unknown files in the cache directory.
(WebKit::NetworkCache::Storage::addToRecordFilter):
More informative name for the record filter.
(WebKit::NetworkCache::Storage::mayContain):
(WebKit::NetworkCache::Storage::readRecord):
(WebKit::NetworkCache::Storage::storeBodyAsBlob):
(WebKit::NetworkCache::Storage::dispatchReadOperation):
Start record read IO and body blob read IO in parallel.
(WebKit::NetworkCache::Storage::finishReadOperation):
The read is finished when we have both the record and the blob.
(WebKit::NetworkCache::Storage::dispatchWriteOperation):
(WebKit::NetworkCache::Storage::retrieve):
(WebKit::NetworkCache::Storage::store):
(WebKit::NetworkCache::Storage::traverse):
(WebKit::NetworkCache::Storage::clear):
(WebKit::NetworkCache::Storage::shrink):
(WebKit::NetworkCache::Storage::addToContentsFilter): Deleted.
(WebKit::NetworkCache::Storage::decodeRecord): Deleted.
* NetworkProcess/cache/NetworkCacheStorage.h:
(WebKit::NetworkCache::Storage::ReadOperation::ReadOperation):
ReadOperation is now mutable and gathers the read result.</pre>
<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceWebKit2ChangeLog">trunk/Source/WebKit2/ChangeLog</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheFileSystemPosixh">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheStoragecpp">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp</a></li>
<li><a href="#trunkSourceWebKit2NetworkProcesscacheNetworkCacheStorageh">trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h</a></li>
</ul>
</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceWebKit2ChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/ChangeLog (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/ChangeLog        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/ChangeLog        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -1,3 +1,52 @@
</span><ins>+2015-04-17 Antti Koivisto <antti@apple.com>
+
+ Network Cache: Read resource record and body in parallel
+ https://bugs.webkit.org/show_bug.cgi?id=143879
+
+ Reviewed by Chris Dumez.
+
+ We currently first fetch the record file and then fetch the body blob if needed.
+ We can do both operations in parallel to reduce latency.
+
+ * NetworkProcess/cache/NetworkCacheFileSystemPosix.h:
+ (WebKit::NetworkCache::traverseCacheFiles):
+
+ Do all validation in the client.
+
+ * NetworkProcess/cache/NetworkCacheStorage.cpp:
+ (WebKit::NetworkCache::Storage::synchronize):
+
+ Maintain a bloom filter that contains the body blobs to avoid unnecessary IO attempts.
+ Delete any unknown file in cache directory.
+
+ (WebKit::NetworkCache::Storage::addToRecordFilter):
+
+ More informative name for record filter.
+
+ (WebKit::NetworkCache::Storage::mayContain):
+ (WebKit::NetworkCache::Storage::readRecord):
+ (WebKit::NetworkCache::Storage::storeBodyAsBlob):
+ (WebKit::NetworkCache::Storage::dispatchReadOperation):
+
+ Start record read IO and body blob read IO in parallel.
+
+ (WebKit::NetworkCache::Storage::finishReadOperation):
+
+ The read is finished when we have both the record and the blob.
+
+ (WebKit::NetworkCache::Storage::dispatchWriteOperation):
+ (WebKit::NetworkCache::Storage::retrieve):
+ (WebKit::NetworkCache::Storage::store):
+ (WebKit::NetworkCache::Storage::traverse):
+ (WebKit::NetworkCache::Storage::clear):
+ (WebKit::NetworkCache::Storage::shrink):
+ (WebKit::NetworkCache::Storage::addToContentsFilter): Deleted.
+ (WebKit::NetworkCache::Storage::decodeRecord): Deleted.
+ * NetworkProcess/cache/NetworkCacheStorage.h:
+ (WebKit::NetworkCache::Storage::ReadOperation::ReadOperation):
+
+ ReadOperation is now mutable and gathers the read result.
+
</ins><span class="cx"> 2015-04-16 Anders Carlsson <andersca@apple.com>
</span><span class="cx">
</span><span class="cx"> Stop installing WebKit2.framework
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheFileSystemPosixh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheFileSystemPosix.h        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -28,7 +28,6 @@
</span><span class="cx">
</span><span class="cx"> #if ENABLE(NETWORK_CACHE)
</span><span class="cx">
</span><del>-#include "NetworkCacheKey.h"
</del><span class="cx"> #include <WebCore/FileSystem.h>
</span><span class="cx"> #include <dirent.h>
</span><span class="cx"> #include <sys/stat.h>
</span><span class="lines">@@ -62,8 +61,6 @@
</span><span class="cx"> traverseDirectory(cachePath, DT_DIR, [&cachePath, &function](const String& subdirName) {
</span><span class="cx"> String partitionPath = WebCore::pathByAppendingComponent(cachePath, subdirName);
</span><span class="cx"> traverseDirectory(partitionPath, DT_REG, [&function, &partitionPath](const String& fileName) {
</span><del>- if (fileName.length() != Key::hashStringLength())
- return;
</del><span class="cx"> function(fileName, partitionPath);
</span><span class="cx"> });
</span><span class="cx"> });
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheStoragecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.cpp        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -105,7 +105,7 @@
</span><span class="cx">
</span><span class="cx"> size_t Storage::approximateSize() const
</span><span class="cx"> {
</span><del>- return m_approximateSize + m_blobStorage.approximateSize();
</del><ins>+ return m_approximateRecordsSize + m_blobStorage.approximateSize();
</ins><span class="cx"> }
</span><span class="cx">
</span><span class="cx"> void Storage::synchronize()
</span><span class="lines">@@ -119,58 +119,77 @@
</span><span class="cx"> LOG(NetworkCacheStorage, "(NetworkProcess) synchronizing cache");
</span><span class="cx">
</span><span class="cx"> backgroundIOQueue().dispatch([this] {
</span><del>- auto filter = std::make_unique<ContentsFilter>();
- size_t size = 0;
</del><ins>+ auto recordFilter = std::make_unique<ContentsFilter>();
+ auto bodyFilter = std::make_unique<ContentsFilter>();
+ size_t recordsSize = 0;
</ins><span class="cx"> unsigned count = 0;
</span><del>- traverseCacheFiles(recordsPath(), [&filter, &size, &count](const String& fileName, const String& partitionPath) {
</del><ins>+ traverseCacheFiles(recordsPath(), [&recordFilter, &bodyFilter, &recordsSize, &count](const String& fileName, const String& partitionPath) {
+ auto filePath = WebCore::pathByAppendingComponent(partitionPath, fileName);
+
+ bool isBody = fileName.endsWith(bodyPostfix);
+ String hashString = isBody ? fileName.substring(0, Key::hashStringLength()) : fileName;
</ins><span class="cx"> Key::HashType hash;
</span><del>- if (!Key::stringToHash(fileName, hash))
</del><ins>+ if (!Key::stringToHash(hashString, hash)) {
+ WebCore::deleteFile(filePath);
</ins><span class="cx"> return;
</span><del>- auto filePath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</del><ins>+ }
</ins><span class="cx"> long long fileSize = 0;
</span><span class="cx"> WebCore::getFileSize(filePath, fileSize);
</span><del>- if (!fileSize)
</del><ins>+ if (!fileSize) {
+ WebCore::deleteFile(filePath);
</ins><span class="cx"> return;
</span><del>- filter->add(hash);
- size += fileSize;
</del><ins>+ }
+ if (isBody) {
+ bodyFilter->add(hash);
+ return;
+ }
+ recordFilter->add(hash);
+ recordsSize += fileSize;
</ins><span class="cx"> ++count;
</span><span class="cx"> });
</span><span class="cx">
</span><del>- auto* filterPtr = filter.release();
- RunLoop::main().dispatch([this, filterPtr, size] {
- auto filter = std::unique_ptr<ContentsFilter>(filterPtr);
</del><ins>+ auto* recordFilterPtr = recordFilter.release();
+ auto* bodyFilterPtr = bodyFilter.release();
+ RunLoop::main().dispatch([this, recordFilterPtr, bodyFilterPtr, recordsSize] {
+ auto recordFilter = std::unique_ptr<ContentsFilter>(recordFilterPtr);
+ auto bodyFilter = std::unique_ptr<ContentsFilter>(bodyFilterPtr);
</ins><span class="cx">
</span><del>- for (auto hash : m_contentsFilterHashesAddedDuringSynchronization)
- filter->add(hash);
- m_contentsFilterHashesAddedDuringSynchronization.clear();
</del><ins>+ for (auto hash : m_recordFilterHashesAddedDuringSynchronization)
+ recordFilter->add(hash);
+ m_recordFilterHashesAddedDuringSynchronization.clear();
</ins><span class="cx">
</span><del>- m_contentsFilter = WTF::move(filter);
- m_approximateSize = size;
</del><ins>+ for (auto hash : m_bodyFilterHashesAddedDuringSynchronization)
+ bodyFilter->add(hash);
+ m_bodyFilterHashesAddedDuringSynchronization.clear();
+
+ m_recordFilter = WTF::move(recordFilter);
+ m_bodyFilter = WTF::move(bodyFilter);
+ m_approximateRecordsSize = recordsSize;
</ins><span class="cx"> m_synchronizationInProgress = false;
</span><span class="cx"> });
</span><span class="cx">
</span><span class="cx"> m_blobStorage.synchronize();
</span><span class="cx">
</span><del>- LOG(NetworkCacheStorage, "(NetworkProcess) cache synchronization completed size=%zu count=%d", size, count);
</del><ins>+ LOG(NetworkCacheStorage, "(NetworkProcess) cache synchronization completed size=%zu count=%d", recordsSize, count);
</ins><span class="cx"> });
</span><span class="cx"> }
</span><span class="cx">
</span><del>-void Storage::addToContentsFilter(const Key& key)
</del><ins>+void Storage::addToRecordFilter(const Key& key)
</ins><span class="cx"> {
</span><span class="cx"> ASSERT(RunLoop::isMain());
</span><span class="cx">
</span><del>- if (m_contentsFilter)
- m_contentsFilter->add(key.hash());
</del><ins>+ if (m_recordFilter)
+ m_recordFilter->add(key.hash());
</ins><span class="cx">
</span><span class="cx"> // If we get new entries during filter synchronization take care to add them to the new filter as well.
</span><span class="cx"> if (m_synchronizationInProgress)
</span><del>- m_contentsFilterHashesAddedDuringSynchronization.append(key.hash());
</del><ins>+ m_recordFilterHashesAddedDuringSynchronization.append(key.hash());
</ins><span class="cx"> }
</span><span class="cx">
</span><span class="cx"> bool Storage::mayContain(const Key& key) const
</span><span class="cx"> {
</span><span class="cx"> ASSERT(RunLoop::isMain());
</span><del>- return !m_contentsFilter || m_contentsFilter->mayContain(key.hash());
</del><ins>+ return !m_recordFilter || m_recordFilter->mayContain(key.hash());
</ins><span class="cx"> }
</span><span class="cx">
</span><span class="cx"> static String partitionPathForKey(const Key& key, const String& cachePath)
</span><span class="lines">@@ -278,42 +297,35 @@
</span><span class="cx"> return true;
</span><span class="cx"> }
</span><span class="cx">
</span><del>-std::unique_ptr<Storage::Record> Storage::decodeRecord(const Data& recordData, const Key& key)
</del><ins>+void Storage::readRecord(ReadOperation& readOperation, const Data& recordData)
</ins><span class="cx"> {
</span><span class="cx"> ASSERT(!RunLoop::isMain());
</span><span class="cx">
</span><span class="cx"> RecordMetaData metaData;
</span><span class="cx"> Data headerData;
</span><span class="cx"> if (!decodeRecordHeader(recordData, metaData, headerData))
</span><del>- return nullptr;
</del><ins>+ return;
</ins><span class="cx">
</span><del>- if (metaData.key != key)
- return nullptr;
</del><ins>+ if (metaData.key != readOperation.key)
+ return;
</ins><span class="cx">
</span><span class="cx"> // Sanity check against time stamps in future.
</span><span class="cx"> auto timeStamp = std::chrono::system_clock::time_point(metaData.epochRelativeTimeStamp);
</span><span class="cx"> if (timeStamp > std::chrono::system_clock::now())
</span><del>- return nullptr;
</del><ins>+ return;
</ins><span class="cx">
</span><span class="cx"> Data bodyData;
</span><span class="cx"> if (metaData.isBodyInline) {
</span><span class="cx"> size_t bodyOffset = metaData.headerOffset + headerData.size();
</span><span class="cx"> if (bodyOffset + metaData.bodySize != recordData.size())
</span><del>- return nullptr;
</del><ins>+ return;
</ins><span class="cx"> bodyData = recordData.subrange(bodyOffset, metaData.bodySize);
</span><span class="cx"> if (metaData.bodyHash != computeSHA1(bodyData))
</span><del>- return nullptr;
- } else {
- auto bodyPath = bodyPathForKey(key, recordsPath());
- auto bodyBlob = m_blobStorage.get(bodyPath);
- if (metaData.bodySize != bodyBlob.data.size())
- return nullptr;
- if (metaData.bodyHash != bodyBlob.hash)
- return nullptr;
- bodyData = bodyBlob.data;
</del><ins>+ return;
</ins><span class="cx"> }
</span><span class="cx">
</span><del>- return std::make_unique<Storage::Record>(Storage::Record {
</del><ins>+ readOperation.expectedBodyHash = metaData.bodyHash;
+ readOperation.resultRecord = std::make_unique<Storage::Record>(Storage::Record {
</ins><span class="cx"> metaData.key,
</span><span class="cx"> timeStamp,
</span><span class="cx"> headerData,
</span><span class="lines">@@ -348,12 +360,17 @@
</span><span class="cx"> if (blob.data.isNull())
</span><span class="cx"> return { };
</span><span class="cx">
</span><del>- // Tell the client we now have a disk-backed map for this data.
- if (mappedBodyHandler) {
- RunLoop::main().dispatch([blob, mappedBodyHandler] {
</del><ins>+ auto hash = record.key.hash();
+ RunLoop::main().dispatch([this, blob, hash, mappedBodyHandler] {
+ if (m_bodyFilter)
+ m_bodyFilter->add(hash);
+ if (m_synchronizationInProgress)
+ m_bodyFilterHashesAddedDuringSynchronization.append(hash);
+
+ if (mappedBodyHandler)
</ins><span class="cx"> mappedBodyHandler(blob.data);
</span><del>- });
- }
</del><ins>+
+ });
</ins><span class="cx"> return blob;
</span><span class="cx"> }
</span><span class="cx">
</span><span class="lines">@@ -401,40 +418,60 @@
</span><span class="cx"> });
</span><span class="cx"> }
</span><span class="cx">
</span><del>-void Storage::dispatchReadOperation(const ReadOperation& read)
</del><ins>+void Storage::dispatchReadOperation(ReadOperation& readOperation)
</ins><span class="cx"> {
</span><span class="cx"> ASSERT(RunLoop::isMain());
</span><del>- ASSERT(m_activeReadOperations.contains(&read));
</del><ins>+ ASSERT(m_activeReadOperations.contains(&readOperation));
</ins><span class="cx">
</span><del>- auto recordsPath = this->recordsPath();
- auto recordPath = recordPathForKey(read.key, recordsPath);
</del><ins>+ auto recordPath = recordPathForKey(readOperation.key, recordsPath());
</ins><span class="cx">
</span><span class="cx"> RefPtr<IOChannel> channel = IOChannel::open(recordPath, IOChannel::Type::Read);
</span><del>- channel->read(0, std::numeric_limits<size_t>::max(), &ioQueue(), [this, &read](const Data& fileData, int error) {
- auto record = error ? nullptr : decodeRecord(fileData, read.key);
</del><ins>+ channel->read(0, std::numeric_limits<size_t>::max(), &ioQueue(), [this, &readOperation](const Data& fileData, int error) {
+ if (!error)
+ readRecord(readOperation, fileData);
+ finishReadOperation(readOperation);
+ });
</ins><span class="cx">
</span><del>- auto* recordPtr = record.release();
- RunLoop::main().dispatch([this, &read, recordPtr] {
- auto record = std::unique_ptr<Record>(recordPtr);
- finishReadOperation(read, WTF::move(record));
- });
</del><ins>+ bool shouldGetBodyBlob = !m_bodyFilter || m_bodyFilter->mayContain(readOperation.key.hash());
+ if (!shouldGetBodyBlob) {
+ finishReadOperation(readOperation);
+ return;
+ }
+
+ // Read the body blob in parallel with the record read.
+ ioQueue().dispatch([this, &readOperation] {
+ auto bodyPath = bodyPathForKey(readOperation.key, this->recordsPath());
+ readOperation.resultBodyBlob = m_blobStorage.get(bodyPath);
+ finishReadOperation(readOperation);
</ins><span class="cx"> });
</span><span class="cx"> }
</span><span class="cx">
</span><del>-void Storage::finishReadOperation(const ReadOperation& read, std::unique_ptr<Record> record)
</del><ins>+void Storage::finishReadOperation(ReadOperation& readOperation)
</ins><span class="cx"> {
</span><del>- ASSERT(RunLoop::isMain());
</del><ins>+ // Record and body blob reads must finish.
+ bool isComplete = ++readOperation.finishedCount == 2;
+ if (!isComplete)
+ return;
</ins><span class="cx">
</span><del>- bool success = read.completionHandler(WTF::move(record));
- if (success)
- updateFileModificationTime(recordPathForKey(read.key, recordsPath()));
- else
- remove(read.key);
- ASSERT(m_activeReadOperations.contains(&read));
- m_activeReadOperations.remove(&read);
- dispatchPendingReadOperations();
</del><ins>+ RunLoop::main().dispatch([this, &readOperation] {
+ if (readOperation.resultRecord && readOperation.resultRecord->body.isNull()) {
+ if (readOperation.resultBodyBlob.hash == readOperation.expectedBodyHash)
+ readOperation.resultRecord->body = readOperation.resultBodyBlob.data;
+ else
+ readOperation.resultRecord = nullptr;
+ }
</ins><span class="cx">
</span><del>- LOG(NetworkCacheStorage, "(NetworkProcess) read complete success=%d", success);
</del><ins>+ bool success = readOperation.completionHandler(WTF::move(readOperation.resultRecord));
+ if (success)
+ updateFileModificationTime(recordPathForKey(readOperation.key, recordsPath()));
+ else
+ remove(readOperation.key);
+ ASSERT(m_activeReadOperations.contains(&readOperation));
+ m_activeReadOperations.remove(&readOperation);
+ dispatchPendingReadOperations();
+
+ LOG(NetworkCacheStorage, "(NetworkProcess) read complete success=%d", success);
+ });
</ins><span class="cx"> }
</span><span class="cx">
</span><span class="cx"> void Storage::dispatchPendingReadOperations()
</span><span class="lines">@@ -498,42 +535,42 @@
</span><span class="cx"> return bodyData.size() > maximumInlineBodySize;
</span><span class="cx"> }
</span><span class="cx">
</span><del>-void Storage::dispatchWriteOperation(const WriteOperation& write)
</del><ins>+void Storage::dispatchWriteOperation(const WriteOperation& writeOperation)
</ins><span class="cx"> {
</span><span class="cx"> ASSERT(RunLoop::isMain());
</span><del>- ASSERT(m_activeWriteOperations.contains(&write));
</del><ins>+ ASSERT(m_activeWriteOperations.contains(&writeOperation));
</ins><span class="cx">
</span><span class="cx"> // This was added already when starting the store but filter might have been wiped.
</span><del>- addToContentsFilter(write.record.key);
</del><ins>+ addToRecordFilter(writeOperation.record.key);
</ins><span class="cx">
</span><del>- backgroundIOQueue().dispatch([this, &write] {
</del><ins>+ backgroundIOQueue().dispatch([this, &writeOperation] {
</ins><span class="cx"> auto recordsPath = this->recordsPath();
</span><del>- auto partitionPath = partitionPathForKey(write.record.key, recordsPath);
- auto recordPath = recordPathForKey(write.record.key, recordsPath);
</del><ins>+ auto partitionPath = partitionPathForKey(writeOperation.record.key, recordsPath);
+ auto recordPath = recordPathForKey(writeOperation.record.key, recordsPath);
</ins><span class="cx">
</span><span class="cx"> WebCore::makeAllDirectories(partitionPath);
</span><span class="cx">
</span><del>- bool shouldStoreAsBlob = shouldStoreBodyAsBlob(write.record.body);
- auto bodyBlob = shouldStoreAsBlob ? storeBodyAsBlob(write.record, write.mappedBodyHandler) : Nullopt;
</del><ins>+ bool shouldStoreAsBlob = shouldStoreBodyAsBlob(writeOperation.record.body);
+ auto bodyBlob = shouldStoreAsBlob ? storeBodyAsBlob(writeOperation.record, writeOperation.mappedBodyHandler) : Nullopt;
</ins><span class="cx">
</span><del>- auto recordData = encodeRecord(write.record, bodyBlob);
</del><ins>+ auto recordData = encodeRecord(writeOperation.record, bodyBlob);
</ins><span class="cx">
</span><span class="cx"> auto channel = IOChannel::open(recordPath, IOChannel::Type::Create);
</span><span class="cx"> size_t recordSize = recordData.size();
</span><del>- channel->write(0, recordData, nullptr, [this, &write, recordSize](int error) {
</del><ins>+ channel->write(0, recordData, nullptr, [this, &writeOperation, recordSize](int error) {
</ins><span class="cx"> // On error the entry still stays in the contents filter until next synchronization.
</span><del>- m_approximateSize += recordSize;
- finishWriteOperation(write);
</del><ins>+ m_approximateRecordsSize += recordSize;
+ finishWriteOperation(writeOperation);
</ins><span class="cx">
</span><span class="cx"> LOG(NetworkCacheStorage, "(NetworkProcess) write complete error=%d", error);
</span><span class="cx"> });
</span><span class="cx"> });
</span><span class="cx"> }
</span><span class="cx">
</span><del>-void Storage::finishWriteOperation(const WriteOperation& write)
</del><ins>+void Storage::finishWriteOperation(const WriteOperation& writeOperation)
</ins><span class="cx"> {
</span><del>- ASSERT(m_activeWriteOperations.contains(&write));
- m_activeWriteOperations.remove(&write);
</del><ins>+ ASSERT(m_activeWriteOperations.contains(&writeOperation));
+ m_activeWriteOperations.remove(&writeOperation);
</ins><span class="cx"> dispatchPendingWriteOperations();
</span><span class="cx">
</span><span class="cx"> shrinkIfNeeded();
</span><span class="lines">@@ -560,7 +597,7 @@
</span><span class="cx"> if (retrieveFromMemory(m_activeWriteOperations, key, completionHandler))
</span><span class="cx"> return;
</span><span class="cx">
</span><del>- m_pendingReadOperationsByPriority[priority].append(new ReadOperation { key, WTF::move(completionHandler) });
</del><ins>+ m_pendingReadOperationsByPriority[priority].append(new ReadOperation { key, WTF::move(completionHandler) } );
</ins><span class="cx"> dispatchPendingReadOperations();
</span><span class="cx"> }
</span><span class="cx">
</span><span class="lines">@@ -575,7 +612,7 @@
</span><span class="cx"> m_pendingWriteOperations.append(new WriteOperation { record, WTF::move(mappedBodyHandler) });
</span><span class="cx">
</span><span class="cx"> // Add key to the filter already here as we do lookups from the pending operations too.
</span><del>- addToContentsFilter(record.key);
</del><ins>+ addToRecordFilter(record.key);
</ins><span class="cx">
</span><span class="cx"> dispatchPendingWriteOperations();
</span><span class="cx"> }
</span><span class="lines">@@ -584,6 +621,8 @@
</span><span class="cx"> {
</span><span class="cx"> ioQueue().dispatch([this, flags, traverseHandler] {
</span><span class="cx"> traverseCacheFiles(recordsPath(), [this, flags, &traverseHandler](const String& fileName, const String& partitionPath) {
</span><ins>+ if (fileName.length() != Key::hashStringLength())
+ return;
</ins><span class="cx"> auto recordPath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</span><span class="cx">
</span><span class="cx"> RecordInfo info;
</span><span class="lines">@@ -634,9 +673,11 @@
</span><span class="cx"> ASSERT(RunLoop::isMain());
</span><span class="cx"> LOG(NetworkCacheStorage, "(NetworkProcess) clearing cache");
</span><span class="cx">
</span><del>- if (m_contentsFilter)
- m_contentsFilter->clear();
- m_approximateSize = 0;
</del><ins>+ if (m_recordFilter)
+ m_recordFilter->clear();
+ if (m_bodyFilter)
+ m_bodyFilter->clear();
+ m_approximateRecordsSize = 0;
</ins><span class="cx">
</span><span class="cx"> ioQueue().dispatch([this] {
</span><span class="cx"> auto recordsPath = this->recordsPath();
</span><span class="lines">@@ -708,6 +749,8 @@
</span><span class="cx"> backgroundIOQueue().dispatch([this] {
</span><span class="cx"> auto recordsPath = this->recordsPath();
</span><span class="cx"> traverseCacheFiles(recordsPath, [this](const String& fileName, const String& partitionPath) {
</span><ins>+ if (fileName.length() != Key::hashStringLength())
+ return;
</ins><span class="cx"> auto recordPath = WebCore::pathByAppendingComponent(partitionPath, fileName);
</span><span class="cx"> auto bodyPath = bodyPathForRecordPath(recordPath);
</span><span class="cx">
</span></span></pre></div>
<a id="trunkSourceWebKit2NetworkProcesscacheNetworkCacheStorageh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h (182953 => 182954)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h        2015-04-17 16:49:49 UTC (rev 182953)
+++ trunk/Source/WebKit2/NetworkProcess/cache/NetworkCacheStorage.h        2015-04-17 17:07:18 UTC (rev 182954)
</span><span class="lines">@@ -96,12 +96,22 @@
</span><span class="cx"> void shrink();
</span><span class="cx">
</span><span class="cx"> struct ReadOperation {
</span><del>- Key key;
- RetrieveCompletionHandler completionHandler;
</del><ins>+ ReadOperation(const Key& key, const RetrieveCompletionHandler& completionHandler)
+ : key(key)
+ , completionHandler(completionHandler)
+ { }
+
+ const Key key;
+ const RetrieveCompletionHandler completionHandler;
+
+ std::unique_ptr<Record> resultRecord;
+ SHA1::Digest expectedBodyHash;
+ BlobStorage::Blob resultBodyBlob;
+ std::atomic<unsigned> finishedCount { 0 };
</ins><span class="cx"> };
</span><del>- void dispatchReadOperation(const ReadOperation&);
</del><ins>+ void dispatchReadOperation(ReadOperation&);
</ins><span class="cx"> void dispatchPendingReadOperations();
</span><del>- void finishReadOperation(const ReadOperation&, std::unique_ptr<Record>);
</del><ins>+ void finishReadOperation(ReadOperation&);
</ins><span class="cx">
</span><span class="cx"> struct WriteOperation {
</span><span class="cx"> Record record;
</span><span class="lines">@@ -113,7 +123,7 @@
</span><span class="cx">
</span><span class="cx"> Optional<BlobStorage::Blob> storeBodyAsBlob(const Record&, const MappedBodyHandler&);
</span><span class="cx"> Data encodeRecord(const Record&, Optional<BlobStorage::Blob>);
</span><del>- std::unique_ptr<Record> decodeRecord(const Data&, const Key&);
</del><ins>+ void readRecord(ReadOperation&, const Data&);
</ins><span class="cx">
</span><span class="cx"> void updateFileModificationTime(const String& path);
</span><span class="cx">
</span><span class="lines">@@ -123,26 +133,28 @@
</span><span class="cx">
</span><span class="cx"> bool mayContain(const Key&) const;
</span><span class="cx">
</span><del>- void addToContentsFilter(const Key&);
</del><ins>+ void addToRecordFilter(const Key&);
</ins><span class="cx">
</span><span class="cx"> const String m_basePath;
</span><span class="cx"> const String m_recordsPath;
</span><span class="cx">
</span><span class="cx"> size_t m_capacity { std::numeric_limits<size_t>::max() };
</span><del>- size_t m_approximateSize { 0 };
</del><ins>+ size_t m_approximateRecordsSize { 0 };
</ins><span class="cx">
</span><span class="cx"> // 2^18 bit filter can support up to 26000 entries with false positive rate < 1%.
</span><span class="cx"> using ContentsFilter = BloomFilter<18>;
</span><del>- std::unique_ptr<ContentsFilter> m_contentsFilter;
</del><ins>+ std::unique_ptr<ContentsFilter> m_recordFilter;
+ std::unique_ptr<ContentsFilter> m_bodyFilter;
</ins><span class="cx">
</span><span class="cx"> bool m_synchronizationInProgress { false };
</span><span class="cx"> bool m_shrinkInProgress { false };
</span><span class="cx">
</span><del>- Vector<Key::HashType> m_contentsFilterHashesAddedDuringSynchronization;
</del><ins>+ Vector<Key::HashType> m_recordFilterHashesAddedDuringSynchronization;
+ Vector<Key::HashType> m_bodyFilterHashesAddedDuringSynchronization;
</ins><span class="cx">
</span><span class="cx"> static const int maximumRetrievePriority = 4;
</span><del>- Deque<std::unique_ptr<const ReadOperation>> m_pendingReadOperationsByPriority[maximumRetrievePriority + 1];
- HashSet<std::unique_ptr<const ReadOperation>> m_activeReadOperations;
</del><ins>+ Deque<std::unique_ptr<ReadOperation>> m_pendingReadOperationsByPriority[maximumRetrievePriority + 1];
+ HashSet<std::unique_ptr<ReadOperation>> m_activeReadOperations;
</ins><span class="cx">
</span><span class="cx"> Deque<std::unique_ptr<const WriteOperation>> m_pendingWriteOperations;
</span><span class="cx"> HashSet<std::unique_ptr<const WriteOperation>> m_activeWriteOperations;
</span></span></pre>
</div>
</div>
</body>
</html>