<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[279621] trunk/Source/bmalloc</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/279621">279621</a></dd>
<dt>Author</dt> <dd>msaboff@apple.com</dd>
<dt>Date</dt> <dd>2021-07-06 14:20:53 -0700 (Tue, 06 Jul 2021)</dd>
</dl>

<h3>Log Message</h3>
<pre>[bmalloc] Make adaptive scavenging more precise
https://bugs.webkit.org/show_bug.cgi?id=226237

Reviewed by Geoffrey Garen.

Reland the adaptive scavenger change for macOS with fix.

The bug happens when decommitting large ranges that don't have physical pages.
We'd call Heap::decommitLargeRange(), but would only add the range to the
decommitter list if there were physical pages associated with the range.
We would still perform all the other processing of a decommitted range,
including setting the range as not eligible for allocation or merging.
Had the range been added to the decommitter list, we would have set the
range as eligible after the physical pages were released to the OS.
The result is that the range could never be allocated, either by itself or as a
larger range merged with adjacent ranges.

The fix is to only perform decommit processing of large ranges if they have
physical pages.  We now check for physical pages before calling Heap::decommitLargeRange().
For ranges that don't have physical pages, they can stay on the free list as
eligible without having to round trip through decommit processing.

Made a minor change to the calculation of the physical end of the LargeRange created
and added to the free list in Heap::deallocateSmallChunk.  If the last page in the chunk
has a physical page, we set the physical end of the range to the end of the chunk.
This is for the case where there is an unusable partial small page at the end of the chunk.

* bmalloc/BPlatform.h:
* bmalloc/Heap.cpp:
(bmalloc::Heap::decommitLargeRange):
(bmalloc::Heap::scavenge):
(bmalloc::Heap::allocateSmallChunk):
(bmalloc::Heap::deallocateSmallChunk):
(bmalloc::Heap::allocateSmallPage):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::tryAllocateLargeChunk):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):
(bmalloc::Heap::scavengeToHighWatermark): Deleted.
* bmalloc/Heap.h:
* bmalloc/IsoDirectory.h:
* bmalloc/IsoDirectoryInlines.h:
(bmalloc::passedNumPages>::takeFirstEligible):
(bmalloc::passedNumPages>::scavenge):
(bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
* bmalloc/IsoHeapImpl.h:
* bmalloc/IsoHeapImplInlines.h:
(bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
* bmalloc/IsoSharedHeapInlines.h:
(bmalloc::IsoSharedHeap::allocateSlow):
* bmalloc/LargeMap.cpp:
(bmalloc::LargeMap::add):
* bmalloc/LargeRange.h:
(bmalloc::LargeRange::LargeRange):
(bmalloc::LargeRange::physicalEnd const):
(bmalloc::LargeRange::setPhysicalEnd):
(bmalloc::LargeRange::clearPhysicalEnd):
(bmalloc::LargeRange::setUsedSinceLastScavenge):
(bmalloc::merge):
(bmalloc::LargeRange::split const):
(): Deleted.
* bmalloc/Scavenger.cpp:
(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::scheduleIfUnderMemoryPressure):
(bmalloc::Scavenger::schedule):
(bmalloc::Scavenger::scavenge):
(bmalloc::Scavenger::threadRunLoop):
(bmalloc::Scavenger::didStartGrowing): Deleted.
(bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
(bmalloc::Scavenger::partialScavenge): Deleted.
* bmalloc/Scavenger.h:
* bmalloc/SmallPage.h:
(bmalloc::SmallPage::setUsedSinceLastScavenge):</pre>
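The control flow described above can be sketched as a minimal standalone model. This is an illustrative sketch only: `Range`, `scavenge`, and their members are made-up stand-ins for bmalloc's real classes, kept just small enough to show the ordering change (check for physical pages before, not during, decommit processing):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical, simplified stand-in for bmalloc's LargeRange.
struct Range {
    size_t totalPhysicalSize = 0;
    bool eligible = true;
    bool usedSinceLastScavenge = false;
    bool hasPhysicalPages() const { return totalPhysicalSize > 0; }
};

// Before the fix, every free range went through decommit processing and was
// marked not eligible, but ranges with no physical pages were never handed
// to the decommitter, so nothing ever flipped them back to eligible.
// After the fix, such ranges are skipped up front and stay eligible.
void scavenge(std::vector<Range>& freeRanges, size_t& deferredDecommits)
{
    for (Range& range : freeRanges) {
        if (!range.hasPhysicalPages())
            continue; // stays on the free list, still eligible

        if (range.usedSinceLastScavenge) {
            range.usedSinceLastScavenge = false;
            deferredDecommits++;
            continue; // adaptive scavenging: defer hot ranges
        }

        // Stand-in for Heap::decommitLargeRange(): release the pages and
        // park the range as not eligible until the decommit completes.
        range.totalPhysicalSize = 0;
        range.eligible = false;
    }
}
```

A range with no physical pages now exits the loop still eligible, so it can later be allocated or merged with its neighbors, which is exactly the property the bug broke.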

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourcebmallocChangeLog">trunk/Source/bmalloc/ChangeLog</a></li>
<li><a href="#trunkSourcebmallocbmallocBPlatformh">trunk/Source/bmalloc/bmalloc/BPlatform.h</a></li>
<li><a href="#trunkSourcebmallocbmallocHeapcpp">trunk/Source/bmalloc/bmalloc/Heap.cpp</a></li>
<li><a href="#trunkSourcebmallocbmallocHeaph">trunk/Source/bmalloc/bmalloc/Heap.h</a></li>
<li><a href="#trunkSourcebmallocbmallocIsoDirectoryh">trunk/Source/bmalloc/bmalloc/IsoDirectory.h</a></li>
<li><a href="#trunkSourcebmallocbmallocIsoDirectoryInlinesh">trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h</a></li>
<li><a href="#trunkSourcebmallocbmallocIsoHeapImplh">trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h</a></li>
<li><a href="#trunkSourcebmallocbmallocIsoHeapImplInlinesh">trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h</a></li>
<li><a href="#trunkSourcebmallocbmallocIsoSharedHeapInlinesh">trunk/Source/bmalloc/bmalloc/IsoSharedHeapInlines.h</a></li>
<li><a href="#trunkSourcebmallocbmallocLargeMapcpp">trunk/Source/bmalloc/bmalloc/LargeMap.cpp</a></li>
<li><a href="#trunkSourcebmallocbmallocLargeRangeh">trunk/Source/bmalloc/bmalloc/LargeRange.h</a></li>
<li><a href="#trunkSourcebmallocbmallocScavengercpp">trunk/Source/bmalloc/bmalloc/Scavenger.cpp</a></li>
<li><a href="#trunkSourcebmallocbmallocScavengerh">trunk/Source/bmalloc/bmalloc/Scavenger.h</a></li>
<li><a href="#trunkSourcebmallocbmallocSmallPageh">trunk/Source/bmalloc/bmalloc/SmallPage.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourcebmallocChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/ChangeLog (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/ChangeLog   2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/ChangeLog      2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -1,3 +1,80 @@
</span><ins>+2021-07-06  Michael Saboff  <msaboff@apple.com>
+
+        [bmalloc] Make adaptive scavenging more precise
+        https://bugs.webkit.org/show_bug.cgi?id=226237
+
+        Reviewed by Geoffrey Garen.
+
+        Reland the adaptive scavenger change for macOS with fix.
+
+        The bug happens when decommitting large ranges that don't have physical pages.
+        We'd call Heap::decommitLargeRange(), but would only add the range to the
+        decommitter list if there were physical pages associated with the range.
+        We would still perform all the other processing of a decommitted range,
+        including setting the range as not eligible for allocation or merging.
+        Had the range been added to the decommitter list, we would have set the
+        range as eligible after the physical pages were released to the OS.
+        The result is that the range could never be allocated, either by itself or as a
+        larger range merged with adjacent ranges.
+
+        The fix is to only perform decommit processing of large ranges if they have
+        physical pages.  We now check for physical pages before calling Heap::decommitLargeRange().
+        For ranges that don't have physical pages, they can stay on the free list as
+        eligible without having to round trip through decommit processing.
+
+        Made a minor change to the calculation of the physical end of the LargeRange created
+        and added to the free list in Heap::deallocateSmallChunk.  If the last page in the chunk
+        has a physical page, we set the physical end of the range to the end of the chunk.
+        This is for the case where there is an unusable partial small page at the end of the chunk.
+
+        * bmalloc/BPlatform.h:
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::decommitLargeRange):
+        (bmalloc::Heap::scavenge):
+        (bmalloc::Heap::allocateSmallChunk):
+        (bmalloc::Heap::deallocateSmallChunk):
+        (bmalloc::Heap::allocateSmallPage):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::tryAllocateLargeChunk):
+        (bmalloc::Heap::shrinkLarge):
+        (bmalloc::Heap::deallocateLarge):
+        (bmalloc::Heap::scavengeToHighWatermark): Deleted.
+        * bmalloc/Heap.h:
+        * bmalloc/IsoDirectory.h:
+        * bmalloc/IsoDirectoryInlines.h:
+        (bmalloc::passedNumPages>::takeFirstEligible):
+        (bmalloc::passedNumPages>::scavenge):
+        (bmalloc::passedNumPages>::scavengeToHighWatermark): Deleted.
+        * bmalloc/IsoHeapImpl.h:
+        * bmalloc/IsoHeapImplInlines.h:
+        (bmalloc::IsoHeapImpl<Config>::scavengeToHighWatermark): Deleted.
+        * bmalloc/IsoSharedHeapInlines.h:
+        (bmalloc::IsoSharedHeap::allocateSlow):
+        * bmalloc/LargeMap.cpp:
+        (bmalloc::LargeMap::add):
+        * bmalloc/LargeRange.h:
+        (bmalloc::LargeRange::LargeRange):
+        (bmalloc::LargeRange::physicalEnd const):
+        (bmalloc::LargeRange::setPhysicalEnd):
+        (bmalloc::LargeRange::clearPhysicalEnd):
+        (bmalloc::LargeRange::setUsedSinceLastScavenge):
+        (bmalloc::merge):
+        (bmalloc::LargeRange::split const):
+        (): Deleted.
+        * bmalloc/Scavenger.cpp:
+        (bmalloc::Scavenger::Scavenger):
+        (bmalloc::Scavenger::scheduleIfUnderMemoryPressure):
+        (bmalloc::Scavenger::schedule):
+        (bmalloc::Scavenger::scavenge):
+        (bmalloc::Scavenger::threadRunLoop):
+        (bmalloc::Scavenger::didStartGrowing): Deleted.
+        (bmalloc::Scavenger::timeSinceLastPartialScavenge): Deleted.
+        (bmalloc::Scavenger::partialScavenge): Deleted.
+        * bmalloc/Scavenger.h:
+        * bmalloc/SmallPage.h:
+        (bmalloc::SmallPage::setUsedSinceLastScavenge):
+
</ins><span class="cx"> 2021-06-23  Mark Lam  <mark.lam@apple.com>
</span><span class="cx"> 
</span><span class="cx">         Base Options::useWebAssemblyFastMemory's default value on Gigacage::hasCapacityToUseLargeGigacage.
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocBPlatformh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/BPlatform.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/BPlatform.h 2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/BPlatform.h    2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -309,12 +309,6 @@
</span><span class="cx"> /* This is used for debugging when hacking on how bmalloc calculates its physical footprint. */
</span><span class="cx"> #define ENABLE_PHYSICAL_PAGE_MAP 0
</span><span class="cx"> 
</span><del>-#if BPLATFORM(MAC)
-#define BUSE_PARTIAL_SCAVENGE 1
-#else
-#define BUSE_PARTIAL_SCAVENGE 0
-#endif
-
</del><span class="cx"> #if !defined(BUSE_PRECOMPUTED_CONSTANTS_VMPAGE4K)
</span><span class="cx"> #define BUSE_PRECOMPUTED_CONSTANTS_VMPAGE4K 1
</span><span class="cx"> #endif
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocHeapcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/Heap.cpp (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/Heap.cpp    2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/Heap.cpp       2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -65,7 +65,7 @@
</span><span class="cx">         m_gigacageSize = size;
</span><span class="cx">         ptrdiff_t offset = roundDownToMultipleOf(vmPageSize(), random[1] % (gigacageSize - size));
</span><span class="cx">         void* base = reinterpret_cast<unsigned char*>(gigacageBasePtr) + offset;
</span><del>-        m_largeFree.add(LargeRange(base, size, 0, 0));
</del><ins>+        m_largeFree.add(LargeRange(base, size, 0, 0, base));
</ins><span class="cx">     }
</span><span class="cx"> #endif
</span><span class="cx">     
</span><span class="lines">@@ -106,24 +106,23 @@
</span><span class="cx"> 
</span><span class="cx"> void Heap::decommitLargeRange(UniqueLockHolder&, LargeRange& range, BulkDecommit& decommitter)
</span><span class="cx"> {
</span><ins>+    BASSERT(range.hasPhysicalPages());
+
</ins><span class="cx">     m_footprint -= range.totalPhysicalSize();
</span><span class="cx">     m_freeableMemory -= range.totalPhysicalSize();
</span><del>-    decommitter.addLazy(range.begin(), range.size());
</del><ins>+    decommitter.addLazy(range.begin(), range.physicalEnd() - range.begin());
</ins><span class="cx">     m_hasPendingDecommits = true;
</span><span class="cx">     range.setStartPhysicalSize(0);
</span><span class="cx">     range.setTotalPhysicalSize(0);
</span><ins>+    range.clearPhysicalEnd();
</ins><span class="cx">     BASSERT(range.isEligibile());
</span><span class="cx">     range.setEligible(false);
</span><del>-#if ENABLE_PHYSICAL_PAGE_MAP 
</del><ins>+#if ENABLE_PHYSICAL_PAGE_MAP
</ins><span class="cx">     m_physicalPageMap.decommit(range.begin(), range.size());
</span><span class="cx"> #endif
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-void Heap::scavenge(UniqueLockHolder& lock, BulkDecommit& decommitter)
-#else
</del><span class="cx"> void Heap::scavenge(UniqueLockHolder& lock, BulkDecommit& decommitter, size_t& deferredDecommits)
</span><del>-#endif
</del><span class="cx"> {
</span><span class="cx">     for (auto& list : m_freePages) {
</span><span class="cx">         for (auto* chunk : list) {
</span><span class="lines">@@ -130,13 +129,11 @@
</span><span class="cx">             for (auto* page : chunk->freePages()) {
</span><span class="cx">                 if (!page->hasPhysicalPages())
</span><span class="cx">                     continue;
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">                 if (page->usedSinceLastScavenge()) {
</span><span class="cx">                     page->clearUsedSinceLastScavenge();
</span><span class="cx">                     deferredDecommits++;
</span><span class="cx">                     continue;
</span><span class="cx">                 }
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">                 size_t pageSize = bmalloc::pageSize(&list - &m_freePages[0]);
</span><span class="cx">                 size_t decommitSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize);
</span><span class="lines">@@ -157,37 +154,18 @@
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     for (LargeRange& range : m_largeFree) {
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-        m_highWatermark = std::min(m_highWatermark, static_cast<void*>(range.begin()));
-#else
</del><ins>+        if (!range.hasPhysicalPages())
+            continue;
</ins><span class="cx">         if (range.usedSinceLastScavenge()) {
</span><span class="cx">             range.clearUsedSinceLastScavenge();
</span><span class="cx">             deferredDecommits++;
</span><span class="cx">             continue;
</span><span class="cx">         }
</span><del>-#endif
</del><ins>+
</ins><span class="cx">         decommitLargeRange(lock, range, decommitter);
</span><span class="cx">     }
</span><del>-
-#if BUSE(PARTIAL_SCAVENGE)
-    m_freeableMemory = 0;
-#endif
</del><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-void Heap::scavengeToHighWatermark(UniqueLockHolder& lock, BulkDecommit& decommitter)
-{
-    void* newHighWaterMark = nullptr;
-    for (LargeRange& range : m_largeFree) {
-        if (range.begin() <= m_highWatermark)
-            newHighWaterMark = std::min(newHighWaterMark, static_cast<void*>(range.begin()));
-        else
-            decommitLargeRange(lock, range, decommitter);
-    }
-    m_highWatermark = newHighWaterMark;
-}
-#endif
-
</del><span class="cx"> void Heap::deallocateLineCache(UniqueLockHolder&, LineCache& lineCache)
</span><span class="cx"> {
</span><span class="cx">     for (auto& list : lineCache) {
</span><span class="lines">@@ -218,26 +196,15 @@
</span><span class="cx"> 
</span><span class="cx">         m_objectTypes.set(lock, chunk, ObjectType::Small);
</span><span class="cx"> 
</span><del>-        size_t accountedInFreeable = 0;
</del><span class="cx">         forEachPage(chunk, pageSize, [&](SmallPage* page) {
</span><span class="cx">             page->setHasPhysicalPages(true);
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">             page->setUsedSinceLastScavenge();
</span><del>-#endif
</del><span class="cx">             page->setHasFreeLines(lock, true);
</span><span class="cx">             chunk->freePages().push(page);
</span><del>-            accountedInFreeable += pageSize;
</del><span class="cx">         });
</span><span class="cx"> 
</span><del>-        m_freeableMemory += accountedInFreeable;
</del><ins>+        m_freeableMemory += chunkSize;
</ins><span class="cx"> 
</span><del>-        auto metadataSize = Chunk::metadataSize(pageSize);
-        vmDeallocatePhysicalPagesSloppy(chunk->address(sizeof(Chunk)), metadataSize - sizeof(Chunk));
-
-        auto decommitSize = chunkSize - metadataSize - accountedInFreeable;
-        if (decommitSize > 0)
-            vmDeallocatePhysicalPagesSloppy(chunk->address(chunkSize - decommitSize), decommitSize);
-
</del><span class="cx">         m_scavenger->schedule(0);
</span><span class="cx"> 
</span><span class="cx">         return chunk;
</span><span class="lines">@@ -253,24 +220,28 @@
</span><span class="cx">     
</span><span class="cx">     size_t size = m_largeAllocated.remove(chunk);
</span><span class="cx">     size_t totalPhysicalSize = size;
</span><ins>+    size_t chunkPageSize = pageSize(pageClass);
+    SmallPage* firstPageWithoutPhysicalPages = nullptr;
</ins><span class="cx"> 
</span><del>-    size_t accountedInFreeable = 0;
-
-    bool hasPhysicalPages = true;
-    forEachPage(chunk, pageSize(pageClass), [&](SmallPage* page) {
</del><ins>+    void* physicalEnd = chunk->address(chunk->metadataSize(chunkPageSize));
+    bool lastPageHasPhysicalPages = false;
+    forEachPage(chunk, chunkPageSize, [&](SmallPage* page) {
</ins><span class="cx">         size_t physicalSize = physicalPageSizeSloppy(page->begin()->begin(), pageSize(pageClass));
</span><span class="cx">         if (!page->hasPhysicalPages()) {
</span><span class="cx">             totalPhysicalSize -= physicalSize;
</span><del>-            hasPhysicalPages = false;
-        } else
-            accountedInFreeable += physicalSize;
</del><ins>+            lastPageHasPhysicalPages = false;
+            if (!firstPageWithoutPhysicalPages)
+                firstPageWithoutPhysicalPages = page;
+        } else {
+            physicalEnd = page->begin()->begin() + physicalSize;
+            lastPageHasPhysicalPages = true;
+        }
</ins><span class="cx">     });
</span><span class="cx"> 
</span><del>-    m_freeableMemory -= accountedInFreeable;
-    m_freeableMemory += totalPhysicalSize;
-
-    size_t startPhysicalSize = hasPhysicalPages ? size : 0;
-    m_largeFree.add(LargeRange(chunk, size, startPhysicalSize, totalPhysicalSize));
</del><ins>+    size_t startPhysicalSize = firstPageWithoutPhysicalPages ? firstPageWithoutPhysicalPages->begin()->begin() - chunk->bytes() : size;
+    physicalEnd = lastPageHasPhysicalPages ? chunk->address(size) : physicalEnd;
+    
+    m_largeFree.add(LargeRange(chunk, size, startPhysicalSize, totalPhysicalSize, physicalEnd));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> SmallPage* Heap::allocateSmallPage(UniqueLockHolder& lock, size_t sizeClass, LineCache& lineCache, FailureAction action)
</span><span class="lines">@@ -283,8 +254,6 @@
</span><span class="cx">     if (!m_lineCache[sizeClass].isEmpty())
</span><span class="cx">         return m_lineCache[sizeClass].popFront();
</span><span class="cx"> 
</span><del>-    m_scavenger->didStartGrowing();
-    
</del><span class="cx">     SmallPage* page = [&]() -> SmallPage* {
</span><span class="cx">         size_t pageClass = m_constants.pageClass(sizeClass);
</span><span class="cx">         
</span><span class="lines">@@ -314,9 +283,7 @@
</span><span class="cx">             m_physicalPageMap.commit(page->begin()->begin(), pageSize);
</span><span class="cx"> #endif
</span><span class="cx">         }
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">         page->setUsedSinceLastScavenge();
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">         return page;
</span><span class="cx">     }();
</span><span class="lines">@@ -525,6 +492,7 @@
</span><span class="cx">         vmAllocatePhysicalPagesSloppy(range.begin() + range.startPhysicalSize(), range.size() - range.startPhysicalSize());
</span><span class="cx">         range.setStartPhysicalSize(range.size());
</span><span class="cx">         range.setTotalPhysicalSize(range.size());
</span><ins>+        range.setPhysicalEnd(range.begin() + range.size());
</ins><span class="cx"> #if ENABLE_PHYSICAL_PAGE_MAP 
</span><span class="cx">         m_physicalPageMap.commit(range.begin(), range.size());
</span><span class="cx"> #endif
</span><span class="lines">@@ -560,8 +528,6 @@
</span><span class="cx"> 
</span><span class="cx">     BASSERT(isPowerOfTwo(alignment));
</span><span class="cx">     
</span><del>-    m_scavenger->didStartGrowing();
-    
</del><span class="cx">     size_t roundedSize = size ? roundUpToMultipleOf(largeAlignment, size) : largeAlignment;
</span><span class="cx">     ASSERT_OR_RETURN_ON_FAILURE(roundedSize >= size); // Check for overflow
</span><span class="cx">     size = roundedSize;
</span><span class="lines">@@ -590,9 +556,6 @@
</span><span class="cx">     m_freeableMemory -= range.totalPhysicalSize();
</span><span class="cx"> 
</span><span class="cx">     void* result = splitAndAllocate(lock, range, alignment, size).begin();
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = std::max(m_highWatermark, result);
-#endif
</del><span class="cx">     ASSERT_OR_RETURN_ON_FAILURE(result);
</span><span class="cx">     return result;
</span><span class="cx"> 
</span><span class="lines">@@ -621,7 +584,7 @@
</span><span class="cx">     PerProcess<Zone>::get()->addRange(Range(memory, size));
</span><span class="cx"> #endif
</span><span class="cx"> 
</span><del>-    return LargeRange(memory, size, 0, 0);
</del><ins>+    return LargeRange(memory, size, 0, 0, memory);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> size_t Heap::largeSize(UniqueLockHolder&, void* object)
</span><span class="lines">@@ -634,7 +597,7 @@
</span><span class="cx">     BASSERT(object.size() > newSize);
</span><span class="cx"> 
</span><span class="cx">     size_t size = m_largeAllocated.remove(object.begin());
</span><del>-    LargeRange range = LargeRange(object, size, size);
</del><ins>+    LargeRange range = LargeRange(object, size, size, object.begin() + size);
</ins><span class="cx">     splitAndAllocate(lock, range, alignment, newSize);
</span><span class="cx"> 
</span><span class="cx">     m_scavenger->schedule(size);
</span><span class="lines">@@ -643,7 +606,7 @@
</span><span class="cx"> void Heap::deallocateLarge(UniqueLockHolder&, void* object)
</span><span class="cx"> {
</span><span class="cx">     size_t size = m_largeAllocated.remove(object);
</span><del>-    m_largeFree.add(LargeRange(object, size, size, size));
</del><ins>+    m_largeFree.add(LargeRange(object, size, size, size, static_cast<char*>(object) + size));
</ins><span class="cx">     m_freeableMemory += size;
</span><span class="cx">     m_scavenger->schedule(size);
</span><span class="cx"> }
</span></span></pre></div>
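The `physicalEnd` bookkeeping added in the hunk above can be illustrated with a toy model. The names below mirror the patch but the struct is hypothetical, not bmalloc's actual `LargeRange`; it only shows the idea that the decommitter is handed the committed prefix of the range rather than its full virtual size:

```cpp
#include <cassert>
#include <cstddef>

// Toy stand-in for the patched LargeRange: it tracks one past the last byte
// that is backed by physical pages, so decommit covers only committed memory.
struct ToyLargeRange {
    char* begin;
    size_t size;        // virtual size of the range
    char* physicalEnd;  // one past the last physically backed byte

    // What the patch passes to decommitter.addLazy() instead of size.
    size_t decommitSpan() const { return static_cast<size_t>(physicalEnd - begin); }

    // Mirrors clearPhysicalEnd(): after decommit, nothing is backed.
    void clearPhysicalEnd() { physicalEnd = begin; }
};
```

A fully committed range (as in `deallocateLarge`) sets `physicalEnd` to `begin + size`, while a fresh uncommitted range (as in `tryAllocateLargeChunk`) sets it to `begin`, making its decommit span zero.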
<a id="trunkSourcebmallocbmallocHeaph"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/Heap.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/Heap.h      2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/Heap.h 2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -74,12 +74,7 @@
</span><span class="cx">     size_t largeSize(UniqueLockHolder&, void*);
</span><span class="cx">     void shrinkLarge(UniqueLockHolder&, const Range&, size_t);
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(UniqueLockHolder&, BulkDecommit&);
-    void scavenge(UniqueLockHolder&, BulkDecommit&);
-#else
</del><span class="cx">     void scavenge(UniqueLockHolder&, BulkDecommit&, size_t& deferredDecommits);
</span><del>-#endif
</del><span class="cx">     void scavenge(UniqueLockHolder&, BulkDecommit&, size_t& freed, size_t goal);
</span><span class="cx"> 
</span><span class="cx">     size_t freeableMemory(UniqueLockHolder&);
</span><span class="lines">@@ -147,10 +142,6 @@
</span><span class="cx"> #if ENABLE_PHYSICAL_PAGE_MAP 
</span><span class="cx">     PhysicalPageMap m_physicalPageMap;
</span><span class="cx"> #endif
</span><del>-    
-#if BUSE(PARTIAL_SCAVENGE)
-    void* m_highWatermark { nullptr };
-#endif
</del><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> inline void Heap::allocateSmallBumpRanges(
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocIsoDirectoryh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/IsoDirectory.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/IsoDirectory.h      2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectory.h 2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -76,9 +76,6 @@
</span><span class="cx">     // Iterate over all empty and committed pages, and put them into the vector. This also records the
</span><span class="cx">     // pages as being decommitted. It's the caller's job to do the actual decommitting.
</span><span class="cx">     void scavenge(const LockHolder&, Vector<DeferredDecommit>&);
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(const LockHolder&, Vector<DeferredDecommit>&);
-#endif
</del><span class="cx"> 
</span><span class="cx">     template<typename Func>
</span><span class="cx">     void forEachCommittedPage(const LockHolder&, const Func&);
</span><span class="lines">@@ -93,9 +90,6 @@
</span><span class="cx">     Bits<numPages> m_empty;
</span><span class="cx">     Bits<numPages> m_committed;
</span><span class="cx">     unsigned m_firstEligibleOrDecommitted { 0 };
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    unsigned m_highWatermark { 0 };
-#endif
</del><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> } // namespace bmalloc
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocIsoDirectoryInlinesh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h       2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/IsoDirectoryInlines.h  2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -50,12 +50,7 @@
</span><span class="cx">     if (pageIndex >= numPages)
</span><span class="cx">         return EligibilityKind::Full;
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = std::max(pageIndex, m_highWatermark);
-#endif
-
</del><span class="cx">     Scavenger& scavenger = *Scavenger::get();
</span><del>-    scavenger.didStartGrowing();
</del><span class="cx">     
</span><span class="cx">     IsoPage<Config>* page = m_pages[pageIndex].get();
</span><span class="cx">     
</span><span class="lines">@@ -146,25 +141,9 @@
</span><span class="cx">         [&] (size_t index) {
</span><span class="cx">             scavengePage(locker, index, decommits);
</span><span class="cx">         });
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    m_highWatermark = 0;
-#endif
</del><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
</del><span class="cx"> template<typename Config, unsigned passedNumPages>
</span><del>-void IsoDirectory<Config, passedNumPages>::scavengeToHighWatermark(const LockHolder& locker, Vector<DeferredDecommit>& decommits)
-{
-    (m_empty & m_committed).forEachSetBit(
-        [&] (size_t index) {
-            if (index > m_highWatermark)
-                scavengePage(locker, index, decommits);
-        });
-    m_highWatermark = 0;
-}
-#endif
-
-template<typename Config, unsigned passedNumPages>
</del><span class="cx"> template<typename Func>
</span><span class="cx"> void IsoDirectory<Config, passedNumPages>::forEachCommittedPage(const LockHolder&, const Func& func)
</span><span class="cx"> {
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocIsoHeapImplh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h       2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImpl.h  2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -49,9 +49,6 @@
</span><span class="cx">     virtual ~IsoHeapImplBase();
</span><span class="cx">     
</span><span class="cx">     virtual void scavenge(Vector<DeferredDecommit>&) = 0;
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    virtual void scavengeToHighWatermark(Vector<DeferredDecommit>&) = 0;
-#endif
</del><span class="cx">     
</span><span class="cx">     void scavengeNow();
</span><span class="cx">     static void finishScavenging(Vector<DeferredDecommit>&);
</span><span class="lines">@@ -112,9 +109,6 @@
</span><span class="cx">     void didBecomeEligibleOrDecommited(const LockHolder&, IsoDirectory<Config, IsoDirectoryPage<Config>::numPages>*);
</span><span class="cx">     
</span><span class="cx">     void scavenge(Vector<DeferredDecommit>&) override;
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    void scavengeToHighWatermark(Vector<DeferredDecommit>&) override;
-#endif
</del><span class="cx"> 
</span><span class="cx">     unsigned allocatorOffset();
</span><span class="cx">     unsigned deallocatorOffset();
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocIsoHeapImplInlinesh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h        2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/IsoHeapImplInlines.h   2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -121,21 +121,6 @@
</span><span class="cx">     m_directoryHighWatermark = 0;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-template<typename Config>
-void IsoHeapImpl<Config>::scavengeToHighWatermark(Vector<DeferredDecommit>& decommits)
-{
-    LockHolder locker(this->lock);
-    if (!m_directoryHighWatermark)
-        m_inlineDirectory.scavengeToHighWatermark(locker, decommits);
-    for (IsoDirectoryPage<Config>* page = m_headDirectory.get(); page; page = page->next) {
-        if (page->index() >= m_directoryHighWatermark)
-            page->payload.scavengeToHighWatermark(locker, decommits);
-    }
-    m_directoryHighWatermark = 0;
-}
-#endif
-
</del><span class="cx"> inline size_t IsoHeapImplBase::freeableMemory()
</span><span class="cx"> {
</span><span class="cx">     return m_freeableMemory;
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocIsoSharedHeapInlinesh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/IsoSharedHeapInlines.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/IsoSharedHeapInlines.h      2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/IsoSharedHeapInlines.h 2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -63,7 +63,6 @@
</span><span class="cx"> BNO_INLINE void* IsoSharedHeap::allocateSlow(const LockHolder& locker, bool abortOnFailure)
</span><span class="cx"> {
</span><span class="cx">     Scavenger& scavenger = *Scavenger::get();
</span><del>-    scavenger.didStartGrowing();
</del><span class="cx">     scavenger.scheduleIfUnderMemoryPressure(IsoSharedPage::pageSize);
</span><span class="cx"> 
</span><span class="cx">     IsoSharedPage* page = IsoSharedPage::tryCreate();
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocLargeMapcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/LargeMap.cpp (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/LargeMap.cpp        2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/LargeMap.cpp   2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -76,9 +76,7 @@
</span><span class="cx">         merged = merge(merged, m_free.pop(i--));
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">     merged.setUsedSinceLastScavenge();
</span><del>-#endif
</del><span class="cx">     m_free.push(merged);
</span><span class="cx"> }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocLargeRangeh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/LargeRange.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/LargeRange.h        2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/LargeRange.h   2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -37,40 +37,29 @@
</span><span class="cx">         : Range()
</span><span class="cx">         , m_startPhysicalSize(0)
</span><span class="cx">         , m_totalPhysicalSize(0)
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><ins>+        , m_physicalEnd(begin())
</ins><span class="cx">         , m_isEligible(true)
</span><span class="cx">         , m_usedSinceLastScavenge(false)
</span><del>-#endif
</del><span class="cx">     {
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    LargeRange(const Range& other, size_t startPhysicalSize, size_t totalPhysicalSize)
</del><ins>+    LargeRange(const Range& other, size_t startPhysicalSize, size_t totalPhysicalSize, void* physicalEnd)
</ins><span class="cx">         : Range(other)
</span><span class="cx">         , m_startPhysicalSize(startPhysicalSize)
</span><span class="cx">         , m_totalPhysicalSize(totalPhysicalSize)
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><ins>+        , m_physicalEnd(static_cast<char*>(physicalEnd))
</ins><span class="cx">         , m_isEligible(true)
</span><span class="cx">         , m_usedSinceLastScavenge(false)
</span><del>-#endif
</del><span class="cx">     {
</span><span class="cx">         BASSERT(this->size() >= this->totalPhysicalSize());
</span><span class="cx">         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize)
</del><ins>+    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize, void* physicalEnd, bool usedSinceLastScavenge = false)
</ins><span class="cx">         : Range(begin, size)
</span><span class="cx">         , m_startPhysicalSize(startPhysicalSize)
</span><span class="cx">         , m_totalPhysicalSize(totalPhysicalSize)
</span><del>-    {
-        BASSERT(this->size() >= this->totalPhysicalSize());
-        BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
-    }
-#else
-    LargeRange(void* begin, size_t size, size_t startPhysicalSize, size_t totalPhysicalSize, bool usedSinceLastScavenge = false)
-        : Range(begin, size)
-        , m_startPhysicalSize(startPhysicalSize)
-        , m_totalPhysicalSize(totalPhysicalSize)
</del><ins>+        , m_physicalEnd(static_cast<char*>(physicalEnd))
</ins><span class="cx">         , m_isEligible(true)
</span><span class="cx">         , m_usedSinceLastScavenge(usedSinceLastScavenge)
</span><span class="cx">     {
</span><span class="lines">@@ -77,7 +66,6 @@
</span><span class="cx">         BASSERT(this->size() >= this->totalPhysicalSize());
</span><span class="cx">         BASSERT(this->totalPhysicalSize() >= this->startPhysicalSize());
</span><span class="cx">     }
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">     // Returns a lower bound on physical size at the start of the range. Ranges that
</span><span class="cx">     // span non-physical fragments use this number to remember the physical size of
</span><span class="lines">@@ -99,16 +87,21 @@
</span><span class="cx">     size_t totalPhysicalSize() const { return m_totalPhysicalSize; }
</span><span class="cx">     void setTotalPhysicalSize(size_t totalPhysicalSize) { m_totalPhysicalSize = totalPhysicalSize; }
</span><span class="cx"> 
</span><ins>+    // This is the address past the end of physical memory in this range.
+    // When decomitting this range, we decommitt [begin(), physicalEnd).
+    char* physicalEnd() const { return m_physicalEnd; }
+    void setPhysicalEnd(void* physicalEnd) { m_physicalEnd = static_cast<char*>(physicalEnd); }
+    void clearPhysicalEnd() { m_physicalEnd = begin(); }
+    bool hasPhysicalPages() { return m_physicalEnd != begin(); }
+
</ins><span class="cx">     std::pair<LargeRange, LargeRange> split(size_t) const;
</span><span class="cx"> 
</span><span class="cx">     void setEligible(bool eligible) { m_isEligible = eligible; }
</span><span class="cx">     bool isEligibile() const { return m_isEligible; }
</span><span class="cx"> 
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">     bool usedSinceLastScavenge() const { return m_usedSinceLastScavenge; }
</span><span class="cx">     void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
</span><span class="cx">     void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">     bool operator<(const void* other) const { return begin() < other; }
</span><span class="cx">     bool operator<(const LargeRange& other) const { return begin() < other.begin(); }
</span><span class="lines">@@ -116,12 +109,9 @@
</span><span class="cx"> private:
</span><span class="cx">     size_t m_startPhysicalSize;
</span><span class="cx">     size_t m_totalPhysicalSize;
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    bool m_isEligible { true };
-#else
</del><ins>+    char* m_physicalEnd;
</ins><span class="cx">     unsigned m_isEligible: 1;
</span><span class="cx">     unsigned m_usedSinceLastScavenge: 1;
</span><del>-#endif
</del><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> inline bool canMerge(const LargeRange& a, const LargeRange& b)
</span><span class="lines">@@ -144,18 +134,17 @@
</span><span class="cx"> inline LargeRange merge(const LargeRange& a, const LargeRange& b)
</span><span class="cx"> {
</span><span class="cx">     const LargeRange& left = std::min(a, b);
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><ins>+    const LargeRange& right = std::max(a, b);
+    void* physicalEnd = right.totalPhysicalSize() ? right.physicalEnd() : left.physicalEnd();
</ins><span class="cx">     bool mergedUsedSinceLastScavenge = a.usedSinceLastScavenge() || b.usedSinceLastScavenge();
</span><del>-#endif
</del><span class="cx">     if (left.size() == left.startPhysicalSize()) {
</span><span class="cx">         return LargeRange(
</span><span class="cx">             left.begin(),
</span><span class="cx">             a.size() + b.size(),
</span><span class="cx">             a.startPhysicalSize() + b.startPhysicalSize(),
</span><del>-            a.totalPhysicalSize() + b.totalPhysicalSize()
-#if !BUSE(PARTIAL_SCAVENGE)
-            , mergedUsedSinceLastScavenge
-#endif
</del><ins>+            a.totalPhysicalSize() + b.totalPhysicalSize(),
+            physicalEnd,
+            mergedUsedSinceLastScavenge
</ins><span class="cx">         );
</span><span class="cx">         
</span><span class="cx">     }
</span><span class="lines">@@ -164,10 +153,9 @@
</span><span class="cx">         left.begin(),
</span><span class="cx">         a.size() + b.size(),
</span><span class="cx">         left.startPhysicalSize(),
</span><del>-        a.totalPhysicalSize() + b.totalPhysicalSize()
-#if !BUSE(PARTIAL_SCAVENGE)
-        , mergedUsedSinceLastScavenge
-#endif
</del><ins>+        a.totalPhysicalSize() + b.totalPhysicalSize(),
+        physicalEnd,
+        mergedUsedSinceLastScavenge
</ins><span class="cx">     );
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -175,11 +163,12 @@
</span><span class="cx"> {
</span><span class="cx">     BASSERT(leftSize <= this->size());
</span><span class="cx">     size_t rightSize = this->size() - leftSize;
</span><ins>+    char* physicalEnd = this->physicalEnd();
</ins><span class="cx"> 
</span><span class="cx">     if (leftSize <= startPhysicalSize()) {
</span><span class="cx">         BASSERT(totalPhysicalSize() >= leftSize);
</span><del>-        LargeRange left(begin(), leftSize, leftSize, leftSize);
-        LargeRange right(left.end(), rightSize, startPhysicalSize() - leftSize, totalPhysicalSize() - leftSize);
</del><ins>+        LargeRange left(begin(), leftSize, leftSize, leftSize, std::min(physicalEnd, begin() + leftSize));
+        LargeRange right(left.end(), rightSize, startPhysicalSize() - leftSize, totalPhysicalSize() - leftSize, std::max(physicalEnd, left.end()));
</ins><span class="cx">         return std::make_pair(left, right);
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="lines">@@ -194,8 +183,8 @@
</span><span class="cx">         rightTotalPhysicalSize = rightSize;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    LargeRange left(begin(), leftSize, startPhysicalSize(), leftTotalPhysicalSize);
-    LargeRange right(left.end(), rightSize, 0, rightTotalPhysicalSize);
</del><ins>+    LargeRange left(begin(), leftSize, startPhysicalSize(), leftTotalPhysicalSize, std::min(physicalEnd, begin() + leftSize));
+    LargeRange right(left.end(), rightSize, 0, rightTotalPhysicalSize, std::max(physicalEnd, left.end()));
</ins><span class="cx">     return std::make_pair(left, right);
</span><span class="cx"> }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocScavengercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/Scavenger.cpp (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/Scavenger.cpp       2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.cpp  2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -85,11 +85,7 @@
</span><span class="cx">     dispatch_resume(m_pressureHandlerDispatchSource);
</span><span class="cx">     dispatch_release(queue);
</span><span class="cx"> #endif
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    m_waitTime = std::chrono::milliseconds(m_isInMiniMode ? 200 : 2000);
-#else
</del><span class="cx">     m_waitTime = std::chrono::milliseconds(10);
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">     m_thread = std::thread(&threadEntryPoint, this);
</span><span class="cx"> }
</span><span class="lines">@@ -120,12 +116,6 @@
</span><span class="cx">     m_condition.notify_all();
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void Scavenger::didStartGrowing()
-{
-    // We don't really need to lock here, since this is just a heuristic.
-    m_isProbablyGrowing = true;
-}
-
</del><span class="cx"> void Scavenger::scheduleIfUnderMemoryPressure(size_t bytes)
</span><span class="cx"> {
</span><span class="cx">     LockHolder lock(mutex());
</span><span class="lines">@@ -146,7 +136,6 @@
</span><span class="cx">     if (!isUnderMemoryPressure())
</span><span class="cx">         return;
</span><span class="cx"> 
</span><del>-    m_isProbablyGrowing = false;
</del><span class="cx">     run(lock);
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -158,7 +147,6 @@
</span><span class="cx">     if (willRunSoon())
</span><span class="cx">         return;
</span><span class="cx">     
</span><del>-    m_isProbablyGrowing = false;
</del><span class="cx">     runSoon(lock);
</span><span class="cx"> }
</span><span class="cx"> 
</span><span class="lines">@@ -187,14 +175,6 @@
</span><span class="cx">     return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastFullScavengeTime);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-std::chrono::milliseconds Scavenger::timeSinceLastPartialScavenge()
-{
-    UniqueLockHolder lock(mutex());
-    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::steady_clock::now() - m_lastPartialScavengeTime);
-}
-#endif
-
</del><span class="cx"> void Scavenger::enableMiniMode()
</span><span class="cx"> {
</span><span class="cx">     m_isInMiniMode = true; // We just store to this racily. The scavenger thread will eventually pick up the right value.
</span><span class="lines">@@ -220,25 +200,17 @@
</span><span class="cx"> 
</span><span class="cx">         {
</span><span class="cx">             PrintTime printTime("\nfull scavenge under lock time");
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">             size_t deferredDecommits = 0;
</span><del>-#endif
</del><span class="cx">             UniqueLockHolder lock(Heap::mutex());
</span><span class="cx">             for (unsigned i = numHeaps; i--;) {
</span><span class="cx">                 if (!isActiveHeapKind(static_cast<HeapKind>(i)))
</span><span class="cx">                     continue;
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-                PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter);
-#else
</del><span class="cx">                 PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock, decommitter, deferredDecommits);
</span><del>-#endif
</del><span class="cx">             }
</span><span class="cx">             decommitter.processEager();
</span><span class="cx"> 
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">             if (deferredDecommits)
</span><span class="cx">                 m_state = State::RunSoon;
</span><del>-#endif
</del><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         {
</span><span class="lines">@@ -279,78 +251,6 @@
</span><span class="cx">     }
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-void Scavenger::partialScavenge()
-{
-    if (!m_isEnabled)
-        return;
-
-    UniqueLockHolder lock(m_scavengingMutex);
-
-    if (verbose) {
-        fprintf(stderr, "--------------------------------\n");
-        fprintf(stderr, "--before partial scavenging--\n");
-        dumpStats();
-    }
-
-    {
-        BulkDecommit decommitter;
-        {
-            PrintTime printTime("\npartialScavenge under lock time");
-            UniqueLockHolder lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                size_t freeableMemory = heap.freeableMemory(lock);
-                if (freeableMemory < 4 * MB)
-                    continue;
-                heap.scavengeToHighWatermark(lock, decommitter);
-            }
-
-            decommitter.processEager();
-        }
-
-        {
-            PrintTime printTime("partialScavenge lazy decommit time");
-            decommitter.processLazy();
-        }
-
-        {
-            PrintTime printTime("partialScavenge mark all as eligible time");
-            LockHolder lock(Heap::mutex());
-            for (unsigned i = numHeaps; i--;) {
-                if (!isActiveHeapKind(static_cast<HeapKind>(i)))
-                    continue;
-                Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(i);
-                heap.markAllLargeAsEligibile(lock);
-            }
-        }
-    }
-
-    {
-        RELEASE_BASSERT(!m_deferredDecommits.size());
-        AllIsoHeaps::get()->forEach(
-            [&] (IsoHeapImplBase& heap) {
-                heap.scavengeToHighWatermark(m_deferredDecommits);
-            });
-        IsoHeapImplBase::finishScavenging(m_deferredDecommits);
-        m_deferredDecommits.shrink(0);
-    }
-
-    if (verbose) {
-        fprintf(stderr, "--after partial scavenging--\n");
-        dumpStats();
-        fprintf(stderr, "--------------------------------\n");
-    }
-
-    {
-        UniqueLockHolder lock(mutex());
-        m_lastPartialScavengeTime = std::chrono::steady_clock::now();
-    }
-}
-#endif
-
</del><span class="cx"> size_t Scavenger::freeableMemory()
</span><span class="cx"> {
</span><span class="cx">     size_t result = 0;
</span><span class="lines">@@ -432,69 +332,6 @@
</span><span class="cx">             fprintf(stderr, "--------------------------------\n");
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-        enum class ScavengeMode {
-            None,
-            Partial,
-            Full
-        };
-
-        size_t freeableMemory = this->freeableMemory();
-
-        ScavengeMode scavengeMode = [&] {
-            auto timeSinceLastFullScavenge = this->timeSinceLastFullScavenge();
-            auto timeSinceLastPartialScavenge = this->timeSinceLastPartialScavenge();
-            auto timeSinceLastScavenge = std::min(timeSinceLastPartialScavenge, timeSinceLastFullScavenge);
-
-            if (isUnderMemoryPressure() && freeableMemory > 1 * MB && timeSinceLastScavenge > std::chrono::milliseconds(5))
-                return ScavengeMode::Full;
-
-            if (!m_isProbablyGrowing) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(1000) && !m_isInMiniMode)
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
-
-            if (m_isInMiniMode) {
-                if (timeSinceLastFullScavenge < std::chrono::milliseconds(200))
-                    return ScavengeMode::Partial;
-                return ScavengeMode::Full;
-            }
-
-#if BCPU(X86_64)
-            auto partialScavengeInterval = std::chrono::milliseconds(12000);
-#else
-            auto partialScavengeInterval = std::chrono::milliseconds(8000);
-#endif
-            if (timeSinceLastScavenge < partialScavengeInterval) {
-                // Rate limit partial scavenges.
-                return ScavengeMode::None;
-            }
-            if (freeableMemory < 25 * MB)
-                return ScavengeMode::None;
-            if (5 * freeableMemory < footprint())
-                return ScavengeMode::None;
-            return ScavengeMode::Partial;
-        }();
-
-        m_isProbablyGrowing = false;
-
-        switch (scavengeMode) {
-        case ScavengeMode::None: {
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Partial: {
-            partialScavenge();
-            runSoon();
-            break;
-        }
-        case ScavengeMode::Full: {
-            scavenge();
-            break;
-        }
-        }
-#else
</del><span class="cx">         std::chrono::steady_clock::time_point start { std::chrono::steady_clock::now() };
</span><span class="cx">         
</span><span class="cx">         scavenge();
</span><span class="lines">@@ -509,14 +346,13 @@
</span><span class="cx">         // FIXME: We need to investigate mini-mode's adjustment.
</span><span class="cx">         // https://bugs.webkit.org/show_bug.cgi?id=203987
</span><span class="cx">         if (!m_isInMiniMode) {
</span><del>-            timeSpentScavenging *= 150;
</del><ins>+            timeSpentScavenging *= s_newWaitMultiplier;
</ins><span class="cx">             std::chrono::milliseconds newWaitTime = std::chrono::duration_cast<std::chrono::milliseconds>(timeSpentScavenging);
</span><del>-            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(100)), std::chrono::milliseconds(10000));
</del><ins>+            m_waitTime = std::min(std::max(newWaitTime, std::chrono::milliseconds(s_minWaitTimeMilliseconds)), std::chrono::milliseconds(s_maxWaitTimeMilliseconds));
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         if (verbose)
</span><span class="cx">             fprintf(stderr, "new wait time %lldms\n", static_cast<long long int>(m_waitTime.count()));
</span><del>-#endif
</del><span class="cx">     }
</span><span class="cx"> }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocScavengerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/Scavenger.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/Scavenger.h 2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/Scavenger.h    2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -59,7 +59,6 @@
</span><span class="cx">     bool willRunSoon() { return m_state > State::Sleep; }
</span><span class="cx">     void runSoon();
</span><span class="cx">     
</span><del>-    BEXPORT void didStartGrowing();
</del><span class="cx">     BEXPORT void scheduleIfUnderMemoryPressure(size_t bytes);
</span><span class="cx">     BEXPORT void schedule(size_t bytes);
</span><span class="cx"> 
</span><span class="lines">@@ -92,15 +91,10 @@
</span><span class="cx">     void setThreadName(const char*);
</span><span class="cx"> 
</span><span class="cx">     std::chrono::milliseconds timeSinceLastFullScavenge();
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    std::chrono::milliseconds timeSinceLastPartialScavenge();
-    void partialScavenge();
-#endif
</del><span class="cx"> 
</span><span class="cx">     std::atomic<State> m_state { State::Sleep };
</span><span class="cx">     size_t m_scavengerBytes { 0 };
</span><span class="cx">     std::chrono::milliseconds m_waitTime;
</span><del>-    bool m_isProbablyGrowing { false };
</del><span class="cx">     bool m_isInMiniMode { false };
</span><span class="cx">     
</span><span class="cx">     Mutex m_scavengingMutex;
</span><span class="lines">@@ -108,15 +102,22 @@
</span><span class="cx"> 
</span><span class="cx">     std::thread m_thread;
</span><span class="cx">     std::chrono::steady_clock::time_point m_lastFullScavengeTime { std::chrono::steady_clock::now() };
</span><del>-#if BUSE(PARTIAL_SCAVENGE)
-    std::chrono::steady_clock::time_point m_lastPartialScavengeTime { std::chrono::steady_clock::now() };
-#endif
</del><span class="cx"> 
</span><span class="cx"> #if BOS(DARWIN)
</span><span class="cx">     dispatch_source_t m_pressureHandlerDispatchSource;
</span><span class="cx">     qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
</span><span class="cx"> #endif
</span><del>-    
</del><ins>+
+#if BPLATFORM(MAC)
+    const unsigned s_newWaitMultiplier = 300;
+    const unsigned s_minWaitTimeMilliseconds = 750;
+    const unsigned s_maxWaitTimeMilliseconds = 20000;
+#else
+    const unsigned s_newWaitMultiplier = 150;
+    const unsigned s_minWaitTimeMilliseconds = 100;
+    const unsigned s_maxWaitTimeMilliseconds = 10000;
+#endif
+
</ins><span class="cx">     Vector<DeferredDecommit> m_deferredDecommits;
</span><span class="cx">     bool m_isEnabled { true };
</span><span class="cx"> };
</span></span></pre></div>
<a id="trunkSourcebmallocbmallocSmallPageh"></a>
<div class="modfile"><h4>Modified: trunk/Source/bmalloc/bmalloc/SmallPage.h (279620 => 279621)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/bmalloc/bmalloc/SmallPage.h 2021-07-06 21:12:55 UTC (rev 279620)
+++ trunk/Source/bmalloc/bmalloc/SmallPage.h    2021-07-06 21:20:53 UTC (rev 279621)
</span><span class="lines">@@ -51,11 +51,9 @@
</span><span class="cx">     bool hasPhysicalPages() { return m_hasPhysicalPages; }
</span><span class="cx">     void setHasPhysicalPages(bool hasPhysicalPages) { m_hasPhysicalPages = hasPhysicalPages; }
</span><span class="cx"> 
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">     bool usedSinceLastScavenge() { return m_usedSinceLastScavenge; }
</span><span class="cx">     void clearUsedSinceLastScavenge() { m_usedSinceLastScavenge = false; }
</span><span class="cx">     void setUsedSinceLastScavenge() { m_usedSinceLastScavenge = true; }
</span><del>-#endif
</del><span class="cx"> 
</span><span class="cx">     SmallLine* begin();
</span><span class="cx"> 
</span><span class="lines">@@ -65,9 +63,7 @@
</span><span class="cx"> private:
</span><span class="cx">     unsigned char m_hasFreeLines: 1;
</span><span class="cx">     unsigned char m_hasPhysicalPages: 1;
</span><del>-#if !BUSE(PARTIAL_SCAVENGE)
</del><span class="cx">     unsigned char m_usedSinceLastScavenge: 1;
</span><del>-#endif
</del><span class="cx">     unsigned char m_refCount: 7;
</span><span class="cx">     unsigned char m_sizeClass;
</span><span class="cx">     unsigned char m_slide;
</span></span></pre>
</div>
</div>

</body>
</html>