[webkit-reviews] review denied: [Bug 192389] bmalloc uses more memory on iOS compared to macOS due to physical page size differences : [Attachment 361315] Patch with some pref improvements.
bugzilla-daemon at webkit.org
Wed Feb 6 14:56:50 PST 2019
Mark Lam <mark.lam at apple.com> has denied Michael Saboff <msaboff at apple.com>'s
request for review:
Bug 192389: bmalloc uses more memory on iOS compared to macOS due to physical
page size differences
https://bugs.webkit.org/show_bug.cgi?id=192389
Attachment 361315: Patch with some pref improvements.
https://bugs.webkit.org/attachment.cgi?id=361315&action=review
--- Comment #12 from Mark Lam <mark.lam at apple.com> ---
Comment on attachment 361315
--> https://bugs.webkit.org/attachment.cgi?id=361315
Patch with some pref improvements.
View in context: https://bugs.webkit.org/attachment.cgi?id=361315&action=review
r- because I think there are many fixes that should be done.
> Source/bmalloc/bmalloc/Heap.cpp:47
> +static_assert(bmalloc::isPowerOfTwo(smallPageSize), "");
No need for the bmalloc:: qualifier. We're in the bmalloc namespace.
> Source/bmalloc/bmalloc/Heap.cpp:276
> +void Heap::tryDecommitSmallPageSlow(std::lock_guard<Mutex>&, BulkDecommit& decommitter, SmallPage* smallPage, size_t pageSize)
> +{
> + Chunk* chunk = Chunk::get(smallPage);
I suggest you pass in the chunk since it's always known in the caller.
> Source/bmalloc/bmalloc/Heap.cpp:300
> + SmallPage* firstPageInRange = chunk->page(chunk->offset(physicalPagesBegin));
> + SmallPage* lastPageInRange = chunk->page(chunk->offset(physicalPagesBegin + pageSize * (smallPageCount - 1)));
> +
> + for (auto* page : chunk->freePages()) {
> + if (page >= firstPageInRange && page <= lastPageInRange) {
> + matchingPageCount++;
> + if (matchingPageCount == smallPageCount) {
> + firstPageToDecommit = firstPageInRange;
> + pagesToDecommit = matchingPageCount;
> + break;
> + }
> + }
> + }
> +
> + if (!firstPageToDecommit || !pagesToDecommit)
> + return;
Instead of doing this, can we instead do the following:
1. In class SmallPage, add a bit: isCommitted.
2. In Heap::commitSmallPage():
If the SmallPage does not have a physical page yet, allocate the physical
page and set the isCommitted bits of all SmallPages within it to false.
Always set the requested SmallPage's isCommitted bit to true.
3. Here in Heap::tryDecommitSmallPage():
Assert that all SmallPages within this physical page's bounds have a
physical page.
Clear the isCommitted bit for this SmallPage.
Check whether all of the SmallPages' isCommitted bits are cleared. If so,
decommit the physical page and clear all the m_hasPhysicalPages bits.
> Source/bmalloc/bmalloc/Heap.cpp:327
> +void Heap::commitSmallPageSlow(std::unique_lock<Mutex>&, SmallPage* page, size_t pageSize)
> +{
> + Chunk* chunk = Chunk::get(page);
I suggest you pass in the chunk since it's always known in the caller.
> Source/bmalloc/bmalloc/Heap.cpp:331
> + char* physicalPagesBegin = roundDownToMultipleOf(m_vmPageSizePhysical, page->begin()->begin());
Can you add the following above this?
BASSERT(isPowerOfTwo(pageSize));
BASSERT(pageSize < m_vmPageSizePhysical);
> Source/bmalloc/bmalloc/Heap.cpp:335
> + unsigned smallPageCount = m_vmPageSizePhysical / pageSize;
> +
> + size_t firstPageOffset = chunk->offset(physicalPagesBegin);
> + size_t lastPageOffset = firstPageOffset + smallPageCount * pageSize;
You can do away with smallPageCount and just compute lastPageOffset as:
size_t lastPageOffset = firstPageOffset + m_vmPageSizePhysical;
Since we make reference to a SmallPage and a physical page in this body of
code, I think it would be clearer if we name variables more clearly to
distinguish between which one we're referring to. I suggest renaming
firstPageOffset to physicalPageBeginOffset, and lastPageOffset to
physicalPageEndOffset.
Ditto for firstPageToCommit: I suggest renaming to firstSmallPageToCommit.
Ditto for pagesToCommit: I suggest renaming to smallPagesToCommit.
> Source/bmalloc/bmalloc/Heap.cpp:350
> + for (auto it = begin; it + pageSize <= end; it = it + pageSize) {
> + if (!firstPageToCommit) {
> + if (!it.page()->hasPhysicalPages()) {
> + firstPageToCommit = it.page();
> + pagesToCommit = 1;
> + }
> + } else if (!it.page()->hasPhysicalPages())
> + pagesToCommit++;
> + else
> + break;
> + }
I don't understand this. My understanding is that either all SmallPages
within this physical page's bounds have a physical page or none of them do.
Why not just check the requested page and be done? That also lessens the
number of cache lines we touch. You can turn this into a debug assert loop
that verifies all SmallPages within these bounds match the requested page in
terms of whether they have a physical page.
> Source/bmalloc/bmalloc/Heap.cpp:352
> + BASSERT(firstPageToCommit && pagesToCommit);
This assert is invalid in light of the observation about the loop above.
> Source/bmalloc/bmalloc/Heap.cpp:374
> + char* firstPageBegin = firstPageToCommit->begin()->begin();
> + size_t commitSize = physicalPageSizeSloppyRoundUp(firstPageBegin, pagesToCommit * pageSize);
> + m_footprint += commitSize;
> +
> + vmAllocatePhysicalPagesSloppy(firstPageBegin, commitSize);
> +
> + if (pagesToCommit == 1)
> + firstPageToCommit->setHasPhysicalPages(true);
> + else {
> + size_t firstPageOffset = chunk->offset(firstPageBegin);
> + size_t lastPageOffset = firstPageOffset + pagesToCommit * pageSize;
> +
> + Object begin(chunk, firstPageOffset);
> + Object end(chunk, lastPageOffset);
> +
> + for (auto it = begin; it + pageSize <= end; it = it + pageSize)
> + it.page()->setHasPhysicalPages(true);
> + }
> +#if ENABLE_PHYSICAL_PAGE_MAP
> + m_physicalPageMap.commit(firstPageToCommit, commitSize);
> +#endif
This is also no longer valid. This can be simplified as follows:
if (!page->hasPhysicalPages()) {
commit physical page;
m_footprint += m_vmPageSizePhysical;
#if ENABLE_PHYSICAL_PAGE_MAP
m_physicalPageMap.commit(physicalPageBegin, m_vmPageSizePhysical);
#endif
}
> Source/bmalloc/bmalloc/Heap.h:124
> + void tryDecommitSmallPageSlow(std::lock_guard<Mutex>&, BulkDecommit& decommitter, SmallPage*, size_t pageSize);
> + void commitSmallPageSlow(std::unique_lock<Mutex>&, SmallPage*, size_t pageSize);
I think you can drop the "Slow" in the names since there is no fast version of
this.