[webkit-reviews] review canceled: [Bug 195000] Add support for incremental bytecode cache updates : [Attachment 365013] Patch

bugzilla-daemon at webkit.org bugzilla-daemon at webkit.org
Mon Mar 18 14:34:17 PDT 2019

Tadeu Zagallo <tzagallo at apple.com> has canceled Tadeu Zagallo
<tzagallo at apple.com>'s request for review:
Bug 195000: Add support for incremental bytecode cache updates

Attachment 365013: Patch


--- Comment #17 from Tadeu Zagallo <tzagallo at apple.com> ---
Comment on attachment 365013
  --> https://bugs.webkit.org/attachment.cgi?id=365013

View in context: https://bugs.webkit.org/attachment.cgi?id=365013&action=review

>> Source/JavaScriptCore/runtime/CachedBytecode.cpp:139
>> +	    auto addResult = m_leafExecutables.add(it.key, it.value + m_size);
> I'm a little bit lost on seeing why this is the correct offset math. Can you
> explain why this works?

The leaf executable's offset was recorded while encoding the function in its
own buffer, so when we append that buffer to the cache we need to offset it by
the cache's current size so that it becomes relative to the start of the file.

>> Source/JavaScriptCore/runtime/CachedBytecode.cpp:145
>> +bool CachedBytecode::commitUpdates(const ForEachUpdateCallback& callback)
> What happens when one of the later updates fails but an earlier one doesn't?
> Don't we end up in some weird corrupted state? E.g, what if we successfully
> update the offset but fail to write the payload the offset points to?
> Maybe the right API to use is to reserve the final file size. If that fails,
> we bail. If that succeeds, we can just write everything out to disk assuming
> I think we should find a way to make this update atomic.

Right now we just truncate the file if any of the updates fails, but I agree
that your solution is better. I'll update it.
