<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[202214] trunk/Source/JavaScriptCore</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/202214">202214</a></dd>
<dt>Author</dt> <dd>sbarati@apple.com</dd>
<dt>Date</dt> <dd>2016-06-19 12:42:18 -0700 (Sun, 19 Jun 2016)</dd>
</dl>

<h3>Log Message</h3>
<pre>We should be able to generate more types of ICs inline
https://bugs.webkit.org/show_bug.cgi?id=158719
&lt;rdar://problem/26825641&gt;

Reviewed by Filip Pizlo.

This patch changes how we emit code for *byId ICs inline.
We no longer keep data labels to patch structure checks, etc.
Instead, we just regenerate the entire IC into a designated
region of code that the Baseline/DFG/FTL JIT will emit inline.
This makes it much simpler to patch inline ICs. All that's
needed to patch an inline IC is to memcpy the code generated
by a macro assembler into the inline region using LinkBuffer.
This architecture will be easy to extend to other forms of
ICs, such as one for add, in the future.
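
A minimal sketch of what repatching an inline IC now amounts to (the
stubInfo accessors below are illustrative names, not the real API; the
actual plumbing is in Repatch.cpp and the new InlineAccess.cpp):

    MacroAssembler jit;
    // ... emit the replacement fast path for this IC into jit ...

    // Generate straight into the pre-allocated inline region. (stubInfo and
    // the inlineCodeStart()/inlineCodeSize() accessors here are illustrative,
    // not the real StructureStubInfo API.) This LinkBuffer constructor also
    // gains an optional shouldPerformBranchCompaction flag in this patch.
    LinkBuffer linkBuffer(jit, stubInfo.inlineCodeStart(), stubInfo.inlineCodeSize(),
        JITCompilationMustSucceed);
    // Constructing the LinkBuffer over the reserved region links the code and
    // copies it into place, padding leftover space with nops (see below).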

To support this change, I've reworked the fields inside
StructureStubInfo. It now has one field that is the CodeLocationLabel 
of the start of the inline IC. Then it has a few ints that track deltas
to other locations in the IC, such as the slow path start, the slow path call, and
the IC's 'done' location. We used to perform math on these ints in a bunch of different
places. I've consolidated that math into methods inside StructureStubInfo.
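
Roughly, the new shape is the following (field names are illustrative;
the real accessors, slowPathCallLocation(), doneLocation(), and
slowPathStartLocation(), are the ones listed under StructureStubInfo.h
below):

    CodeLocationLabel start;        // start of the reserved inline region
    int32_t deltaToSlowPathCall;    // deltas are relative to 'start'
    int32_t deltaToSlowPathStart;
    int32_t deltaToDone;

    CodeLocationCall slowPathCallLocation() { return start.callAtOffset(deltaToSlowPathCall); }
    CodeLocationLabel slowPathStartLocation() { return start.labelAtOffset(deltaToSlowPathStart); }
    CodeLocationLabel doneLocation() { return start.labelAtOffset(deltaToDone); }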

To generate inline ICs, I've implemented a new class called InlineAccess.
InlineAccess is stateless: it just has a bunch of static methods for
generating code into the inline region specified by StructureStubInfo.
Repatch will now decide when it wants to generate such an inline
IC, and it will ask InlineAccess to do so.

I've implemented three types of inline ICs to begin with (extending
this in the future should be easy):
- Self property loads (both inline and out of line offsets).
- Self property replace (both inline and out of line offsets).
- Array length on specific array types.
(An easy extension would be to implement JSString length.)
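
Concretely, the new class exposes roughly the following interface (the
method names are the ones added in bytecode/InlineAccess.h; argument
lists and return types here are a guess):

    class InlineAccess {
    public:
        // Upper bounds used to reserve the inline region up front.
        static size_t sizeForPropertyAccess();
        static size_t sizeForPropertyReplace();
        static size_t sizeForLengthAccess();

        // (Argument lists below are illustrative.)
        static bool generateSelfPropertyAccess(VM&amp;, CodeBlock*, StructureStubInfo&amp;, Structure*, PropertyOffset);
        static bool canGenerateSelfPropertyReplace(StructureStubInfo&amp;, PropertyOffset);
        static bool generateSelfPropertyReplace(VM&amp;, CodeBlock*, StructureStubInfo&amp;, Structure*, PropertyOffset);
        static bool isCacheableArrayLength(StructureStubInfo&amp;, JSArray*);
        static bool generateArrayLength(VM&amp;, StructureStubInfo&amp;, JSArray*);
        static void rewireStubAsJump(VM&amp;, StructureStubInfo&amp;, CodeLocationLabel);
        static void dumpCacheSizesAndCrash(VM&amp;);
    };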

To know how much inline space to reserve, I've implemented a
method that stubs out the various inline cache shapes and
dumps their sizes. This is used to determine how much space
to reserve inline. When InlineAccess ends up generating more
code than can fit inline, we will fall back to generating
code with PolymorphicAccess instead.
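
For example, a JIT front end can reserve the region with the emitNops()
helper this patch adds to AbstractMacroAssembler (a sketch only; the
real reservation code lives in JITInlineCacheGenerator.cpp):

    // Reserve sizeForPropertyAccess() bytes of nops that Repatch can later
    // overwrite with real IC code.
    MacroAssembler::Label start = jit.label();
    jit.emitNops(InlineAccess::sizeForPropertyAccess());
    MacroAssembler::Label done = jit.label();
    // 'start' and 'done' are the kind of boundaries that get recorded in the
    // StructureStubInfo for the inline region.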

To make generating code into already allocated executable memory
efficient, I've made AssemblerData have 128 bytes of inline storage.
This saves us a malloc when splatting code into the inline region.
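
The effect, in terms of the new AssemblerData behavior (the full
implementation is in the AssemblerBuffer.h hunk below):

    AssemblerData small;        // backed by the 128-byte inline buffer, no fastMalloc
    AssemblerData large(4096);  // requests above 128 bytes still heap-allocate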

This patch also tidies up LinkBuffer's API for generating
into already allocated executable memory. Now, when generating
code that is smaller than the already allocated space, LinkBuffer
will fill the extra space with nops. Also, if branch compaction shrinks
the code, LinkBuffer will add a nop sled at the end of the shrunken
code to take up the entire allocated size.
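
When targeting a pre-allocated region, the branch-compaction path now
pads the scratch buffer before the final copy, essentially (mirroring
the LinkBuffer.cpp hunk below):

    size_t compactSize = writePtr + initialSize - readPtr;
    if (!m_executableMemory) {
        // Keep the region's size constant: fill the bytes freed up by branch
        // compaction with nops before copying into the JIT region.
        size_t nopSizeInBytes = initialSize - compactSize;
        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes,
            /* isCopyingToExecutableMemory */ false);
    }
    performJITMemcpy(m_code, outData, m_size);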

This looks like it could be a 1% Octane progression.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/ARM64Assembler.h:
(JSC::ARM64Assembler::nop):
(JSC::ARM64Assembler::fillNops):
* assembler/ARMv7Assembler.h:
(JSC::ARMv7Assembler::nopw):
(JSC::ARMv7Assembler::nopPseudo16):
(JSC::ARMv7Assembler::nopPseudo32):
(JSC::ARMv7Assembler::fillNops):
(JSC::ARMv7Assembler::dmbSY):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::addLinkTask):
(JSC::AbstractMacroAssembler::emitNops):
(JSC::AbstractMacroAssembler::AbstractMacroAssembler):
* assembler/AssemblerBuffer.h:
(JSC::AssemblerData::AssemblerData):
(JSC::AssemblerData::operator=):
(JSC::AssemblerData::~AssemblerData):
(JSC::AssemblerData::buffer):
(JSC::AssemblerData::grow):
(JSC::AssemblerData::isInlineBuffer):
(JSC::AssemblerBuffer::AssemblerBuffer):
(JSC::AssemblerBuffer::ensureSpace):
(JSC::AssemblerBuffer::codeSize):
(JSC::AssemblerBuffer::setCodeSize):
(JSC::AssemblerBuffer::label):
(JSC::AssemblerBuffer::debugOffset):
(JSC::AssemblerBuffer::releaseAssemblerData):
* assembler/LinkBuffer.cpp:
(JSC::LinkBuffer::copyCompactAndLinkCode):
(JSC::LinkBuffer::linkCode):
(JSC::LinkBuffer::allocate):
(JSC::LinkBuffer::performFinalization):
(JSC::LinkBuffer::shrink): Deleted.
* assembler/LinkBuffer.h:
(JSC::LinkBuffer::LinkBuffer):
(JSC::LinkBuffer::debugAddress):
(JSC::LinkBuffer::size):
(JSC::LinkBuffer::wasAlreadyDisassembled):
(JSC::LinkBuffer::didAlreadyDisassemble):
(JSC::LinkBuffer::applyOffset):
(JSC::LinkBuffer::code):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::patchableBranch32):
(JSC::MacroAssemblerARM64::patchableBranch64):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::patchableBranch32):
(JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
* assembler/X86Assembler.h:
(JSC::X86Assembler::nop):
(JSC::X86Assembler::fillNops):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
* bytecode/InlineAccess.cpp: Added.
(JSC::InlineAccess::dumpCacheSizesAndCrash):
(JSC::linkCodeInline):
(JSC::InlineAccess::generateSelfPropertyAccess):
(JSC::getScratchRegister):
(JSC::hasFreeRegister):
(JSC::InlineAccess::canGenerateSelfPropertyReplace):
(JSC::InlineAccess::generateSelfPropertyReplace):
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::rewireStubAsJump):
* bytecode/InlineAccess.h: Added.
(JSC::InlineAccess::sizeForPropertyAccess):
(JSC::InlineAccess::sizeForPropertyReplace):
(JSC::InlineAccess::sizeForLengthAccess):
* bytecode/PolymorphicAccess.cpp:
(JSC::PolymorphicAccess::regenerate):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::initGetByIdSelf):
(JSC::StructureStubInfo::initArrayLength):
(JSC::StructureStubInfo::initPutByIdReplace):
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::propagateTransitions):
(JSC::StructureStubInfo::containsPC):
* bytecode/StructureStubInfo.h:
(JSC::StructureStubInfo::considerCaching):
(JSC::StructureStubInfo::slowPathCallLocation):
(JSC::StructureStubInfo::doneLocation):
(JSC::StructureStubInfo::slowPathStartLocation):
(JSC::StructureStubInfo::patchableJumpForIn):
(JSC::StructureStubInfo::valueRegs):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileIn):
(JSC::FTL::DFG::LowerDFGToB3::getById):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::finalize):
(JSC::JITByIdGenerator::generateFastCommon):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
(JSC::JITGetByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::JITPutByIdGenerator):
(JSC::JITPutByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::slowPathFunction):
(JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
* jit/JITInlineCacheGenerator.h:
(JSC::JITByIdGenerator::reportSlowPathCall):
(JSC::JITByIdGenerator::slowPathBegin):
(JSC::JITByIdGenerator::slowPathJump):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/Repatch.cpp:
(JSC::repatchCall):
(JSC::tryCacheGetByID):
(JSC::repatchGetByID):
(JSC::appropriateGenericPutByIdFunction):
(JSC::tryCachePutByID):
(JSC::repatchPutByID):
(JSC::tryRepatchIn):
(JSC::repatchIn):
(JSC::linkSlowFor):
(JSC::resetGetByID):
(JSC::resetPutByID):
(JSC::resetIn):
(JSC::repatchByIdSelfAccess): Deleted.
(JSC::resetGetByIDCheckAndLoad): Deleted.
(JSC::resetPutByIDCheckAndLoad): Deleted.
(JSC::replaceWithJump): Deleted.</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreCMakeListstxt">trunk/Source/JavaScriptCore/CMakeLists.txt</a></li>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj">trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerARM64Assemblerh">trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerARMv7Assemblerh">trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh">trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerAssemblerBufferh">trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerLinkBuffercpp">trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerLinkBufferh">trunk/Source/JavaScriptCore/assembler/LinkBuffer.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerMacroAssemblerARM64h">trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerMacroAssemblerARMv7h">trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerX86Assemblerh">trunk/Source/JavaScriptCore/assembler/X86Assembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodeCodeBlockcpp">trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodePolymorphicAccesscpp">trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodeStructureStubInfocpp">trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodeStructureStubInfoh">trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGJITCompilercpp">trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGOSRExitCompilerCommoncpp">trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGSpeculativeJIT32_64cpp">trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoredfgDFGSpeculativeJIT64cpp">trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToB3cpp">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorejitJITInlineCacheGeneratorcpp">trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorejitJITInlineCacheGeneratorh">trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitJITPropertyAccesscpp">trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorejitJITPropertyAccess32_64cpp">trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorejitRepatchcpp">trunk/Source/JavaScriptCore/jit/Repatch.cpp</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCorebytecodeInlineAccesscpp">trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodeInlineAccessh">trunk/Source/JavaScriptCore/bytecode/InlineAccess.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceJavaScriptCoreCMakeListstxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/CMakeLists.txt        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -200,6 +200,7 @@
</span><span class="cx">     bytecode/ExitingJITType.cpp
</span><span class="cx">     bytecode/GetByIdStatus.cpp
</span><span class="cx">     bytecode/GetByIdVariant.cpp
</span><ins>+    bytecode/InlineAccess.cpp
</ins><span class="cx">     bytecode/InlineCallFrame.cpp
</span><span class="cx">     bytecode/InlineCallFrameSet.cpp
</span><span class="cx">     bytecode/JumpTable.cpp
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/ChangeLog        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -1,3 +1,198 @@
</span><ins>+2016-06-19  Saam Barati  &lt;sbarati@apple.com&gt;
+
+        We should be able to generate more types of ICs inline
+        https://bugs.webkit.org/show_bug.cgi?id=158719
+        &lt;rdar://problem/26825641&gt;
+
+        Reviewed by Filip Pizlo.
+
+        This patch changes how we emit code for *byId ICs inline.
+        We no longer keep data labels to patch structure checks, etc.
+        Instead, we just regenerate the entire IC into a designated
+        region of code that the Baseline/DFG/FTL JIT will emit inline.
+        This makes it much simpler to patch inline ICs. All that's
+        needed to patch an inline IC is to memcpy the code generated
+        by a macro assembler into the inline region using LinkBuffer.
+        This architecture will be easy to extend to other forms of
+        ICs, such as one for add, in the future.
+
+        To support this change, I've reworked the fields inside
+        StructureStubInfo. It now has one field that is the CodeLocationLabel 
+        of the start of the inline IC. Then it has a few ints that track deltas
+        to other locations in the IC, such as the slow path start, the slow path call, and
+        the IC's 'done' location. We used to perform math on these ints in a bunch of different
+        places. I've consolidated that math into methods inside StructureStubInfo.
+
+        To generate inline ICs, I've implemented a new class called InlineAccess.
+        InlineAccess is stateless: it just has a bunch of static methods for
+        generating code into the inline region specified by StructureStubInfo.
+        Repatch will now decide when it wants to generate such an inline
+        IC, and it will ask InlineAccess to do so.
+
+        I've implemented three types of inline ICs to begin with (extending
+        this in the future should be easy):
+        - Self property loads (both inline and out of line offsets).
+        - Self property replace (both inline and out of line offsets).
+        - Array length on specific array types.
+        (An easy extension would be to implement JSString length.)
+
+        To know how much inline space to reserve, I've implemented a
+        method that stubs out the various inline cache shapes and
+        dumps their sizes. This is used to determine how much space
+        to reserve inline. When InlineAccess ends up generating more
+        code than can fit inline, we will fall back to generating
+        code with PolymorphicAccess instead.
+
+        To make generating code into already allocated executable memory
+        efficient, I've made AssemblerData have 128 bytes of inline storage.
+        This saves us a malloc when splatting code into the inline region.
+
+        This patch also tidies up LinkBuffer's API for generating
+        into already allocated executable memory. Now, when generating
+        code that is smaller than the already allocated space, LinkBuffer
+        will fill the extra space with nops. Also, if branch compaction shrinks
+        the code, LinkBuffer will add a nop sled at the end of the shrunken
+        code to take up the entire allocated size.
+
+        This looks like it could be a 1% Octane progression.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/ARM64Assembler.h:
+        (JSC::ARM64Assembler::nop):
+        (JSC::ARM64Assembler::fillNops):
+        * assembler/ARMv7Assembler.h:
+        (JSC::ARMv7Assembler::nopw):
+        (JSC::ARMv7Assembler::nopPseudo16):
+        (JSC::ARMv7Assembler::nopPseudo32):
+        (JSC::ARMv7Assembler::fillNops):
+        (JSC::ARMv7Assembler::dmbSY):
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::addLinkTask):
+        (JSC::AbstractMacroAssembler::emitNops):
+        (JSC::AbstractMacroAssembler::AbstractMacroAssembler):
+        * assembler/AssemblerBuffer.h:
+        (JSC::AssemblerData::AssemblerData):
+        (JSC::AssemblerData::operator=):
+        (JSC::AssemblerData::~AssemblerData):
+        (JSC::AssemblerData::buffer):
+        (JSC::AssemblerData::grow):
+        (JSC::AssemblerData::isInlineBuffer):
+        (JSC::AssemblerBuffer::AssemblerBuffer):
+        (JSC::AssemblerBuffer::ensureSpace):
+        (JSC::AssemblerBuffer::codeSize):
+        (JSC::AssemblerBuffer::setCodeSize):
+        (JSC::AssemblerBuffer::label):
+        (JSC::AssemblerBuffer::debugOffset):
+        (JSC::AssemblerBuffer::releaseAssemblerData):
+        * assembler/LinkBuffer.cpp:
+        (JSC::LinkBuffer::copyCompactAndLinkCode):
+        (JSC::LinkBuffer::linkCode):
+        (JSC::LinkBuffer::allocate):
+        (JSC::LinkBuffer::performFinalization):
+        (JSC::LinkBuffer::shrink): Deleted.
+        * assembler/LinkBuffer.h:
+        (JSC::LinkBuffer::LinkBuffer):
+        (JSC::LinkBuffer::debugAddress):
+        (JSC::LinkBuffer::size):
+        (JSC::LinkBuffer::wasAlreadyDisassembled):
+        (JSC::LinkBuffer::didAlreadyDisassemble):
+        (JSC::LinkBuffer::applyOffset):
+        (JSC::LinkBuffer::code):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::patchableBranch32):
+        (JSC::MacroAssemblerARM64::patchableBranch64):
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::patchableBranch32):
+        (JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
+        * assembler/X86Assembler.h:
+        (JSC::X86Assembler::nop):
+        (JSC::X86Assembler::fillNops):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::printGetByIdCacheStatus):
+        * bytecode/InlineAccess.cpp: Added.
+        (JSC::InlineAccess::dumpCacheSizesAndCrash):
+        (JSC::linkCodeInline):
+        (JSC::InlineAccess::generateSelfPropertyAccess):
+        (JSC::getScratchRegister):
+        (JSC::hasFreeRegister):
+        (JSC::InlineAccess::canGenerateSelfPropertyReplace):
+        (JSC::InlineAccess::generateSelfPropertyReplace):
+        (JSC::InlineAccess::isCacheableArrayLength):
+        (JSC::InlineAccess::generateArrayLength):
+        (JSC::InlineAccess::rewireStubAsJump):
+        * bytecode/InlineAccess.h: Added.
+        (JSC::InlineAccess::sizeForPropertyAccess):
+        (JSC::InlineAccess::sizeForPropertyReplace):
+        (JSC::InlineAccess::sizeForLengthAccess):
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::PolymorphicAccess::regenerate):
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::initGetByIdSelf):
+        (JSC::StructureStubInfo::initArrayLength):
+        (JSC::StructureStubInfo::initPutByIdReplace):
+        (JSC::StructureStubInfo::deref):
+        (JSC::StructureStubInfo::aboutToDie):
+        (JSC::StructureStubInfo::propagateTransitions):
+        (JSC::StructureStubInfo::containsPC):
+        * bytecode/StructureStubInfo.h:
+        (JSC::StructureStubInfo::considerCaching):
+        (JSC::StructureStubInfo::slowPathCallLocation):
+        (JSC::StructureStubInfo::doneLocation):
+        (JSC::StructureStubInfo::slowPathStartLocation):
+        (JSC::StructureStubInfo::patchableJumpForIn):
+        (JSC::StructureStubInfo::valueRegs):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::reifyInlinedCallFrames):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileIn):
+        (JSC::FTL::DFG::LowerDFGToB3::getById):
+        * jit/JITInlineCacheGenerator.cpp:
+        (JSC::JITByIdGenerator::finalize):
+        (JSC::JITByIdGenerator::generateFastCommon):
+        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
+        (JSC::JITGetByIdGenerator::generateFastPath):
+        (JSC::JITPutByIdGenerator::JITPutByIdGenerator):
+        (JSC::JITPutByIdGenerator::generateFastPath):
+        (JSC::JITPutByIdGenerator::slowPathFunction):
+        (JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
+        * jit/JITInlineCacheGenerator.h:
+        (JSC::JITByIdGenerator::reportSlowPathCall):
+        (JSC::JITByIdGenerator::slowPathBegin):
+        (JSC::JITByIdGenerator::slowPathJump):
+        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emitGetByValWithCachedId):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id):
+        * jit/JITPropertyAccess32_64.cpp:
+        (JSC::JIT::emitGetByValWithCachedId):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id):
+        * jit/Repatch.cpp:
+        (JSC::repatchCall):
+        (JSC::tryCacheGetByID):
+        (JSC::repatchGetByID):
+        (JSC::appropriateGenericPutByIdFunction):
+        (JSC::tryCachePutByID):
+        (JSC::repatchPutByID):
+        (JSC::tryRepatchIn):
+        (JSC::repatchIn):
+        (JSC::linkSlowFor):
+        (JSC::resetGetByID):
+        (JSC::resetPutByID):
+        (JSC::resetIn):
+        (JSC::repatchByIdSelfAccess): Deleted.
+        (JSC::resetGetByIDCheckAndLoad): Deleted.
+        (JSC::resetPutByIDCheckAndLoad): Deleted.
+        (JSC::replaceWithJump): Deleted.
+
</ins><span class="cx"> 2016-06-19  Filip Pizlo  &lt;fpizlo@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         REGRESSION(concurrent baseline JIT): Kraken/ai-astar runs 20% slower
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -1273,6 +1273,8 @@
</span><span class="cx">                 70ECA6091AFDBEA200449739 /* TemplateRegistryKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 72AAF7CD1D0D31B3005E60BE /* JSCustomGetterSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */; };
</span><span class="cx">                 72AAF7CE1D0D31B3005E60BE /* JSCustomGetterSetterFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */; };
</span><ins>+                7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7905BB661D12050E0019FE57 /* InlineAccess.cpp */; };
+                7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */ = {isa = PBXBuildFile; fileRef = 7905BB671D12050E0019FE57 /* InlineAccess.h */; settings = {ATTRIBUTES = (Private, ); }; };
</ins><span class="cx">                 79160DBD1C8E3EC8008C085A /* ProxyRevoke.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */; };
</span><span class="cx">                 79160DBE1C8E3EC8008C085A /* ProxyRevoke.h in Headers */ = {isa = PBXBuildFile; fileRef = 79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 792CB3491C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */; };
</span><span class="lines">@@ -3457,6 +3459,8 @@
</span><span class="cx">                 70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TemplateRegistryKey.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSCustomGetterSetterFunction.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSCustomGetterSetterFunction.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><ins>+                7905BB661D12050E0019FE57 /* InlineAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InlineAccess.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
+                7905BB671D12050E0019FE57 /* InlineAccess.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InlineAccess.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</ins><span class="cx">                 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProxyRevoke.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProxyRevoke.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PCToCodeOriginMap.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="lines">@@ -6586,6 +6590,8 @@
</span><span class="cx">                                 0F0332C118B01763005F979A /* GetByIdVariant.cpp */,
</span><span class="cx">                                 0F0332C218B01763005F979A /* GetByIdVariant.h */,
</span><span class="cx">                                 0F0B83A814BCF55E00885B4F /* HandlerInfo.h */,
</span><ins>+                                7905BB661D12050E0019FE57 /* InlineAccess.cpp */,
+                                7905BB671D12050E0019FE57 /* InlineAccess.h */,
</ins><span class="cx">                                 148A7BED1B82975A002D9157 /* InlineCallFrame.cpp */,
</span><span class="cx">                                 148A7BEE1B82975A002D9157 /* InlineCallFrame.h */,
</span><span class="cx">                                 0F24E55317F0B71C00ABB217 /* InlineCallFrameSet.cpp */,
</span><span class="lines">@@ -7330,6 +7336,7 @@
</span><span class="cx">                                 0F2B9CE919D0BA7D00B1D1B5 /* DFGObjectMaterializationData.h in Headers */,
</span><span class="cx">                                 43C392AB1C3BEB0500241F53 /* AssemblerCommon.h in Headers */,
</span><span class="cx">                                 86EC9DD01328DF82002B2AD7 /* DFGOperations.h in Headers */,
</span><ins>+                                7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */,
</ins><span class="cx">                                 A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
</span><span class="cx">                                 0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
</span><span class="cx">                                 0F40E4A71C497F7400A577FA /* AirOpcode.h in Headers */,
</span><span class="lines">@@ -9315,6 +9322,7 @@
</span><span class="cx">                                 0F6B8AD81C4EDDA200969052 /* B3DuplicateTails.cpp in Sources */,
</span><span class="cx">                                 527773DE1AAF83AC00BDE7E8 /* RuntimeType.cpp in Sources */,
</span><span class="cx">                                 0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */,
</span><ins>+                                7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */,
</ins><span class="cx">                                 0FE050271AA9095600D33B33 /* ScopedArguments.cpp in Sources */,
</span><span class="cx">                                 0FE0502F1AAA806900D33B33 /* ScopedArgumentsTable.cpp in Sources */,
</span><span class="cx">                                 992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */,
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerARM64Assemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -1484,13 +1484,16 @@
</span><span class="cx">         insn(nopPseudo());
</span><span class="cx">     }
</span><span class="cx">     
</span><del>-    static void fillNops(void* base, size_t size)
</del><ins>+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
</ins><span class="cx">     {
</span><span class="cx">         RELEASE_ASSERT(!(size % sizeof(int32_t)));
</span><span class="cx">         size_t n = size / sizeof(int32_t);
</span><span class="cx">         for (int32_t* ptr = static_cast&lt;int32_t*&gt;(base); n--;) {
</span><span class="cx">             int insn = nopPseudo();
</span><del>-            performJITMemcpy(ptr++, &amp;insn, sizeof(int));
</del><ins>+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr++, &amp;insn, sizeof(int));
+            else
+                memcpy(ptr++, &amp;insn, sizeof(int));
</ins><span class="cx">         }
</span><span class="cx">     }
</span><span class="cx">     
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerARMv7Assemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -2004,6 +2004,43 @@
</span><span class="cx">         m_formatter.twoWordOp16Op16(OP_NOP_T2a, OP_NOP_T2b);
</span><span class="cx">     }
</span><span class="cx">     
</span><ins>+    static constexpr int16_t nopPseudo16()
+    {
+        return OP_NOP_T1;
+    }
+
+    static constexpr int32_t nopPseudo32()
+    {
+        return OP_NOP_T2a | (OP_NOP_T2b &lt;&lt; 16);
+    }
+
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
+    {
+        RELEASE_ASSERT(!(size % sizeof(int16_t)));
+
+        char* ptr = static_cast&lt;char*&gt;(base);
+        const size_t num32s = size / sizeof(int32_t);
+        for (size_t i = 0; i &lt; num32s; i++) {
+            const int32_t insn = nopPseudo32();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &amp;insn, sizeof(int32_t));
+            else
+                memcpy(ptr, &amp;insn, sizeof(int32_t));
+            ptr += sizeof(int32_t);
+        }
+
+        const size_t num16s = (size % sizeof(int32_t)) / sizeof(int16_t);
+        ASSERT(num16s == 0 || num16s == 1);
+        ASSERT(num16s * sizeof(int16_t) + num32s * sizeof(int32_t) == size);
+        if (num16s) {
+            const int16_t insn = nopPseudo16();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &amp;insn, sizeof(int16_t));
+            else
+                memcpy(ptr, &amp;insn, sizeof(int16_t));
+        }
+    }
+
</ins><span class="cx">     void dmbSY()
</span><span class="cx">     {
</span><span class="cx">         m_formatter.twoWordOp16Op16(OP_DMB_SY_T2a, OP_DMB_SY_T2b);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -27,7 +27,6 @@
</span><span class="cx"> #define AbstractMacroAssembler_h
</span><span class="cx"> 
</span><span class="cx"> #include &quot;AbortReason.h&quot;
</span><del>-#include &quot;AssemblerBuffer.h&quot;
</del><span class="cx"> #include &quot;CodeLocation.h&quot;
</span><span class="cx"> #include &quot;MacroAssemblerCodeRef.h&quot;
</span><span class="cx"> #include &quot;Options.h&quot;
</span><span class="lines">@@ -1040,6 +1039,17 @@
</span><span class="cx">         m_linkTasks.append(createSharedTask&lt;void(LinkBuffer&amp;)&gt;(functor));
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    void emitNops(size_t memoryToFillWithNopsInBytes)
+    {
+        AssemblerBuffer&amp; buffer = m_assembler.buffer();
+        size_t startCodeSize = buffer.codeSize();
+        size_t targetCodeSize = startCodeSize + memoryToFillWithNopsInBytes;
+        buffer.ensureSpace(memoryToFillWithNopsInBytes);
+        bool isCopyingToExecutableMemory = false;
+        AssemblerType::fillNops(static_cast&lt;char*&gt;(buffer.data()) + startCodeSize, memoryToFillWithNopsInBytes, isCopyingToExecutableMemory);
+        buffer.setCodeSize(targetCodeSize);
+    }
+
</ins><span class="cx"> protected:
</span><span class="cx">     AbstractMacroAssembler()
</span><span class="cx">         : m_randomSource(0)
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerAssemblerBufferh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -62,32 +62,54 @@
</span><span class="cx">     };
</span><span class="cx"> 
</span><span class="cx">     class AssemblerData {
</span><ins>+        WTF_MAKE_NONCOPYABLE(AssemblerData);
+        static const size_t InlineCapacity = 128;
</ins><span class="cx">     public:
</span><span class="cx">         AssemblerData()
</span><del>-            : m_buffer(nullptr)
-            , m_capacity(0)
</del><ins>+            : m_buffer(m_inlineBuffer)
+            , m_capacity(InlineCapacity)
</ins><span class="cx">         {
</span><span class="cx">         }
</span><span class="cx"> 
</span><del>-        AssemblerData(unsigned initialCapacity)
</del><ins>+        AssemblerData(size_t initialCapacity)
</ins><span class="cx">         {
</span><del>-            m_capacity = initialCapacity;
-            m_buffer = static_cast&lt;char*&gt;(fastMalloc(m_capacity));
</del><ins>+            if (initialCapacity &lt;= InlineCapacity) {
+                m_capacity = InlineCapacity;
+                m_buffer = m_inlineBuffer;
+            } else {
+                m_capacity = initialCapacity;
+                m_buffer = static_cast&lt;char*&gt;(fastMalloc(m_capacity));
+            }
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         AssemblerData(AssemblerData&amp;&amp; other)
</span><span class="cx">         {
</span><del>-            m_buffer = other.m_buffer;
</del><ins>+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
+            m_capacity = other.m_capacity;
+
</ins><span class="cx">             other.m_buffer = nullptr;
</span><del>-            m_capacity = other.m_capacity;
</del><span class="cx">             other.m_capacity = 0;
</span><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         AssemblerData&amp; operator=(AssemblerData&amp;&amp; other)
</span><span class="cx">         {
</span><del>-            m_buffer = other.m_buffer;
</del><ins>+            if (m_buffer &amp;&amp; !isInlineBuffer())
+                fastFree(m_buffer);
+
+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
+            m_capacity = other.m_capacity;
+
</ins><span class="cx">             other.m_buffer = nullptr;
</span><del>-            m_capacity = other.m_capacity;
</del><span class="cx">             other.m_capacity = 0;
</span><span class="cx">             return *this;
</span><span class="cx">         }
</span><span class="lines">@@ -94,7 +116,8 @@
</span><span class="cx"> 
</span><span class="cx">         ~AssemblerData()
</span><span class="cx">         {
</span><del>-            fastFree(m_buffer);
</del><ins>+            if (m_buffer &amp;&amp; !isInlineBuffer())
+                fastFree(m_buffer);
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         char* buffer() const { return m_buffer; }
</span><span class="lines">@@ -104,19 +127,24 @@
</span><span class="cx">         void grow(unsigned extraCapacity = 0)
</span><span class="cx">         {
</span><span class="cx">             m_capacity = m_capacity + m_capacity / 2 + extraCapacity;
</span><del>-            m_buffer = static_cast&lt;char*&gt;(fastRealloc(m_buffer, m_capacity));
</del><ins>+            if (isInlineBuffer()) {
+                m_buffer = static_cast&lt;char*&gt;(fastMalloc(m_capacity));
+                memcpy(m_buffer, m_inlineBuffer, InlineCapacity);
+            } else
+                m_buffer = static_cast&lt;char*&gt;(fastRealloc(m_buffer, m_capacity));
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">     private:
</span><ins>+        bool isInlineBuffer() const { return m_buffer == m_inlineBuffer; }
</ins><span class="cx">         char* m_buffer;
</span><ins>+        char m_inlineBuffer[InlineCapacity];
</ins><span class="cx">         unsigned m_capacity;
</span><span class="cx">     };
</span><span class="cx"> 
</span><span class="cx">     class AssemblerBuffer {
</span><del>-        static const int initialCapacity = 128;
</del><span class="cx">     public:
</span><span class="cx">         AssemblerBuffer()
</span><del>-            : m_storage(initialCapacity)
</del><ins>+            : m_storage()
</ins><span class="cx">             , m_index(0)
</span><span class="cx">         {
</span><span class="cx">         }
</span><span class="lines">@@ -128,7 +156,7 @@
</span><span class="cx"> 
</span><span class="cx">         void ensureSpace(unsigned space)
</span><span class="cx">         {
</span><del>-            if (!isAvailable(space))
</del><ins>+            while (!isAvailable(space))
</ins><span class="cx">                 outOfLineGrow();
</span><span class="cx">         }
</span><span class="cx"> 
</span><span class="lines">@@ -156,6 +184,15 @@
</span><span class="cx">             return m_index;
</span><span class="cx">         }
</span><span class="cx"> 
</span><ins>+        void setCodeSize(size_t index)
+        {
+            // Warning: Only use this if you know exactly what you are doing.
+            // For example, say you want 40 bytes of nops, it's ok to grow
+            // and then fill 40 bytes of nops using bigger instructions.
+            m_index = index;
+            ASSERT(m_index &lt;= m_storage.capacity());
+        }
+
</ins><span class="cx">         AssemblerLabel label() const
</span><span class="cx">         {
</span><span class="cx">             return AssemblerLabel(m_index);
</span><span class="lines">@@ -163,7 +200,7 @@
</span><span class="cx"> 
</span><span class="cx">         unsigned debugOffset() { return m_index; }
</span><span class="cx"> 
</span><del>-        AssemblerData releaseAssemblerData() { return WTFMove(m_storage); }
</del><ins>+        AssemblerData&amp;&amp; releaseAssemblerData() { return WTFMove(m_storage); }
</ins><span class="cx"> 
</span><span class="cx">         // LocalWriter is a trick to keep the storage buffer and the index
</span><span class="cx">         // in memory while issuing multiple Stores.
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerLinkBuffercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -98,15 +98,17 @@
</span><span class="cx"> template &lt;typename InstructionType&gt;
</span><span class="cx"> void LinkBuffer::copyCompactAndLinkCode(MacroAssembler&amp; macroAssembler, void* ownerUID, JITCompilationEffort effort)
</span><span class="cx"> {
</span><del>-    m_initialSize = macroAssembler.m_assembler.codeSize();
-    allocate(m_initialSize, ownerUID, effort);
</del><ins>+    allocate(macroAssembler, ownerUID, effort);
+    const size_t initialSize = macroAssembler.m_assembler.codeSize();
</ins><span class="cx">     if (didFailToAllocate())
</span><span class="cx">         return;
</span><ins>+
</ins><span class="cx">     Vector&lt;LinkRecord, 0, UnsafeVectorOverflow&gt;&amp; jumpsToLink = macroAssembler.jumpsToLink();
</span><span class="cx">     m_assemblerStorage = macroAssembler.m_assembler.buffer().releaseAssemblerData();
</span><span class="cx">     uint8_t* inData = reinterpret_cast&lt;uint8_t*&gt;(m_assemblerStorage.buffer());
</span><span class="cx"> 
</span><span class="cx">     AssemblerData outBuffer(m_size);
</span><ins>+
</ins><span class="cx">     uint8_t* outData = reinterpret_cast&lt;uint8_t*&gt;(outBuffer.buffer());
</span><span class="cx">     uint8_t* codeOutData = reinterpret_cast&lt;uint8_t*&gt;(m_code);
</span><span class="cx"> 
</span><span class="lines">@@ -113,47 +115,54 @@
</span><span class="cx">     int readPtr = 0;
</span><span class="cx">     int writePtr = 0;
</span><span class="cx">     unsigned jumpCount = jumpsToLink.size();
</span><del>-    for (unsigned i = 0; i &lt; jumpCount; ++i) {
-        int offset = readPtr - writePtr;
-        ASSERT(!(offset &amp; 1));
-            
-        // Copy the instructions from the last jump to the current one.
-        size_t regionSize = jumpsToLink[i].from() - readPtr;
-        InstructionType* copySource = reinterpret_cast_ptr&lt;InstructionType*&gt;(inData + readPtr);
-        InstructionType* copyEnd = reinterpret_cast_ptr&lt;InstructionType*&gt;(inData + readPtr + regionSize);
-        InstructionType* copyDst = reinterpret_cast_ptr&lt;InstructionType*&gt;(outData + writePtr);
-        ASSERT(!(regionSize % 2));
-        ASSERT(!(readPtr % 2));
-        ASSERT(!(writePtr % 2));
-        while (copySource != copyEnd)
-            *copyDst++ = *copySource++;
-        recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
-        readPtr += regionSize;
-        writePtr += regionSize;
-            
-        // Calculate absolute address of the jump target, in the case of backwards
-        // branches we need to be precise, forward branches we are pessimistic
-        const uint8_t* target;
-        if (jumpsToLink[i].to() &gt;= jumpsToLink[i].from())
-            target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
-        else
-            target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
-            
-        JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
-        // Compact branch if we can...
-        if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
-            // Step back in the write stream
-            int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
-            if (delta) {
-                writePtr -= delta;
-                recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
</del><ins>+    if (m_shouldPerformBranchCompaction) {
+        for (unsigned i = 0; i &lt; jumpCount; ++i) {
+            int offset = readPtr - writePtr;
+            ASSERT(!(offset &amp; 1));
+                
+            // Copy the instructions from the last jump to the current one.
+            size_t regionSize = jumpsToLink[i].from() - readPtr;
+            InstructionType* copySource = reinterpret_cast_ptr&lt;InstructionType*&gt;(inData + readPtr);
+            InstructionType* copyEnd = reinterpret_cast_ptr&lt;InstructionType*&gt;(inData + readPtr + regionSize);
+            InstructionType* copyDst = reinterpret_cast_ptr&lt;InstructionType*&gt;(outData + writePtr);
+            ASSERT(!(regionSize % 2));
+            ASSERT(!(readPtr % 2));
+            ASSERT(!(writePtr % 2));
+            while (copySource != copyEnd)
+                *copyDst++ = *copySource++;
+            recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
+            readPtr += regionSize;
+            writePtr += regionSize;
+                
+            // Calculate absolute address of the jump target, in the case of backwards
+            // branches we need to be precise, forward branches we are pessimistic
+            const uint8_t* target;
+            if (jumpsToLink[i].to() &gt;= jumpsToLink[i].from())
+                target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
+            else
+                target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
+                
+            JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
+            // Compact branch if we can...
+            if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
+                // Step back in the write stream
+                int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
+                if (delta) {
+                    writePtr -= delta;
+                    recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+                }
</ins><span class="cx">             }
</span><ins>+            jumpsToLink[i].setFrom(writePtr);
</ins><span class="cx">         }
</span><del>-        jumpsToLink[i].setFrom(writePtr);
</del><ins>+    } else {
+        if (!ASSERT_DISABLED) {
+            for (unsigned i = 0; i &lt; jumpCount; ++i)
+                ASSERT(!MacroAssembler::canCompact(jumpsToLink[i].type()));
+        }
</ins><span class="cx">     }
</span><span class="cx">     // Copy everything after the last jump
</span><del>-    memcpy(outData + writePtr, inData + readPtr, m_initialSize - readPtr);
-    recordLinkOffsets(m_assemblerStorage, readPtr, m_initialSize, readPtr - writePtr);
</del><ins>+    memcpy(outData + writePtr, inData + readPtr, initialSize - readPtr);
+    recordLinkOffsets(m_assemblerStorage, readPtr, initialSize, readPtr - writePtr);
</ins><span class="cx">         
</span><span class="cx">     for (unsigned i = 0; i &lt; jumpCount; ++i) {
</span><span class="cx">         uint8_t* location = codeOutData + jumpsToLink[i].from();
</span><span class="lines">@@ -162,12 +171,21 @@
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     jumpsToLink.clear();
</span><del>-    shrink(writePtr + m_initialSize - readPtr);
</del><span class="cx"> 
</span><del>-    performJITMemcpy(m_code, outBuffer.buffer(), m_size);
</del><ins>+    size_t compactSize = writePtr + initialSize - readPtr;
+    if (m_executableMemory) {
+        m_size = compactSize;
+        m_executableMemory-&gt;shrink(m_size);
+    } else {
+        size_t nopSizeInBytes = initialSize - compactSize;
+        bool isCopyingToExecutableMemory = false;
+        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);
+    }
</ins><span class="cx"> 
</span><ins>+    performJITMemcpy(m_code, outData, m_size);
+
</ins><span class="cx"> #if DUMP_LINK_STATISTICS
</span><del>-    dumpLinkStatistics(m_code, m_initialSize, m_size);
</del><ins>+    dumpLinkStatistics(m_code, initialSize, m_size);
</ins><span class="cx"> #endif
</span><span class="cx"> #if DUMP_CODE
</span><span class="cx">     dumpCode(m_code, m_size);
</span><span class="lines">@@ -182,11 +200,11 @@
</span><span class="cx"> #if defined(ASSEMBLER_HAS_CONSTANT_POOL) &amp;&amp; ASSEMBLER_HAS_CONSTANT_POOL
</span><span class="cx">     macroAssembler.m_assembler.buffer().flushConstantPool(false);
</span><span class="cx"> #endif
</span><del>-    AssemblerBuffer&amp; buffer = macroAssembler.m_assembler.buffer();
-    allocate(buffer.codeSize(), ownerUID, effort);
</del><ins>+    allocate(macroAssembler, ownerUID, effort);
</ins><span class="cx">     if (!m_didAllocate)
</span><span class="cx">         return;
</span><span class="cx">     ASSERT(m_code);
</span><ins>+    AssemblerBuffer&amp; buffer = macroAssembler.m_assembler.buffer();
</ins><span class="cx"> #if CPU(ARM_TRADITIONAL)
</span><span class="cx">     macroAssembler.m_assembler.prepareExecutableCopy(m_code);
</span><span class="cx"> #endif
</span><span class="lines">@@ -198,19 +216,21 @@
</span><span class="cx">     copyCompactAndLinkCode&lt;uint16_t&gt;(macroAssembler, ownerUID, effort);
</span><span class="cx"> #elif CPU(ARM64)
</span><span class="cx">     copyCompactAndLinkCode&lt;uint32_t&gt;(macroAssembler, ownerUID, effort);
</span><del>-#endif
</del><ins>+#endif // !ENABLE(BRANCH_COMPACTION)
</ins><span class="cx"> 
</span><span class="cx">     m_linkTasks = WTFMove(macroAssembler.m_linkTasks);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
</del><ins>+void LinkBuffer::allocate(MacroAssembler&amp; macroAssembler, void* ownerUID, JITCompilationEffort effort)
</ins><span class="cx"> {
</span><ins>+    size_t initialSize = macroAssembler.m_assembler.codeSize();
</ins><span class="cx">     if (m_code) {
</span><span class="cx">         if (initialSize &gt; m_size)
</span><span class="cx">             return;
</span><span class="cx">         
</span><ins>+        size_t nopsToFillInBytes = m_size - initialSize;
+        macroAssembler.emitNops(nopsToFillInBytes);
</ins><span class="cx">         m_didAllocate = true;
</span><del>-        m_size = initialSize;
</del><span class="cx">         return;
</span><span class="cx">     }
</span><span class="cx">     
</span><span class="lines">@@ -223,14 +243,6 @@
</span><span class="cx">     m_didAllocate = true;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void LinkBuffer::shrink(size_t newSize)
-{
-    if (!m_executableMemory)
-        return;
-    m_size = newSize;
-    m_executableMemory-&gt;shrink(m_size);
-}
-
</del><span class="cx"> void LinkBuffer::performFinalization()
</span><span class="cx"> {
</span><span class="cx">     for (auto&amp; task : m_linkTasks)
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerLinkBufferh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -82,9 +82,6 @@
</span><span class="cx"> public:
</span><span class="cx">     LinkBuffer(VM&amp; vm, MacroAssembler&amp; macroAssembler, void* ownerUID, JITCompilationEffort effort = JITCompilationMustSucceed)
</span><span class="cx">         : m_size(0)
</span><del>-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
</del><span class="cx">         , m_didAllocate(false)
</span><span class="cx">         , m_code(0)
</span><span class="cx">         , m_vm(&amp;vm)
</span><span class="lines">@@ -95,11 +92,8 @@
</span><span class="cx">         linkCode(macroAssembler, ownerUID, effort);
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    LinkBuffer(MacroAssembler&amp; macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed)
</del><ins>+    LinkBuffer(MacroAssembler&amp; macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed, bool shouldPerformBranchCompaction = true)
</ins><span class="cx">         : m_size(size)
</span><del>-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
</del><span class="cx">         , m_didAllocate(false)
</span><span class="cx">         , m_code(code)
</span><span class="cx">         , m_vm(0)
</span><span class="lines">@@ -107,6 +101,11 @@
</span><span class="cx">         , m_completed(false)
</span><span class="cx"> #endif
</span><span class="cx">     {
</span><ins>+#if ENABLE(BRANCH_COMPACTION)
+        m_shouldPerformBranchCompaction = shouldPerformBranchCompaction;
+#else
+        UNUSED_PARAM(shouldPerformBranchCompaction);
+#endif
</ins><span class="cx">         linkCode(macroAssembler, 0, effort);
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="lines">@@ -250,11 +249,7 @@
</span><span class="cx">         return m_code;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    // FIXME: this does not account for the AssemblerData size!
-    size_t size()
-    {
-        return m_size;
-    }
</del><ins>+    size_t size() const { return m_size; }
</ins><span class="cx">     
</span><span class="cx">     bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
</span><span class="cx">     void didAlreadyDisassemble() { m_alreadyDisassembled = true; }
</span><span class="lines">@@ -278,7 +273,7 @@
</span><span class="cx"> #endif
</span><span class="cx">         return src;
</span><span class="cx">     }
</span><del>-    
</del><ins>+
</ins><span class="cx">     // Keep this private! - the underlying code should only be obtained externally via finalizeCode().
</span><span class="cx">     void* code()
</span><span class="cx">     {
</span><span class="lines">@@ -285,8 +280,7 @@
</span><span class="cx">         return m_code;
</span><span class="cx">     }
</span><span class="cx">     
</span><del>-    void allocate(size_t initialSize, void* ownerUID, JITCompilationEffort);
-    void shrink(size_t newSize);
</del><ins>+    void allocate(MacroAssembler&amp;, void* ownerUID, JITCompilationEffort);
</ins><span class="cx"> 
</span><span class="cx">     JS_EXPORT_PRIVATE void linkCode(MacroAssembler&amp;, void* ownerUID, JITCompilationEffort);
</span><span class="cx"> #if ENABLE(BRANCH_COMPACTION)
</span><span class="lines">@@ -307,8 +301,8 @@
</span><span class="cx">     RefPtr&lt;ExecutableMemoryHandle&gt; m_executableMemory;
</span><span class="cx">     size_t m_size;
</span><span class="cx"> #if ENABLE(BRANCH_COMPACTION)
</span><del>-    size_t m_initialSize;
</del><span class="cx">     AssemblerData m_assemblerStorage;
</span><ins>+    bool m_shouldPerformBranchCompaction { true };
</ins><span class="cx"> #endif
</span><span class="cx">     bool m_didAllocate;
</span><span class="cx">     void* m_code;
</span></span></pre></div>
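<p>The new shouldPerformBranchCompaction constructor argument exists so code that must occupy an exact, pre-sized region can opt out of branch compaction, which would otherwise change the emitted size during linking. A hedged usage sketch (this mirrors how InlineAccess.cpp below constructs its LinkBuffer; startOfIC and inlineSize stand for the IC's recorded start location and reserved byte count):</p>
<pre>
// `jit` is a MacroAssembler whose output must be copied verbatim into an
// existing inlineSize-byte region starting at startOfIC.
bool needsBranchCompaction = false; // keep the code exactly as assembled
LinkBuffer linkBuffer(jit, startOfIC, inlineSize, JITCompilationMustSucceed, needsBranchCompaction);
ASSERT(linkBuffer.isValid());
</pre>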
<a id="trunkSourceJavaScriptCoreassemblerMacroAssemblerARM64h"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -3087,6 +3087,14 @@
</span><span class="cx">         return PatchableJump(result);
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
</ins><span class="cx">     PatchableJump patchableBranch64(RelationalCondition cond, RegisterID reg, TrustedImm64 imm)
</span><span class="cx">     {
</span><span class="cx">         m_makeJumpPatchable = true;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerMacroAssemblerARMv7h"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -1868,6 +1868,14 @@
</span><span class="cx">         return PatchableJump(result);
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
</ins><span class="cx">     PatchableJump patchableBranchPtrWithPatch(RelationalCondition cond, Address left, DataLabelPtr&amp; dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
</span><span class="cx">     {
</span><span class="cx">         m_makeJumpPatchable = true;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerX86Assemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/X86Assembler.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/X86Assembler.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/assembler/X86Assembler.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -2946,8 +2946,9 @@
</span><span class="cx">         m_formatter.oneByteOp(OP_NOP);
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    static void fillNops(void* base, size_t size)
</del><ins>+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
</ins><span class="cx">     {
</span><ins>+        UNUSED_PARAM(isCopyingToExecutableMemory);
</ins><span class="cx"> #if CPU(X86_64)
</span><span class="cx">         static const uint8_t nops[10][10] = {
</span><span class="cx">             // nop
</span></span></pre></div>
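<p>fillNops() now takes an isCopyingToExecutableMemory flag; on x86 it is unused because nops can be written the same way in either case. For reference, a simplified, self-contained sketch of the nop-filling idea itself (the real table covers 1- through 10-byte canonical nop encodings; this sketch only uses the 1- and 2-byte forms):</p>
<pre>
#include &lt;cstddef&gt;

static void fillNopsSketch(unsigned char* base, size_t size)
{
    while (size &gt;= 2) { // 0x66 0x90: two-byte nop
        *base++ = 0x66;
        *base++ = 0x90;
        size -= 2;
    }
    if (size) // 0x90: one-byte nop
        *base = 0x90;
}
</pre>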
<a id="trunkSourceJavaScriptCorebytecodeCodeBlockcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -441,6 +441,9 @@
</span><span class="cx">         case CacheType::Unset:
</span><span class="cx">             out.printf(&quot;unset&quot;);
</span><span class="cx">             break;
</span><ins>+        case CacheType::ArrayLength:
+            out.printf(&quot;ArrayLength&quot;);
+            break;
</ins><span class="cx">         default:
</span><span class="cx">             RELEASE_ASSERT_NOT_REACHED();
</span><span class="cx">             break;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorebytecodeInlineAccesscpp"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp (0 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp                                (rev 0)
+++ trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -0,0 +1,299 @@
</span><ins>+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include &quot;config.h&quot;
+#include &quot;InlineAccess.h&quot;
+
+#if ENABLE(JIT)
+
+#include &quot;CCallHelpers.h&quot;
+#include &quot;JSArray.h&quot;
+#include &quot;JSCellInlines.h&quot;
+#include &quot;LinkBuffer.h&quot;
+#include &quot;ScratchRegisterAllocator.h&quot;
+#include &quot;Structure.h&quot;
+#include &quot;StructureStubInfo.h&quot;
+#include &quot;VM.h&quot;
+
+namespace JSC {
+
+void InlineAccess::dumpCacheSizesAndCrash(VM&amp; vm)
+{
+    GPRReg base = GPRInfo::regT0;
+    GPRReg value = GPRInfo::regT1;
+#if USE(JSVALUE32_64)
+    JSValueRegs regs(base, value);
+#else
+    JSValueRegs regs(base);
+#endif
+
+    {
+        CCallHelpers jit(&amp;vm);
+
+        GPRReg scratchGPR = value;
+        jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), value);
+        jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), value);
+        jit.patchableBranch32(
+            CCallHelpers::NotEqual, value, CCallHelpers::TrustedImm32(IsArray | ContiguousShape));
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value);
+        jit.load32(CCallHelpers::Address(value, ArrayStorage::lengthOffset()), value);
+        jit.boxInt32(scratchGPR, regs);
+
+        dataLog(&quot;array length size: &quot;, jit.m_assembler.buffer().codeSize(), &quot;\n&quot;);
+    }
+
+    {
+        CCallHelpers jit(&amp;vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadPtr(
+            CCallHelpers::Address(base, JSObject::butterflyOffset()),
+            value);
+        GPRReg storageGPR = value;
+        jit.loadValue(
+            CCallHelpers::Address(storageGPR, 0x000ab21ca), regs);
+
+        dataLog(&quot;out of line offset cache size: &quot;, jit.m_assembler.buffer().codeSize(), &quot;\n&quot;);
+    }
+
+    {
+        CCallHelpers jit(&amp;vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadValue(
+            MacroAssembler::Address(base, 0x000ab21ca), regs);
+
+        dataLog(&quot;inline offset cache size: &quot;, jit.m_assembler.buffer().codeSize(), &quot;\n&quot;);
+    }
+
+    {
+        CCallHelpers jit(&amp;vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.storeValue(
+            regs, MacroAssembler::Address(base, 0x000ab21ca));
+
+        dataLog(&quot;replace cache size: &quot;, jit.m_assembler.buffer().codeSize(), &quot;\n&quot;);
+    }
+
+    {
+        CCallHelpers jit(&amp;vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.loadPtr(MacroAssembler::Address(base, JSObject::butterflyOffset()), value);
+        jit.storeValue(
+            regs,
+            MacroAssembler::Address(base, 120342));
+
+        dataLog(&quot;replace out of line cache size: &quot;, jit.m_assembler.buffer().codeSize(), &quot;\n&quot;);
+    }
+
+    CRASH();
+}
+
+
+template &lt;typename Function&gt;
+ALWAYS_INLINE static bool linkCodeInline(const char* name, CCallHelpers&amp; jit, StructureStubInfo&amp; stubInfo, const Function&amp; function)
+{
+    if (jit.m_assembler.buffer().codeSize() &lt;= stubInfo.patch.inlineSize) {
+        bool needsBranchCompaction = false;
+        LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), stubInfo.patch.inlineSize, JITCompilationMustSucceed, needsBranchCompaction);
+        ASSERT(linkBuffer.isValid());
+        function(linkBuffer);
+        FINALIZE_CODE(linkBuffer, (&quot;InlineAccessType: '%s'&quot;, name));
+        return true;
+    }
+
+    // This is helpful when determining the size for inline ICs on various
+    // platforms. You want to choose a size that usually succeeds, but sometimes
+    // there may be variability in the length of the code we generate just because
+    // of randomness. It's helpful to flip this on when running tests or browsing
+    // the web just to see how often it fails. You don't want an IC size that always fails.
+    const bool failIfCantInline = false;
+    if (failIfCantInline) {
+        dataLog(&quot;Failure for: &quot;, name, &quot;\n&quot;);
+        dataLog(&quot;real size: &quot;, jit.m_assembler.buffer().codeSize(), &quot; inline size:&quot;, stubInfo.patch.inlineSize, &quot;\n&quot;);
+        CRASH();
+    }
+
+    return false;
+}
+
+bool InlineAccess::generateSelfPropertyAccess(VM&amp; vm, StructureStubInfo&amp; stubInfo, Structure* structure, PropertyOffset offset)
+{
+    CCallHelpers jit(&amp;vm);
+
+    GPRReg base = static_cast&lt;GPRReg&gt;(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast&lt;uint32_t&gt;(structure-&gt;id())));
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+        storage = value.payloadGPR();
+    }
+
+    jit.loadValue(
+        MacroAssembler::Address(storage, offsetRelativeToBase(offset)), value);
+
+    bool linkedCodeInline = linkCodeInline(&quot;property access&quot;, jit, stubInfo, [&amp;] (LinkBuffer&amp; linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+ALWAYS_INLINE static GPRReg getScratchRegister(StructureStubInfo&amp; stubInfo)
+{
+    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+    allocator.lock(static_cast&lt;GPRReg&gt;(stubInfo.patch.baseGPR));
+    allocator.lock(static_cast&lt;GPRReg&gt;(stubInfo.patch.valueGPR));
+#if USE(JSVALUE32_64)
+    allocator.lock(static_cast&lt;GPRReg&gt;(stubInfo.patch.baseTagGPR));
+    allocator.lock(static_cast&lt;GPRReg&gt;(stubInfo.patch.valueTagGPR));
+#endif
+    GPRReg scratch = allocator.allocateScratchGPR();
+    if (allocator.didReuseRegisters())
+        return InvalidGPRReg;
+    return scratch;
+}
+
+ALWAYS_INLINE static bool hasFreeRegister(StructureStubInfo&amp; stubInfo)
+{
+    return getScratchRegister(stubInfo) != InvalidGPRReg;
+}
+
+bool InlineAccess::canGenerateSelfPropertyReplace(StructureStubInfo&amp; stubInfo, PropertyOffset offset)
+{
+    if (isInlineOffset(offset))
+        return true;
+
+    return hasFreeRegister(stubInfo);
+}
+
+bool InlineAccess::generateSelfPropertyReplace(VM&amp; vm, StructureStubInfo&amp; stubInfo, Structure* structure, PropertyOffset offset)
+{
+    ASSERT(canGenerateSelfPropertyReplace(stubInfo, offset));
+
+    CCallHelpers jit(&amp;vm);
+
+    GPRReg base = static_cast&lt;GPRReg&gt;(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast&lt;uint32_t&gt;(structure-&gt;id())));
+
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        storage = getScratchRegister(stubInfo);
+        ASSERT(storage != InvalidGPRReg);
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), storage);
+    }
+
+    jit.storeValue(
+        value, MacroAssembler::Address(storage, offsetRelativeToBase(offset)));
+
+    bool linkedCodeInline = linkCodeInline(&quot;property replace&quot;, jit, stubInfo, [&amp;] (LinkBuffer&amp; linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+bool InlineAccess::isCacheableArrayLength(StructureStubInfo&amp; stubInfo, JSArray* array)
+{
+    ASSERT(array-&gt;indexingType() &amp; IsArray);
+
+    if (!hasFreeRegister(stubInfo))
+        return false;
+
+    return array-&gt;indexingType() == ArrayWithInt32
+        || array-&gt;indexingType() == ArrayWithDouble
+        || array-&gt;indexingType() == ArrayWithContiguous;
+}
+
+bool InlineAccess::generateArrayLength(VM&amp; vm, StructureStubInfo&amp; stubInfo, JSArray* array)
+{
+    ASSERT(isCacheableArrayLength(stubInfo, array));
+
+    CCallHelpers jit(&amp;vm);
+
+    GPRReg base = static_cast&lt;GPRReg&gt;(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+    GPRReg scratch = getScratchRegister(stubInfo);
+
+    jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), scratch);
+    jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), scratch);
+    auto branchToSlowPath = jit.patchableBranch32(
+        CCallHelpers::NotEqual, scratch, CCallHelpers::TrustedImm32(array-&gt;indexingType()));
+    jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+    jit.load32(CCallHelpers::Address(value.payloadGPR(), ArrayStorage::lengthOffset()), value.payloadGPR());
+    jit.boxInt32(value.payloadGPR(), value);
+
+    bool linkedCodeInline = linkCodeInline(&quot;array length&quot;, jit, stubInfo, [&amp;] (LinkBuffer&amp; linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+void InlineAccess::rewireStubAsJump(VM&amp; vm, StructureStubInfo&amp; stubInfo, CodeLocationLabel target)
+{
+    CCallHelpers jit(&amp;vm);
+
+    auto jump = jit.jump();
+
+    // We don't need a nop sled here because nobody should be jumping into the middle of an IC.
+    bool needsBranchCompaction = false;
+    LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), jit.m_assembler.buffer().codeSize(), JITCompilationMustSucceed, needsBranchCompaction);
+    RELEASE_ASSERT(linkBuffer.isValid());
+    linkBuffer.link(jump, target);
+
+    FINALIZE_CODE(linkBuffer, (&quot;InlineAccess: linking constant jump&quot;));
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
</ins></span></pre></div>
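<p>Every generator in InlineAccess.cpp follows the same pattern: assemble the candidate fast path into a fresh assembler, and only link it over the reserved nop sled (via linkCodeInline(), with branch compaction disabled) if it fits in stubInfo.patch.inlineSize. The generated array-length fast path itself is small; here is a hedged, self-contained C++ rendering of its semantics (the struct layout and constants below are illustrative stand-ins, not JSC's real object layout):</p>
<pre>
#include &lt;cstdint&gt;

struct FakeButterfly { int32_t publicLength; };
struct FakeCell { uint8_t indexingType; FakeButterfly* butterfly; };

static const uint8_t kIsArray = 1;              // stand-in for IsArray
static const uint8_t kIndexingShapeMask = 0x1e; // stand-in for IndexingShapeMask

// Semantics of the emitted fast path: check the indexing type, then load the
// length through the butterfly; on mismatch, fall through to the slow path.
bool arrayLengthFastPath(const FakeCell* base, uint8_t expectedIndexingType, int32_t&amp; result)
{
    uint8_t shape = base-&gt;indexingType &amp; (kIsArray | kIndexingShapeMask);
    if (shape != expectedIndexingType)
        return false;                          // patchable branch to slowPathStartLocation()
    result = base-&gt;butterfly-&gt;publicLength;    // two dependent loads in the real code
    return true;                               // the real code then boxInt32()s the result
}
</pre>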
<a id="trunkSourceJavaScriptCorebytecodeInlineAccessh"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/bytecode/InlineAccess.h (0 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/InlineAccess.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/bytecode/InlineAccess.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -0,0 +1,119 @@
</span><ins>+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef InlineAccess_h
+#define InlineAccess_h
+
+#if ENABLE(JIT)
+
+#include &quot;CodeLocation.h&quot;
+#include &quot;PropertyOffset.h&quot;
+
+namespace JSC {
+
+class JSArray;
+class Structure;
+class StructureStubInfo;
+class VM;
+
+class InlineAccess {
+public:
+
+    static constexpr size_t sizeForPropertyAccess()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error &quot;unsupported platform&quot;
+#endif
+    }
+
+    static constexpr size_t sizeForPropertyReplace()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error &quot;unsupported platform&quot;
+#endif
+    }
+
+    static constexpr size_t sizeForLengthAccess()
+    {
+#if CPU(X86_64)
+        return 26;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 32;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 30;
+#else
+        return 50;
+#endif
+#else
+#error &quot;unsupported platform&quot;
+#endif
+    }
+
+    static bool generateSelfPropertyAccess(VM&amp;, StructureStubInfo&amp;, Structure*, PropertyOffset);
+    static bool canGenerateSelfPropertyReplace(StructureStubInfo&amp;, PropertyOffset);
+    static bool generateSelfPropertyReplace(VM&amp;, StructureStubInfo&amp;, Structure*, PropertyOffset);
+    static bool isCacheableArrayLength(StructureStubInfo&amp;, JSArray*);
+    static bool generateArrayLength(VM&amp;, StructureStubInfo&amp;, JSArray*);
+    static void rewireStubAsJump(VM&amp;, StructureStubInfo&amp;, CodeLocationLabel);
+
+    // This is helpful when determining the size of an IC on
+    // various platforms. When adding a new type of IC, implement
+    // its placeholder code here, and log the size. That way we
+    // can intelligently choose sizes on various platforms.
+    NO_RETURN_DUE_TO_CRASH void dumpCacheSizesAndCrash(VM&amp;);
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // InlineAccess_h
</ins></span></pre></div>
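<p>The sizeFor*() constants are the per-architecture byte budgets reserved for each kind of inline IC, and the header's comment points at dumpCacheSizesAndCrash() as the way to measure them. A hedged sketch of re-measuring the budgets when porting to a new platform (a one-off debug harness, not shipping code):</p>
<pre>
// Run once under a debug build: each placeholder IC's codeSize() is logged,
// then the process crashes. Use the logged numbers to pick new values for
// sizeForPropertyAccess()/sizeForPropertyReplace()/sizeForLengthAccess().
void remeasureInlineAccessSizes(VM&amp; vm)
{
    InlineAccess access;
    access.dumpCacheSizesAndCrash(vm);
}
</pre>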
<a id="trunkSourceJavaScriptCorebytecodePolymorphicAccesscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -1551,11 +1551,7 @@
</span><span class="cx">     state.ident = &amp;ident;
</span><span class="cx">     
</span><span class="cx">     state.baseGPR = static_cast&lt;GPRReg&gt;(stubInfo.patch.baseGPR);
</span><del>-    state.valueRegs = JSValueRegs(
-#if USE(JSVALUE32_64)
-        static_cast&lt;GPRReg&gt;(stubInfo.patch.valueTagGPR),
-#endif
-        static_cast&lt;GPRReg&gt;(stubInfo.patch.valueGPR));
</del><ins>+    state.valueRegs = stubInfo.valueRegs();
</ins><span class="cx"> 
</span><span class="cx">     ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
</span><span class="cx">     state.allocator = &amp;allocator;
</span><span class="lines">@@ -1753,14 +1749,11 @@
</span><span class="cx">         return AccessGenerationResult::GaveUp;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    CodeLocationLabel successLabel =
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToDone);
</del><ins>+    CodeLocationLabel successLabel = stubInfo.doneLocation();
</ins><span class="cx">         
</span><span class="cx">     linkBuffer.link(state.success, successLabel);
</span><span class="cx"> 
</span><del>-    linkBuffer.link(
-        failure,
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
</del><ins>+    linkBuffer.link(failure, stubInfo.slowPathStartLocation());
</ins><span class="cx">     
</span><span class="cx">     if (verbose)
</span><span class="cx">         dataLog(*codeBlock, &quot; &quot;, stubInfo.codeOrigin, &quot;: Generating polymorphic access stub for &quot;, listDump(cases), &quot;\n&quot;);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorebytecodeStructureStubInfocpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -63,6 +63,11 @@
</span><span class="cx">     u.byIdSelf.offset = offset;
</span><span class="cx"> }
</span><span class="cx"> 
</span><ins>+void StructureStubInfo::initArrayLength()
+{
+    cacheType = CacheType::ArrayLength;
+}
+
</ins><span class="cx"> void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
</span><span class="cx"> {
</span><span class="cx">     cacheType = CacheType::PutByIdReplace;
</span><span class="lines">@@ -87,6 +92,7 @@
</span><span class="cx">     case CacheType::Unset:
</span><span class="cx">     case CacheType::GetByIdSelf:
</span><span class="cx">     case CacheType::PutByIdReplace:
</span><ins>+    case CacheType::ArrayLength:
</ins><span class="cx">         return;
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="lines">@@ -102,6 +108,7 @@
</span><span class="cx">     case CacheType::Unset:
</span><span class="cx">     case CacheType::GetByIdSelf:
</span><span class="cx">     case CacheType::PutByIdReplace:
</span><ins>+    case CacheType::ArrayLength:
</ins><span class="cx">         return;
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="lines">@@ -257,6 +264,7 @@
</span><span class="cx"> {
</span><span class="cx">     switch (cacheType) {
</span><span class="cx">     case CacheType::Unset:
</span><ins>+    case CacheType::ArrayLength:
</ins><span class="cx">         return true;
</span><span class="cx">     case CacheType::GetByIdSelf:
</span><span class="cx">     case CacheType::PutByIdReplace:
</span><span class="lines">@@ -275,6 +283,7 @@
</span><span class="cx">         return false;
</span><span class="cx">     return u.stub-&gt;containsPC(pc);
</span><span class="cx"> }
</span><del>-#endif
</del><span class="cx"> 
</span><ins>+#endif // ENABLE(JIT)
+
</ins><span class="cx"> } // namespace JSC
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorebytecodeStructureStubInfoh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -57,7 +57,8 @@
</span><span class="cx">     Unset,
</span><span class="cx">     GetByIdSelf,
</span><span class="cx">     PutByIdReplace,
</span><del>-    Stub
</del><ins>+    Stub,
+    ArrayLength
</ins><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> class StructureStubInfo {
</span><span class="lines">@@ -68,6 +69,7 @@
</span><span class="cx">     ~StructureStubInfo();
</span><span class="cx"> 
</span><span class="cx">     void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
</span><ins>+    void initArrayLength();
</ins><span class="cx">     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
</span><span class="cx">     void initStub(CodeBlock*, std::unique_ptr&lt;PolymorphicAccess&gt;);
</span><span class="cx"> 
</span><span class="lines">@@ -143,13 +145,11 @@
</span><span class="cx">         return false;
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    CodeLocationCall callReturnLocation;
</del><ins>+    bool containsPC(void* pc) const;
</ins><span class="cx"> 
</span><span class="cx">     CodeOrigin codeOrigin;
</span><span class="cx">     CallSiteIndex callSiteIndex;
</span><span class="cx"> 
</span><del>-    bool containsPC(void* pc) const;
-
</del><span class="cx">     union {
</span><span class="cx">         struct {
</span><span class="cx">             WriteBarrierBase&lt;Structure&gt; baseObjectStructure;
</span><span class="lines">@@ -165,25 +165,39 @@
</span><span class="cx">     StructureSet bufferedStructures;
</span><span class="cx">     
</span><span class="cx">     struct {
</span><ins>+        CodeLocationLabel start; // This is either the start of the inline IC for *byId caches, or the location of patchable jump for 'in' caches.
+        RegisterSet usedRegisters;
+        uint32_t inlineSize;
+        int32_t deltaFromStartToSlowPathCallLocation;
+        int32_t deltaFromStartToSlowPathStart;
+
</ins><span class="cx">         int8_t baseGPR;
</span><ins>+        int8_t valueGPR;
</ins><span class="cx"> #if USE(JSVALUE32_64)
</span><span class="cx">         int8_t valueTagGPR;
</span><span class="cx">         int8_t baseTagGPR;
</span><span class="cx"> #endif
</span><del>-        int8_t valueGPR;
-        RegisterSet usedRegisters;
-        int32_t deltaCallToDone;
-        int32_t deltaCallToJump;
-        int32_t deltaCallToSlowCase;
-        int32_t deltaCheckImmToCall;
-#if USE(JSVALUE64)
-        int32_t deltaCallToLoadOrStore;
-#else
-        int32_t deltaCallToTagLoadOrStore;
-        int32_t deltaCallToPayloadLoadOrStore;
-#endif
</del><span class="cx">     } patch;
</span><span class="cx"> 
</span><ins>+    CodeLocationCall slowPathCallLocation() { return patch.start.callAtOffset(patch.deltaFromStartToSlowPathCallLocation); }
+    CodeLocationLabel doneLocation() { return patch.start.labelAtOffset(patch.inlineSize); }
+    CodeLocationLabel slowPathStartLocation() { return patch.start.labelAtOffset(patch.deltaFromStartToSlowPathStart); }
+    CodeLocationJump patchableJumpForIn()
+    { 
+        ASSERT(accessType == AccessType::In);
+        return patch.start.jumpAtOffset(0);
+    }
+
+    JSValueRegs valueRegs() const
+    {
+        return JSValueRegs(
+#if USE(JSVALUE32_64)
+            static_cast&lt;GPRReg&gt;(patch.valueTagGPR),
+#endif
+            static_cast&lt;GPRReg&gt;(patch.valueGPR));
+    }
+
+
</ins><span class="cx">     AccessType accessType;
</span><span class="cx">     CacheType cacheType;
</span><span class="cx">     uint8_t countdown; // We repatch only when this is zero. If not zero, we decrement.
</span></span></pre></div>
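<p>The reworked patch struct records a single anchor (patch.start) plus a size and two deltas, and the new accessors recover every interesting location from that anchor. A worked example of the arithmetic with made-up numbers:</p>
<pre>
#include &lt;cstdint&gt;

// Made-up addresses/offsets purely to illustrate the accessors above.
void patchArithmeticExample()
{
    uintptr_t start = 0x1000;                 // patch.start
    uint32_t inlineSize = 23;                 // patch.inlineSize (x86-64 property-access budget)
    int32_t deltaToSlowPathStart = 0x340;     // patch.deltaFromStartToSlowPathStart
    int32_t deltaToSlowPathCall = 0x360;      // patch.deltaFromStartToSlowPathCallLocation

    uintptr_t done = start + inlineSize;                     // 0x1017: doneLocation()
    uintptr_t slowPathStart = start + deltaToSlowPathStart;  // 0x1340: slowPathStartLocation()
    uintptr_t slowPathCall = start + deltaToSlowPathCall;    // 0x1360: slowPathCallLocation()
    (void)done; (void)slowPathStart; (void)slowPathCall;
}
</pre>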
<a id="trunkSourceJavaScriptCoredfgDFGJITCompilercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -258,11 +258,20 @@
</span><span class="cx"> 
</span><span class="cx">     for (unsigned i = 0; i &lt; m_ins.size(); ++i) {
</span><span class="cx">         StructureStubInfo&amp; info = *m_ins[i].m_stubInfo;
</span><del>-        CodeLocationCall callReturnLocation = linkBuffer.locationOf(m_ins[i].m_slowPathGenerator-&gt;call());
-        info.patch.deltaCallToDone = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_done));
-        info.patch.deltaCallToJump = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_jump));
-        info.callReturnLocation = callReturnLocation;
-        info.patch.deltaCallToSlowCase = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator-&gt;label()));
</del><ins>+
+        CodeLocationLabel start = linkBuffer.locationOf(m_ins[i].m_jump);
+        info.patch.start = start;
+
+        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_done));
+        RELEASE_ASSERT(inlineSize &gt;= 0);
+        info.patch.inlineSize = inlineSize;
+
+        info.patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator-&gt;call()));
+
+        info.patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator-&gt;label()));
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     for (unsigned i = 0; i &lt; m_jsCalls.size(); ++i) {
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGOSRExitCompilerCommoncpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -186,8 +186,7 @@
</span><span class="cx">                     baselineCodeBlockForCaller-&gt;findStubInfo(CodeOrigin(callBytecodeIndex));
</span><span class="cx">                 RELEASE_ASSERT(stubInfo);
</span><span class="cx"> 
</span><del>-                jumpTarget = stubInfo-&gt;callReturnLocation.labelAtOffset(
-                    stubInfo-&gt;patch.deltaCallToDone).executableAddress();
</del><ins>+                jumpTarget = stubInfo-&gt;doneLocation().executableAddress();
</ins><span class="cx">                 break;
</span><span class="cx">             }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGSpeculativeJIT32_64cpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -197,9 +197,8 @@
</span><span class="cx">     
</span><span class="cx">     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream-&gt;size());
</span><span class="cx">     JITGetByIdGenerator gen(
</span><del>-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
-        JSValueRegs(baseTagGPROrNone, basePayloadGPR),
-        JSValueRegs(resultTagGPR, resultPayloadGPR), type);
</del><ins>+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), type);
</ins><span class="cx">     
</span><span class="cx">     gen.generateFastPath(m_jit);
</span><span class="cx">     
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoredfgDFGSpeculativeJIT64cpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -168,8 +168,8 @@
</span><span class="cx">         usedRegisters.set(resultGPR, false);
</span><span class="cx">     }
</span><span class="cx">     JITGetByIdGenerator gen(
</span><del>-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, JSValueRegs(baseGPR),
-        JSValueRegs(resultGPR), type);
</del><ins>+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseGPR), JSValueRegs(resultGPR), type);
</ins><span class="cx">     gen.generateFastPath(m_jit);
</span><span class="cx">     
</span><span class="cx">     JITCompiler::JumpList slowCases;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToB3cpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -6183,21 +6183,19 @@
</span><span class="cx"> 
</span><span class="cx">                                 jit.addLinkTask(
</span><span class="cx">                                     [=] (LinkBuffer&amp; linkBuffer) {
</span><del>-                                        CodeLocationCall callReturnLocation =
-                                            linkBuffer.locationOf(slowPathCall);
-                                        stubInfo-&gt;patch.deltaCallToDone =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(done));
-                                        stubInfo-&gt;patch.deltaCallToJump =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(jump));
-                                        stubInfo-&gt;callReturnLocation = callReturnLocation;
-                                        stubInfo-&gt;patch.deltaCallToSlowCase =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(slowPathBegin));
</del><ins>+                                        CodeLocationLabel start = linkBuffer.locationOf(jump);
+                                        stubInfo-&gt;patch.start = start;
+                                        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(done));
+                                        RELEASE_ASSERT(inlineSize &gt;= 0);
+                                        stubInfo-&gt;patch.inlineSize = inlineSize;
+
+                                        stubInfo-&gt;patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathCall));
+
+                                        stubInfo-&gt;patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathBegin));
+
</ins><span class="cx">                                     });
</span><span class="cx">                             });
</span><span class="cx">                     });
</span><span class="lines">@@ -7616,7 +7614,7 @@
</span><span class="cx"> 
</span><span class="cx">                 auto generator = Box&lt;JITGetByIdGenerator&gt;::create(
</span><span class="cx">                     jit.codeBlock(), node-&gt;origin.semantic, callSiteIndex,
</span><del>-                    params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
</del><ins>+                    params.unavailableRegisters(), uid, JSValueRegs(params[1].gpr()),
</ins><span class="cx">                     JSValueRegs(params[0].gpr()), type);
</span><span class="cx"> 
</span><span class="cx">                 generator-&gt;generateFastPath(jit);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitJITInlineCacheGeneratorcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -29,8 +29,9 @@
</span><span class="cx"> #if ENABLE(JIT)
</span><span class="cx"> 
</span><span class="cx"> #include &quot;CodeBlock.h&quot;
</span><ins>+#include &quot;InlineAccess.h&quot;
+#include &quot;JSCInlines.h&quot;
</ins><span class="cx"> #include &quot;LinkBuffer.h&quot;
</span><del>-#include &quot;JSCInlines.h&quot;
</del><span class="cx"> #include &quot;StructureStubInfo.h&quot;
</span><span class="cx"> 
</span><span class="cx"> namespace JSC {
</span><span class="lines">@@ -69,25 +70,19 @@
</span><span class="cx"> 
</span><span class="cx"> void JITByIdGenerator::finalize(LinkBuffer&amp; fastPath, LinkBuffer&amp; slowPath)
</span><span class="cx"> {
</span><del>-    CodeLocationCall callReturnLocation = slowPath.locationOf(m_call);
-    m_stubInfo-&gt;callReturnLocation = callReturnLocation;
-    m_stubInfo-&gt;patch.deltaCheckImmToCall = MacroAssembler::differenceBetweenCodePtr(
-        fastPath.locationOf(m_structureImm), callReturnLocation);
-    m_stubInfo-&gt;patch.deltaCallToJump = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_structureCheck));
-#if USE(JSVALUE64)
-    m_stubInfo-&gt;patch.deltaCallToLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#else
-    m_stubInfo-&gt;patch.deltaCallToTagLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_tagLoadOrStore));
-    m_stubInfo-&gt;patch.deltaCallToPayloadLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#endif
-    m_stubInfo-&gt;patch.deltaCallToSlowCase = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, slowPath.locationOf(m_slowPathBegin));
-    m_stubInfo-&gt;patch.deltaCallToDone = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_done));
</del><ins>+    ASSERT(m_start.isSet());
+    CodeLocationLabel start = fastPath.locationOf(m_start);
+    m_stubInfo-&gt;patch.start = start;
+
+    int32_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+        start, fastPath.locationOf(m_done));
+    ASSERT(inlineSize &gt; 0);
+    m_stubInfo-&gt;patch.inlineSize = inlineSize;
+
+    m_stubInfo-&gt;patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathCall));
+    m_stubInfo-&gt;patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathBegin));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void JITByIdGenerator::finalize(LinkBuffer&amp; linkBuffer)
</span><span class="lines">@@ -95,19 +90,23 @@
</span><span class="cx">     finalize(linkBuffer, linkBuffer);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void JITByIdGenerator::generateFastPathChecks(MacroAssembler&amp; jit)
</del><ins>+void JITByIdGenerator::generateFastCommon(MacroAssembler&amp; jit, size_t inlineICSize)
</ins><span class="cx"> {
</span><del>-    m_structureCheck = jit.patchableBranch32WithPatch(
-        MacroAssembler::NotEqual,
-        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
-        m_structureImm, MacroAssembler::TrustedImm32(0));
</del><ins>+    m_start = jit.label();
+    size_t startSize = jit.m_assembler.buffer().codeSize();
+    m_slowPathJump = jit.jump();
+    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
+    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
+    jit.emitNops(nopsToEmitInBytes);
+    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    m_done = jit.label();
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> JITGetByIdGenerator::JITGetByIdGenerator(
</span><span class="cx">     CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet&amp; usedRegisters,
</span><del>-    JSValueRegs base, JSValueRegs value, AccessType accessType)
-    : JITByIdGenerator(
-        codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
</del><ins>+    UniquedStringImpl* propertyName, JSValueRegs base, JSValueRegs value, AccessType accessType)
+    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    , m_isLengthAccess(propertyName == codeBlock-&gt;vm()-&gt;propertyNames-&gt;length.impl())
</ins><span class="cx"> {
</span><span class="cx">     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
</span><span class="cx"> }
</span><span class="lines">@@ -114,19 +113,7 @@
</span><span class="cx"> 
</span><span class="cx"> void JITGetByIdGenerator::generateFastPath(MacroAssembler&amp; jit)
</span><span class="cx"> {
</span><del>-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.load64WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#else
-    m_tagLoadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.tagGPR()).label();
-    m_loadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#endif
-    
-    m_done = jit.label();
</del><ins>+    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> JITPutByIdGenerator::JITPutByIdGenerator(
</span><span class="lines">@@ -143,19 +130,7 @@
</span><span class="cx"> 
</span><span class="cx"> void JITPutByIdGenerator::generateFastPath(MacroAssembler&amp; jit)
</span><span class="cx"> {
</span><del>-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.store64WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#else
-    m_tagLoadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.tagGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-    m_loadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#endif
-    
-    m_done = jit.label();
</del><ins>+    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> V_JITOperation_ESsiJJI JITPutByIdGenerator::slowPathFunction()
</span></span></pre></div>
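<p>generateFastCommon() replaces the old structure-check fast path with a fixed-size placeholder: a label, an unconditional jump to the slow path, and enough nops to pad the region to the per-IC budget, so InlineAccess can later overwrite exactly that many bytes. A worked example of the padding arithmetic under an assumed jump encoding:</p>
<pre>
#include &lt;cstddef&gt;

// Assuming a 5-byte unconditional jump on x86-64 and the 23-byte
// property-access budget, 18 bytes of nops are emitted so the whole
// placeholder is exactly inlineICSize bytes long.
size_t nopBytesForFastCommon(size_t inlineICSize /* e.g. 23 */, size_t jumpSize /* e.g. 5 */)
{
    return inlineICSize - jumpSize; // 23 - 5 = 18
}
</pre>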
<a id="trunkSourceJavaScriptCorejitJITInlineCacheGeneratorh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITInlineCacheGenerator.h        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -68,30 +68,30 @@
</span><span class="cx">     void reportSlowPathCall(MacroAssembler::Label slowPathBegin, MacroAssembler::Call call)
</span><span class="cx">     {
</span><span class="cx">         m_slowPathBegin = slowPathBegin;
</span><del>-        m_call = call;
</del><ins>+        m_slowPathCall = call;
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
</span><del>-    MacroAssembler::Jump slowPathJump() const { return m_structureCheck.m_jump; }
</del><ins>+    MacroAssembler::Jump slowPathJump() const
+    {
+        ASSERT(m_slowPathJump.isSet());
+        return m_slowPathJump;
+    }
</ins><span class="cx"> 
</span><span class="cx">     void finalize(LinkBuffer&amp; fastPathLinkBuffer, LinkBuffer&amp; slowPathLinkBuffer);
</span><span class="cx">     void finalize(LinkBuffer&amp;);
</span><span class="cx">     
</span><span class="cx"> protected:
</span><del>-    void generateFastPathChecks(MacroAssembler&amp;);
</del><ins>+    void generateFastCommon(MacroAssembler&amp;, size_t size);
</ins><span class="cx">     
</span><span class="cx">     JSValueRegs m_base;
</span><span class="cx">     JSValueRegs m_value;
</span><span class="cx">     
</span><del>-    MacroAssembler::DataLabel32 m_structureImm;
-    MacroAssembler::PatchableJump m_structureCheck;
-    AssemblerLabel m_loadOrStore;
-#if USE(JSVALUE32_64)
-    AssemblerLabel m_tagLoadOrStore;
-#endif
</del><ins>+    MacroAssembler::Label m_start;
</ins><span class="cx">     MacroAssembler::Label m_done;
</span><span class="cx">     MacroAssembler::Label m_slowPathBegin;
</span><del>-    MacroAssembler::Call m_call;
</del><ins>+    MacroAssembler::Call m_slowPathCall;
+    MacroAssembler::Jump m_slowPathJump;
</ins><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> class JITGetByIdGenerator : public JITByIdGenerator {
</span><span class="lines">@@ -99,10 +99,13 @@
</span><span class="cx">     JITGetByIdGenerator() { }
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator(
</span><del>-        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet&amp; usedRegisters, JSValueRegs base,
-        JSValueRegs value, AccessType);
</del><ins>+        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet&amp; usedRegisters, UniquedStringImpl* propertyName,
+        JSValueRegs base, JSValueRegs value, AccessType);
</ins><span class="cx">     
</span><span class="cx">     void generateFastPath(MacroAssembler&amp;);
</span><ins>+
+private:
+    bool m_isLengthAccess;
</ins><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> class JITPutByIdGenerator : public JITByIdGenerator {
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitJITPropertyAccesscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -222,7 +222,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
</del><ins>+        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx"> 
</span><span class="cx">     fastDoneCase = jump();
</span><span class="lines">@@ -571,6 +571,7 @@
</span><span class="cx"> {
</span><span class="cx">     int resultVReg = currentInstruction[1].u.operand;
</span><span class="cx">     int baseVReg = currentInstruction[2].u.operand;
</span><ins>+    const Identifier* ident = &amp;(m_codeBlock-&gt;identifier(currentInstruction[3].u.operand));
</ins><span class="cx"> 
</span><span class="cx">     emitGetVirtualRegister(baseVReg, regT0);
</span><span class="cx"> 
</span><span class="lines">@@ -578,7 +579,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
</del><ins>+        ident-&gt;impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx">     addSlowCase(gen.slowPathJump());
</span><span class="cx">     m_getByIds.append(gen);
</span><span class="lines">@@ -619,7 +620,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
</del><ins>+        ident-&gt;impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx">     addSlowCase(gen.slowPathJump());
</span><span class="cx">     m_getByIds.append(gen);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitJITPropertyAccess32_64cpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -292,7 +292,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
</del><ins>+        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx"> 
</span><span class="cx">     fastDoneCase = jump();
</span><span class="lines">@@ -587,6 +587,7 @@
</span><span class="cx"> {
</span><span class="cx">     int dst = currentInstruction[1].u.operand;
</span><span class="cx">     int base = currentInstruction[2].u.operand;
</span><ins>+    const Identifier* ident = &amp;(m_codeBlock-&gt;identifier(currentInstruction[3].u.operand));
</ins><span class="cx"> 
</span><span class="cx">     emitLoad(base, regT1, regT0);
</span><span class="cx">     emitJumpSlowCaseIfNotJSCell(base, regT1);
</span><span class="lines">@@ -593,7 +594,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
</del><ins>+        ident-&gt;impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx">     addSlowCase(gen.slowPathJump());
</span><span class="cx">     m_getByIds.append(gen);
</span><span class="lines">@@ -634,7 +635,7 @@
</span><span class="cx"> 
</span><span class="cx">     JITGetByIdGenerator gen(
</span><span class="cx">         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
</span><del>-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
</del><ins>+        ident-&gt;impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
</ins><span class="cx">     gen.generateFastPath(*this);
</span><span class="cx">     addSlowCase(gen.slowPathJump());
</span><span class="cx">     m_getByIds.append(gen);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorejitRepatchcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/Repatch.cpp (202213 => 202214)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/Repatch.cpp        2016-06-19 18:26:52 UTC (rev 202213)
+++ trunk/Source/JavaScriptCore/jit/Repatch.cpp        2016-06-19 19:42:18 UTC (rev 202214)
</span><span class="lines">@@ -38,6 +38,7 @@
</span><span class="cx"> #include &quot;GCAwareJITStubRoutine.h&quot;
</span><span class="cx"> #include &quot;GetterSetter.h&quot;
</span><span class="cx"> #include &quot;ICStats.h&quot;
</span><ins>+#include &quot;InlineAccess.h&quot;
</ins><span class="cx"> #include &quot;JIT.h&quot;
</span><span class="cx"> #include &quot;JITInlines.h&quot;
</span><span class="cx"> #include &quot;LinkBuffer.h&quot;
</span><span class="lines">@@ -90,95 +91,6 @@
</span><span class="cx">     MacroAssembler::repatchCall(call, newCalleeFunction);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-static void repatchByIdSelfAccess(
-    CodeBlock* codeBlock, StructureStubInfo&amp; stubInfo, Structure* structure,
-    PropertyOffset offset, const FunctionPtr&amp; slowPathFunction,
-    bool compact)
-{
-    // Only optimize once!
-    repatchCall(codeBlock, stubInfo.callReturnLocation, slowPathFunction);
-
-    // Patch the structure check &amp; the offset of the load.
-    MacroAssembler::repatchInt32(
-        stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall),
-        bitwise_cast&lt;int32_t&gt;(structure-&gt;id()));
-#if USE(JSVALUE64)
-    if (compact)
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-    else
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-#elif USE(JSVALUE32_64)
-    if (compact) {
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    } else {
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    }
-#endif
-}
-
-static void resetGetByIDCheckAndLoad(StructureStubInfo&amp; stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast&lt;MacroAssembler::RegisterID&gt;(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast&lt;int32_t&gt;(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast&lt;int32_t&gt;(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void resetPutByIDCheckAndLoad(StructureStubInfo&amp; stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast&lt;MacroAssembler::RegisterID&gt;(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast&lt;int32_t&gt;(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast&lt;int32_t&gt;(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void replaceWithJump(StructureStubInfo&amp; stubInfo, const MacroAssemblerCodePtr target)
-{
-    RELEASE_ASSERT(target);
-    
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::replaceWithJump(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
-                stubInfo.callReturnLocation.dataLabel32AtOffset(
-                    -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
-            CodeLocationLabel(target));
-        return;
-    }
-
-    resetGetByIDCheckAndLoad(stubInfo);
-    
-    MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(
-            stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(target));
-}
-
</del><span class="cx"> enum InlineCacheAction {
</span><span class="cx">     GiveUpOnCache,
</span><span class="cx">     RetryCacheLater,
</span><span class="lines">@@ -241,9 +153,21 @@
</span><span class="cx">     std::unique_ptr&lt;AccessCase&gt; newCase;
</span><span class="cx"> 
</span><span class="cx">     if (propertyName == vm.propertyNames-&gt;length) {
</span><del>-        if (isJSArray(baseValue))
</del><ins>+        if (isJSArray(baseValue)) {
+            if (stubInfo.cacheType == CacheType::Unset
+                &amp;&amp; slot.slotBase() == baseValue
+                &amp;&amp; InlineAccess::isCacheableArrayLength(stubInfo, jsCast&lt;JSArray*&gt;(baseValue))) {
+
+                bool generatedCodeInline = InlineAccess::generateArrayLength(*codeBlock-&gt;vm(), stubInfo, jsCast&lt;JSArray*&gt;(baseValue));
+                if (generatedCodeInline) {
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                    stubInfo.initArrayLength();
+                    return RetryCacheLater;
+                }
+            }
+
</ins><span class="cx">             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::ArrayLength);
</span><del>-        else if (isJSString(baseValue))
</del><ins>+        } else if (isJSString(baseValue))
</ins><span class="cx">             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::StringLength);
</span><span class="cx">         else if (DirectArguments* arguments = jsDynamicCast&lt;DirectArguments*&gt;(baseValue)) {
</span><span class="cx">             // If there were overrides, then we can handle this as a normal property load! Guarding
</span><span class="lines">@@ -276,22 +200,23 @@
</span><span class="cx">         InlineCacheAction action = actionForCell(vm, baseCell);
</span><span class="cx">         if (action != AttemptToCache)
</span><span class="cx">             return action;
</span><del>-        
</del><ins>+
</ins><span class="cx">         // Optimize self access.
</span><span class="cx">         if (stubInfo.cacheType == CacheType::Unset
</span><span class="cx">             &amp;&amp; slot.isCacheableValue()
</span><span class="cx">             &amp;&amp; slot.slotBase() == baseValue
</span><span class="cx">             &amp;&amp; !slot.watchpointSet()
</span><del>-            &amp;&amp; isInlineOffset(slot.cachedOffset())
-            &amp;&amp; MacroAssembler::isCompactPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
-            &amp;&amp; action == AttemptToCache
</del><span class="cx">             &amp;&amp; !structure-&gt;needImpurePropertyWatchpoint()
</span><span class="cx">             &amp;&amp; !loadTargetFromProxy) {
</span><del>-            LOG_IC((ICEvent::GetByIdSelfPatch, structure-&gt;classInfo(), propertyName));
-            structure-&gt;startWatchingPropertyForReplacements(vm, slot.cachedOffset());
-            repatchByIdSelfAccess(codeBlock, stubInfo, structure, slot.cachedOffset(), appropriateOptimizingGetByIdFunction(kind), true);
-            stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
-            return RetryCacheLater;
</del><ins>+
+            bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(*codeBlock-&gt;vm(), stubInfo, structure, slot.cachedOffset());
+            if (generatedCodeInline) {
+                LOG_IC((ICEvent::GetByIdSelfPatch, structure-&gt;classInfo(), propertyName));
+                structure-&gt;startWatchingPropertyForReplacements(vm, slot.cachedOffset());
+                repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
+                return RetryCacheLater;
+            }
</ins><span class="cx">         }
</span><span class="cx"> 
</span><span class="cx">         PropertyOffset offset = slot.isUnset() ? invalidOffset : slot.cachedOffset();
</span><span class="lines">@@ -370,7 +295,7 @@
</span><span class="cx">         LOG_IC((ICEvent::GetByIdReplaceWithJump, baseValue.classInfoOrNull(), propertyName));
</span><span class="cx">         
</span><span class="cx">         RELEASE_ASSERT(result.code());
</span><del>-        replaceWithJump(stubInfo, result.code());
</del><ins>+        InlineAccess::rewireStubAsJump(exec-&gt;vm(), stubInfo, CodeLocationLabel(result.code()));
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
</span><span class="lines">@@ -382,7 +307,7 @@
</span><span class="cx">     GCSafeConcurrentJITLocker locker(exec-&gt;codeBlock()-&gt;m_lock, exec-&gt;vm().heap);
</span><span class="cx">     
</span><span class="cx">     if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
</span><del>-        repatchCall(exec-&gt;codeBlock(), stubInfo.callReturnLocation, appropriateGenericGetByIdFunction(kind));
</del><ins>+        repatchCall(exec-&gt;codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> static V_JITOperation_ESsiJJI appropriateGenericPutByIdFunction(const PutPropertySlot &amp;slot, PutKind putKind)
</span><span class="lines">@@ -433,18 +358,17 @@
</span><span class="cx">             structure-&gt;didCachePropertyReplacement(vm, slot.cachedOffset());
</span><span class="cx">         
</span><span class="cx">             if (stubInfo.cacheType == CacheType::Unset
</span><del>-                &amp;&amp; isInlineOffset(slot.cachedOffset())
-                &amp;&amp; MacroAssembler::isPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
</del><ins>+                &amp;&amp; InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
</ins><span class="cx">                 &amp;&amp; !structure-&gt;needImpurePropertyWatchpoint()
</span><span class="cx">                 &amp;&amp; !structure-&gt;inferredTypeFor(ident.impl())) {
</span><span class="cx">                 
</span><del>-                LOG_IC((ICEvent::PutByIdSelfPatch, structure-&gt;classInfo(), ident));
-                
-                repatchByIdSelfAccess(
-                    codeBlock, stubInfo, structure, slot.cachedOffset(),
-                    appropriateOptimizingPutByIdFunction(slot, putKind), false);
-                stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
-                return RetryCacheLater;
</del><ins>+                bool generatedCodeInline = InlineAccess::generateSelfPropertyReplace(vm, stubInfo, structure, slot.cachedOffset());
+                if (generatedCodeInline) {
+                    LOG_IC((ICEvent::PutByIdSelfPatch, structure-&gt;classInfo(), ident));
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));
+                    stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
+                    return RetryCacheLater;
+                }
</ins><span class="cx">             }
</span><span class="cx"> 
</span><span class="cx">             newCase = AccessCase::replace(vm, codeBlock, structure, slot.cachedOffset());
</span><span class="lines">@@ -524,11 +448,8 @@
</span><span class="cx">         LOG_IC((ICEvent::PutByIdReplaceWithJump, structure-&gt;classInfo(), ident));
</span><span class="cx">         
</span><span class="cx">         RELEASE_ASSERT(result.code());
</span><del>-        resetPutByIDCheckAndLoad(stubInfo);
-        MacroAssembler::repatchJump(
-            stubInfo.callReturnLocation.jumpAtOffset(
-                stubInfo.patch.deltaCallToJump),
-            CodeLocationLabel(result.code()));
</del><ins>+
+        InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(result.code()));
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
</span><span class="lines">@@ -540,7 +461,7 @@
</span><span class="cx">     GCSafeConcurrentJITLocker locker(exec-&gt;codeBlock()-&gt;m_lock, exec-&gt;vm().heap);
</span><span class="cx">     
</span><span class="cx">     if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache)
</span><del>-        repatchCall(exec-&gt;codeBlock(), stubInfo.callReturnLocation, appropriateGenericPutByIdFunction(slot, putKind));
</del><ins>+        repatchCall(exec-&gt;codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> static InlineCacheAction tryRepatchIn(
</span><span class="lines">@@ -586,8 +507,9 @@
</span><span class="cx">         LOG_IC((ICEvent::InReplaceWithJump, structure-&gt;classInfo(), ident));
</span><span class="cx">         
</span><span class="cx">         RELEASE_ASSERT(result.code());
</span><ins>+
</ins><span class="cx">         MacroAssembler::repatchJump(
</span><del>-            stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
</del><ins>+            stubInfo.patchableJumpForIn(),
</ins><span class="cx">             CodeLocationLabel(result.code()));
</span><span class="cx">     }
</span><span class="cx">     
</span><span class="lines">@@ -600,7 +522,7 @@
</span><span class="cx"> {
</span><span class="cx">     SuperSamplerScope superSamplerScope(false);
</span><span class="cx">     if (tryRepatchIn(exec, base, ident, wasFound, slot, stubInfo) == GiveUpOnCache)
</span><del>-        repatchCall(exec-&gt;codeBlock(), stubInfo.callReturnLocation, operationIn);
</del><ins>+        repatchCall(exec-&gt;codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> static void linkSlowFor(VM*, CallLinkInfo&amp; callLinkInfo, MacroAssemblerCodeRef codeRef)
</span><span class="lines">@@ -972,14 +894,13 @@
</span><span class="cx"> 
</span><span class="cx"> void resetGetByID(CodeBlock* codeBlock, StructureStubInfo&amp; stubInfo, GetByIDKind kind)
</span><span class="cx"> {
</span><del>-    repatchCall(codeBlock, stubInfo.callReturnLocation, appropriateOptimizingGetByIdFunction(kind));
-    resetGetByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
</del><ins>+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+    InlineAccess::rewireStubAsJump(*codeBlock-&gt;vm(), stubInfo, stubInfo.slowPathStartLocation());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void resetPutByID(CodeBlock* codeBlock, StructureStubInfo&amp; stubInfo)
</span><span class="cx"> {
</span><del>-    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast&lt;V_JITOperation_ESsiJJI&gt;(readCallTarget(codeBlock, stubInfo.callReturnLocation).executableAddress());
</del><ins>+    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast&lt;V_JITOperation_ESsiJJI&gt;(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());
</ins><span class="cx">     V_JITOperation_ESsiJJI optimizedFunction;
</span><span class="cx">     if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize)
</span><span class="cx">         optimizedFunction = operationPutByIdStrictOptimize;
</span><span class="lines">@@ -991,14 +912,14 @@
</span><span class="cx">         ASSERT(unoptimizedFunction == operationPutByIdDirectNonStrict || unoptimizedFunction == operationPutByIdDirectNonStrictOptimize);
</span><span class="cx">         optimizedFunction = operationPutByIdDirectNonStrictOptimize;
</span><span class="cx">     }
</span><del>-    repatchCall(codeBlock, stubInfo.callReturnLocation, optimizedFunction);
-    resetPutByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
</del><ins>+
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), optimizedFunction);
+    InlineAccess::rewireStubAsJump(*codeBlock-&gt;vm(), stubInfo, stubInfo.slowPathStartLocation());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void resetIn(CodeBlock*, StructureStubInfo&amp; stubInfo)
</span><span class="cx"> {
</span><del>-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
</del><ins>+    MacroAssembler::repatchJump(stubInfo.patchableJumpForIn(), stubInfo.slowPathStartLocation());
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> } // namespace JSC
</span></span></pre>
</div>
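<p>The Repatch.cpp hunks delete the hand-rolled patching helpers (repatchByIdSelfAccess, resetGetByIDCheckAndLoad, resetPutByIDCheckAndLoad, and the local replaceWithJump) and route everything through the new InlineAccess entry points, addressing the inline region via StructureStubInfo accessors (slowPathCallLocation(), slowPathStartLocation(), patchableJumpForIn()) rather than raw deltas off callReturnLocation. Below is a condensed, non-literal sketch of how the get_by_id caching decision reads after this patch; it assumes the InlineAccess signatures only as far as the call sites above use them, and elides the remaining guards (watchpoints, proxies) and the polymorphic-stub path:</p>
<pre>
// Condensed sketch of the post-patch fast paths in tryCacheGetByID; illustrative,
// not an excerpt from the tree.
static InlineCacheAction tryCacheGetByIDSketch(
    ExecState* exec, CodeBlock* codeBlock, JSValue baseValue,
    const Identifier&amp; propertyName, const PropertySlot&amp; slot,
    StructureStubInfo&amp; stubInfo, GetByIDKind kind)
{
    VM&amp; vm = exec-&gt;vm();

    // 1. Array length: if the reserved inline region can hold the length load,
    //    emit it in place and point the slow-path call at the optimizing thunk.
    if (propertyName == vm.propertyNames-&gt;length
        &amp;&amp; isJSArray(baseValue)
        &amp;&amp; stubInfo.cacheType == CacheType::Unset
        &amp;&amp; slot.slotBase() == baseValue
        &amp;&amp; InlineAccess::isCacheableArrayLength(stubInfo, jsCast&lt;JSArray*&gt;(baseValue))
        &amp;&amp; InlineAccess::generateArrayLength(vm, stubInfo, jsCast&lt;JSArray*&gt;(baseValue))) {
        repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
        stubInfo.initArrayLength();
        return RetryCacheLater;
    }

    // 2. Self access: InlineAccess now decides for itself whether the cached
    //    offset fits inline, replacing the old isInlineOffset()/compact-pointer
    //    checks and the repatchByIdSelfAccess() fixups.
    Structure* structure = baseValue.asCell()-&gt;structure(vm);
    if (stubInfo.cacheType == CacheType::Unset
        &amp;&amp; slot.isCacheableValue()
        &amp;&amp; slot.slotBase() == baseValue
        &amp;&amp; InlineAccess::generateSelfPropertyAccess(vm, stubInfo, structure, slot.cachedOffset())) {
        structure-&gt;startWatchingPropertyForReplacements(vm, slot.cachedOffset());
        repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
        stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
        return RetryCacheLater;
    }

    // 3. Anything else regenerates a polymorphic stub; the inline region is then
    //    rewired to a single jump to that stub:
    //    InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(stubCode));
    return RetryCacheLater;
}
</pre>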
</div>

</body>
</html>