<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[190860] trunk/Source</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.webkit.org/projects/webkit/changeset/190860">190860</a></dd>
<dt>Author</dt> <dd>fpizlo@apple.com</dd>
<dt>Date</dt> <dd>2015-10-12 10:56:26 -0700 (Mon, 12 Oct 2015)</dd>
</dl>

<h3>Log Message</h3>
<pre>FTL should generate code to call slow paths lazily
https://bugs.webkit.org/show_bug.cgi?id=149936

Reviewed by Saam Barati.

Source/JavaScriptCore:

We often have complex slow paths in FTL-generated code. Those slow paths may never run. Even
if they do run, they don't need stellar performance. So, it doesn't make sense to have LLVM
worry about compiling such slow path code.

This patch enables us to use our own MacroAssembler for compiling the slow path inside FTL
code. It does this by using a crazy lambda thingy (see FTLLowerDFGToLLVM.cpp's lazySlowPath()
and its documentation). The result is quite natural to use.
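
As a concrete sketch (taken from the documentation comment that this patch adds to
FTLInlineCacheDescriptor.h; "function", "stackmapID" and "callSiteIndex" are stand-ins for
whatever the particular node supplies), the low-level way of requesting a lazy slow path is:

    m_ftlState.lazySlowPaths.append(
        LazySlowPathDescriptor(
            stackmapID, callSiteIndex,
            createSharedTask&lt;RefPtr&lt;LazySlowPath::Generator&gt;(const Vector&lt;Location&gt;&amp;)&gt;(
                [] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
                    // This lambda just records the registers that the stackmap assigned and
                    // returns a SharedTask that will generate the slow path on first use.
                    return createLazyCallGenerator(
                        function, locations[0].directGPR(), locations[1].directGPR());
                })));

Usually the lazySlowPath() helper takes care of building the descriptor and the patchpoint.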

Even for straight slow path calls via something like vmCall(), the lazySlowPath offers the
benefit that the call marshalling and the exception checking are not expressed using LLVM IR
and do not require LLVM to think about them. It also has the benefit that we never generate the
code if it never runs. That's great, since function calls usually involve ~10 instructions
total (move arguments to argument registers, make the call, check exception, etc.).

This patch adds the lazy slow path abstraction and uses it for some slow paths in the FTL.
The code we generate with lazy slow paths is worse than the code that LLVM would have
generated. Therefore, a lazy slow path only makes sense when we have strong evidence that
the slow path will execute infrequently relative to the fast path. This completely precludes
the use of lazy slow paths for out-of-line Nodes that unconditionally call a C++ function.
It also precludes their use for the GetByVal out-of-bounds handler, since when we generate
a GetByVal with an out-of-bounds handler it means that we only know that the out-of-bounds
case executed at least once. So, for all we know, it may actually be the common case. So,
this patch only deploys the lazy slow path for GC slow paths and masquerades-as-undefined
slow paths. It makes sense for GC slow paths because those have a statistical guarantee of
slow path frequency - probably bounded at less than 1/10. It makes sense for masquerades-as-
undefined because we can say quite confidently that this is an uncommon scenario on the
modern Web.

Something that's always been challenging about abstractions involving the MacroAssembler is
that linking is a separate phase, and there is no way for someone who is just given access to
the MacroAssembler&amp; to emit code that requires linking, since linking happens once we have
emitted all code and we are creating the LinkBuffer. Moreover, the FTL requires that the
final parts of linking happen on the main thread. This patch ran into this issue, and solved
it comprehensively, by introducing MacroAssembler::addLinkTask(). This takes a lambda and
runs it at the bitter end of linking - when performFinalization() is called. This ensures that
the task added by addLinkTask() runs on the main thread. This patch doesn't replace all of
the previously existing idioms for dealing with this issue; we can do that later.
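
A minimal sketch of the new hook, assuming a hypothetical "jit" assembler, a recorded "call"
and an "operationFoo" target (the real call sites vary):

    jit.addLinkTask(
        [=] (LinkBuffer&amp; linkBuffer) {
            // This runs on the main thread during performFinalization(), once the
            // LinkBuffer exists, so it can resolve final code locations.
            linkBuffer.link(call, FunctionPtr(operationFoo));
        });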

This shows small speed-ups on a bunch of things. No big win on any benchmark aggregate. But
mainly this is done for https://bugs.webkit.org/show_bug.cgi?id=149852, where we found that
outlining the slow path in this way was a significant speed boost.

* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::replaceWithAddressComputation):
(JSC::AbstractMacroAssembler::addLinkTask):
(JSC::AbstractMacroAssembler::AbstractMacroAssembler):
* assembler/LinkBuffer.cpp:
(JSC::LinkBuffer::linkCode):
(JSC::LinkBuffer::allocate):
(JSC::LinkBuffer::performFinalization):
* assembler/LinkBuffer.h:
(JSC::LinkBuffer::wasAlreadyDisassembled):
(JSC::LinkBuffer::didAlreadyDisassemble):
(JSC::LinkBuffer::vm):
(JSC::LinkBuffer::executableOffsetFor):
* bytecode/CodeOrigin.h:
(JSC::CodeOrigin::CodeOrigin):
(JSC::CodeOrigin::isSet):
(JSC::CodeOrigin::operator bool):
(JSC::CodeOrigin::isHashTableDeletedValue):
(JSC::CodeOrigin::operator!): Deleted.
* ftl/FTLCompile.cpp:
(JSC::FTL::mmAllocateDataSection):
* ftl/FTLInlineCacheDescriptor.h:
(JSC::FTL::InlineCacheDescriptor::InlineCacheDescriptor):
(JSC::FTL::CheckInDescriptor::CheckInDescriptor):
(JSC::FTL::LazySlowPathDescriptor::LazySlowPathDescriptor):
* ftl/FTLJITCode.h:
* ftl/FTLJITFinalizer.cpp:
(JSC::FTL::JITFinalizer::finalizeFunction):
* ftl/FTLJITFinalizer.h:
* ftl/FTLLazySlowPath.cpp: Added.
(JSC::FTL::LazySlowPath::LazySlowPath):
(JSC::FTL::LazySlowPath::~LazySlowPath):
(JSC::FTL::LazySlowPath::generate):
* ftl/FTLLazySlowPath.h: Added.
(JSC::FTL::LazySlowPath::createGenerator):
(JSC::FTL::LazySlowPath::patchpoint):
(JSC::FTL::LazySlowPath::usedRegisters):
(JSC::FTL::LazySlowPath::callSiteIndex):
(JSC::FTL::LazySlowPath::stub):
* ftl/FTLLazySlowPathCall.h: Added.
(JSC::FTL::createLazyCallGenerator):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::DFG::LowerDFGToLLVM::compileCreateActivation):
(JSC::FTL::DFG::LowerDFGToLLVM::compileNewFunction):
(JSC::FTL::DFG::LowerDFGToLLVM::compileCreateDirectArguments):
(JSC::FTL::DFG::LowerDFGToLLVM::compileNewArrayWithSize):
(JSC::FTL::DFG::LowerDFGToLLVM::compileMakeRope):
(JSC::FTL::DFG::LowerDFGToLLVM::compileNotifyWrite):
(JSC::FTL::DFG::LowerDFGToLLVM::compileIsObjectOrNull):
(JSC::FTL::DFG::LowerDFGToLLVM::compileIsFunction):
(JSC::FTL::DFG::LowerDFGToLLVM::compileIn):
(JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeNewObject):
(JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeCreateActivation):
(JSC::FTL::DFG::LowerDFGToLLVM::compileCheckWatchdogTimer):
(JSC::FTL::DFG::LowerDFGToLLVM::allocatePropertyStorageWithSizeImpl):
(JSC::FTL::DFG::LowerDFGToLLVM::allocateObject):
(JSC::FTL::DFG::LowerDFGToLLVM::allocateJSArray):
(JSC::FTL::DFG::LowerDFGToLLVM::buildTypeOf):
(JSC::FTL::DFG::LowerDFGToLLVM::sensibleDoubleToInt32):
(JSC::FTL::DFG::LowerDFGToLLVM::lazySlowPath):
(JSC::FTL::DFG::LowerDFGToLLVM::speculate):
(JSC::FTL::DFG::LowerDFGToLLVM::emitStoreBarrier):
* ftl/FTLOperations.cpp:
(JSC::FTL::operationMaterializeObjectInOSR):
(JSC::FTL::compileFTLLazySlowPath):
* ftl/FTLOperations.h:
* ftl/FTLSlowPathCall.cpp:
(JSC::FTL::SlowPathCallContext::SlowPathCallContext):
(JSC::FTL::SlowPathCallContext::~SlowPathCallContext):
(JSC::FTL::SlowPathCallContext::keyWithTarget):
(JSC::FTL::SlowPathCallContext::makeCall):
(JSC::FTL::callSiteIndexForCodeOrigin):
(JSC::FTL::storeCodeOrigin): Deleted.
(JSC::FTL::callOperation): Deleted.
* ftl/FTLSlowPathCall.h:
(JSC::FTL::callOperation):
* ftl/FTLState.h:
* ftl/FTLThunks.cpp:
(JSC::FTL::genericGenerationThunkGenerator):
(JSC::FTL::osrExitGenerationThunkGenerator):
(JSC::FTL::lazySlowPathGenerationThunkGenerator):
(JSC::FTL::registerClobberCheck):
* ftl/FTLThunks.h:
* interpreter/CallFrame.h:
(JSC::CallSiteIndex::CallSiteIndex):
(JSC::CallSiteIndex::operator bool):
(JSC::CallSiteIndex::bits):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArgument):
(JSC::CCallHelpers::setupArgumentsWithExecState):
* jit/JITOperations.cpp:

Source/WTF:

Enables SharedTask to handle any function type, not just void().

It's probably better to use SharedTask instead of std::function in performance-sensitive
code. std::function uses the system malloc and has copy semantics. SharedTask uses FastMalloc
and has aliasing semantics. So, you can just trust that it will have sensible performance
characteristics.
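
A minimal sketch of the generalized form (hypothetical example; the point is that the template
argument may now be any function type, and run() takes that function type's arguments):

    RefPtr&lt;SharedTask&lt;int(int)&gt;&gt; task = createSharedTask&lt;int(int)&gt;(
        [] (int value) -&gt; int {
            return value + 1;
        });
    int result = task-&gt;run(5); // Invokes the lambda through the shared task.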

* wtf/ParallelHelperPool.cpp:
(WTF::ParallelHelperClient::~ParallelHelperClient):
(WTF::ParallelHelperClient::setTask):
(WTF::ParallelHelperClient::doSomeHelping):
(WTF::ParallelHelperClient::runTaskInParallel):
(WTF::ParallelHelperClient::finish):
(WTF::ParallelHelperClient::claimTask):
(WTF::ParallelHelperClient::runTask):
(WTF::ParallelHelperPool::doSomeHelping):
(WTF::ParallelHelperPool::helperThreadBody):
* wtf/ParallelHelperPool.h:
(WTF::ParallelHelperClient::setFunction):
(WTF::ParallelHelperClient::runFunctionInParallel):
(WTF::ParallelHelperClient::pool):
* wtf/SharedTask.h:
(WTF::createSharedTask):
(WTF::SharedTask::SharedTask): Deleted.
(WTF::SharedTask::~SharedTask): Deleted.
(WTF::SharedTaskFunctor::SharedTaskFunctor): Deleted.</pre>

<h3>Modified Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreCMakeListstxt">trunk/Source/JavaScriptCore/CMakeLists.txt</a></li>
<li><a href="#trunkSourceJavaScriptCoreChangeLog">trunk/Source/JavaScriptCore/ChangeLog</a></li>
<li><a href="#trunkSourceJavaScriptCoreJavaScriptCorevcxprojJavaScriptCorevcxproj">trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj</a></li>
<li><a href="#trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj">trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh">trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerLinkBuffercpp">trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreassemblerLinkBufferh">trunk/Source/JavaScriptCore/assembler/LinkBuffer.h</a></li>
<li><a href="#trunkSourceJavaScriptCorebytecodeCodeOriginh">trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLCompilecpp">trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLInlineCacheDescriptorh">trunk/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLJITCodeh">trunk/Source/JavaScriptCore/ftl/FTLJITCode.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLJITFinalizercpp">trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLJITFinalizerh">trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMcpp">trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLOperationscpp">trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLOperationsh">trunk/Source/JavaScriptCore/ftl/FTLOperations.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLSlowPathCallcpp">trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLSlowPathCallh">trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLStateh">trunk/Source/JavaScriptCore/ftl/FTLState.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLThunkscpp">trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLThunksh">trunk/Source/JavaScriptCore/ftl/FTLThunks.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreinterpreterCallFrameh">trunk/Source/JavaScriptCore/interpreter/CallFrame.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitCCallHelpersh">trunk/Source/JavaScriptCore/jit/CCallHelpers.h</a></li>
<li><a href="#trunkSourceJavaScriptCorejitJITOperationscpp">trunk/Source/JavaScriptCore/jit/JITOperations.cpp</a></li>
<li><a href="#trunkSourceWTFChangeLog">trunk/Source/WTF/ChangeLog</a></li>
<li><a href="#trunkSourceWTFwtfParallelHelperPoolcpp">trunk/Source/WTF/wtf/ParallelHelperPool.cpp</a></li>
<li><a href="#trunkSourceWTFwtfParallelHelperPoolh">trunk/Source/WTF/wtf/ParallelHelperPool.h</a></li>
<li><a href="#trunkSourceWTFwtfSharedTaskh">trunk/Source/WTF/wtf/SharedTask.h</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLazySlowPathcpp">trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.cpp</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLazySlowPathh">trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.h</a></li>
<li><a href="#trunkSourceJavaScriptCoreftlFTLLazySlowPathCallh">trunk/Source/JavaScriptCore/ftl/FTLLazySlowPathCall.h</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="trunkSourceJavaScriptCoreCMakeListstxt"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/CMakeLists.txt (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/CMakeLists.txt        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/CMakeLists.txt        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -905,6 +905,7 @@
</span><span class="cx">         ftl/FTLJSCallBase.cpp
</span><span class="cx">         ftl/FTLJSCallVarargs.cpp
</span><span class="cx">         ftl/FTLJSTailCall.cpp
</span><ins>+        ftl/FTLLazySlowPath.cpp
</ins><span class="cx">         ftl/FTLLink.cpp
</span><span class="cx">         ftl/FTLLocation.cpp
</span><span class="cx">         ftl/FTLLowerDFGToLLVM.cpp
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ChangeLog (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ChangeLog        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ChangeLog        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,3 +1,147 @@
</span><ins>+2015-10-10  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        FTL should generate code to call slow paths lazily
+        https://bugs.webkit.org/show_bug.cgi?id=149936
+
+        Reviewed by Saam Barati.
+
+        We often have complex slow paths in FTL-generated code. Those slow paths may never run. Even
+        if they do run, they don't need stellar performance. So, it doesn't make sense to have LLVM
+        worry about compiling such slow path code.
+
+        This patch enables us to use our own MacroAssembler for compiling the slow path inside FTL
+        code. It does this by using a crazy lambda thingy (see FTLLowerDFGToLLVM.cpp's lazySlowPath()
+        and its documentation). The result is quite natural to use.
+
+        Even for straight slow path calls via something like vmCall(), the lazySlowPath offers the
+        benefit that the call marshalling and the exception checking are not expressed using LLVM IR
+        and do not require LLVM to think about them. It also has the benefit that we never generate the
+        code if it never runs. That's great, since function calls usually involve ~10 instructions
+        total (move arguments to argument registers, make the call, check exception, etc.).
+
+        This patch adds the lazy slow path abstraction and uses it for some slow paths in the FTL.
+        The code we generate with lazy slow paths is worse than the code that LLVM would have
+        generated. Therefore, a lazy slow path only makes sense when we have strong evidence that
+        the slow path will execute infrequently relative to the fast path. This completely precludes
+        the use of lazy slow paths for out-of-line Nodes that unconditionally call a C++ function.
+        It also precludes their use for the GetByVal out-of-bounds handler, since when we generate
+        a GetByVal with an out-of-bounds handler it means that we only know that the out-of-bounds
+        case executed at least once. So, for all we know, it may actually be the common case. So,
+        this patch only deploys the lazy slow path for GC slow paths and masquerades-as-undefined
+        slow paths. It makes sense for GC slow paths because those have a statistical guarantee of
+        slow path frequency - probably bounded at less than 1/10. It makes sense for masquerades-as-
+        undefined because we can say quite confidently that this is an uncommon scenario on the
+        modern Web.
+
+        Something that's always been challenging about abstractions involving the MacroAssembler is
+        that linking is a separate phase, and there is no way for someone who is just given access to
+        the MacroAssembler&amp; to emit code that requires linking, since linking happens once we have
+        emitted all code and we are creating the LinkBuffer. Moreover, the FTL requires that the
+        final parts of linking happen on the main thread. This patch ran into this issue, and solved
+        it comprehensively, by introducing MacroAssembler::addLinkTask(). This takes a lambda and
+        runs it at the bitter end of linking - when performFinalization() is called. This ensures that
+        the task added by addLinkTask() runs on the main thread. This patch doesn't replace all of
+        the previously existing idioms for dealing with this issue; we can do that later.
+
+        This shows small speed-ups on a bunch of things. No big win on any benchmark aggregate. But
+        mainly this is done for https://bugs.webkit.org/show_bug.cgi?id=149852, where we found that
+        outlining the slow path in this way was a significant speed boost.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::replaceWithAddressComputation):
+        (JSC::AbstractMacroAssembler::addLinkTask):
+        (JSC::AbstractMacroAssembler::AbstractMacroAssembler):
+        * assembler/LinkBuffer.cpp:
+        (JSC::LinkBuffer::linkCode):
+        (JSC::LinkBuffer::allocate):
+        (JSC::LinkBuffer::performFinalization):
+        * assembler/LinkBuffer.h:
+        (JSC::LinkBuffer::wasAlreadyDisassembled):
+        (JSC::LinkBuffer::didAlreadyDisassemble):
+        (JSC::LinkBuffer::vm):
+        (JSC::LinkBuffer::executableOffsetFor):
+        * bytecode/CodeOrigin.h:
+        (JSC::CodeOrigin::CodeOrigin):
+        (JSC::CodeOrigin::isSet):
+        (JSC::CodeOrigin::operator bool):
+        (JSC::CodeOrigin::isHashTableDeletedValue):
+        (JSC::CodeOrigin::operator!): Deleted.
+        * ftl/FTLCompile.cpp:
+        (JSC::FTL::mmAllocateDataSection):
+        * ftl/FTLInlineCacheDescriptor.h:
+        (JSC::FTL::InlineCacheDescriptor::InlineCacheDescriptor):
+        (JSC::FTL::CheckInDescriptor::CheckInDescriptor):
+        (JSC::FTL::LazySlowPathDescriptor::LazySlowPathDescriptor):
+        * ftl/FTLJITCode.h:
+        * ftl/FTLJITFinalizer.cpp:
+        (JSC::FTL::JITFinalizer::finalizeFunction):
+        * ftl/FTLJITFinalizer.h:
+        * ftl/FTLLazySlowPath.cpp: Added.
+        (JSC::FTL::LazySlowPath::LazySlowPath):
+        (JSC::FTL::LazySlowPath::~LazySlowPath):
+        (JSC::FTL::LazySlowPath::generate):
+        * ftl/FTLLazySlowPath.h: Added.
+        (JSC::FTL::LazySlowPath::createGenerator):
+        (JSC::FTL::LazySlowPath::patchpoint):
+        (JSC::FTL::LazySlowPath::usedRegisters):
+        (JSC::FTL::LazySlowPath::callSiteIndex):
+        (JSC::FTL::LazySlowPath::stub):
+        * ftl/FTLLazySlowPathCall.h: Added.
+        (JSC::FTL::createLazyCallGenerator):
+        * ftl/FTLLowerDFGToLLVM.cpp:
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileCreateActivation):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileNewFunction):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileCreateDirectArguments):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileNewArrayWithSize):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileMakeRope):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileNotifyWrite):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileIsObjectOrNull):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileIsFunction):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileIn):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeNewObject):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileMaterializeCreateActivation):
+        (JSC::FTL::DFG::LowerDFGToLLVM::compileCheckWatchdogTimer):
+        (JSC::FTL::DFG::LowerDFGToLLVM::allocatePropertyStorageWithSizeImpl):
+        (JSC::FTL::DFG::LowerDFGToLLVM::allocateObject):
+        (JSC::FTL::DFG::LowerDFGToLLVM::allocateJSArray):
+        (JSC::FTL::DFG::LowerDFGToLLVM::buildTypeOf):
+        (JSC::FTL::DFG::LowerDFGToLLVM::sensibleDoubleToInt32):
+        (JSC::FTL::DFG::LowerDFGToLLVM::lazySlowPath):
+        (JSC::FTL::DFG::LowerDFGToLLVM::speculate):
+        (JSC::FTL::DFG::LowerDFGToLLVM::emitStoreBarrier):
+        * ftl/FTLOperations.cpp:
+        (JSC::FTL::operationMaterializeObjectInOSR):
+        (JSC::FTL::compileFTLLazySlowPath):
+        * ftl/FTLOperations.h:
+        * ftl/FTLSlowPathCall.cpp:
+        (JSC::FTL::SlowPathCallContext::SlowPathCallContext):
+        (JSC::FTL::SlowPathCallContext::~SlowPathCallContext):
+        (JSC::FTL::SlowPathCallContext::keyWithTarget):
+        (JSC::FTL::SlowPathCallContext::makeCall):
+        (JSC::FTL::callSiteIndexForCodeOrigin):
+        (JSC::FTL::storeCodeOrigin): Deleted.
+        (JSC::FTL::callOperation): Deleted.
+        * ftl/FTLSlowPathCall.h:
+        (JSC::FTL::callOperation):
+        * ftl/FTLState.h:
+        * ftl/FTLThunks.cpp:
+        (JSC::FTL::genericGenerationThunkGenerator):
+        (JSC::FTL::osrExitGenerationThunkGenerator):
+        (JSC::FTL::lazySlowPathGenerationThunkGenerator):
+        (JSC::FTL::registerClobberCheck):
+        * ftl/FTLThunks.h:
+        * interpreter/CallFrame.h:
+        (JSC::CallSiteIndex::CallSiteIndex):
+        (JSC::CallSiteIndex::operator bool):
+        (JSC::CallSiteIndex::bits):
+        * jit/CCallHelpers.h:
+        (JSC::CCallHelpers::setupArgument):
+        (JSC::CCallHelpers::setupArgumentsWithExecState):
+        * jit/JITOperations.cpp:
+
</ins><span class="cx"> 2015-10-12  Philip Chimento  &lt;philip.chimento@gmail.com&gt;
</span><span class="cx"> 
</span><span class="cx">         webkit-gtk-2.3.4 fails to link JavaScriptCore, missing symbols add_history and readline
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreJavaScriptCorevcxprojJavaScriptCorevcxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -540,6 +540,7 @@
</span><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLJSCallBase.cpp&quot; /&gt;
</span><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLJSCallVarargs.cpp&quot; /&gt;
</span><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLJSTailCall.cpp&quot; /&gt;
</span><ins>+    &lt;ClCompile Include=&quot;..\ftl\FTLLazySlowPath.cpp&quot; /&gt;
</ins><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLLink.cpp&quot; /&gt;
</span><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLLocation.cpp&quot; /&gt;
</span><span class="cx">     &lt;ClCompile Include=&quot;..\ftl\FTLLowerDFGToLLVM.cpp&quot; /&gt;
</span><span class="lines">@@ -1302,6 +1303,8 @@
</span><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLJSCallBase.h&quot; /&gt;
</span><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLJSCallVarargs.h&quot; /&gt;
</span><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLJSTailCall.h&quot; /&gt;
</span><ins>+    &lt;ClInclude Include=&quot;..\ftl\FTLLazySlowPath.h&quot; /&gt;
+    &lt;ClInclude Include=&quot;..\ftl\FTLLazySlowPathCall.h&quot; /&gt;
</ins><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLLink.h&quot; /&gt;
</span><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLLocation.h&quot; /&gt;
</span><span class="cx">     &lt;ClInclude Include=&quot;..\ftl\FTLLowerDFGToLLVM.h&quot; /&gt;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreJavaScriptCorexcodeprojprojectpbxproj"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -494,6 +494,9 @@
</span><span class="cx">                 0FB17662196B8F9E0091052A /* DFGPureValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB1765E196B8F9E0091052A /* DFGPureValue.cpp */; };
</span><span class="cx">                 0FB17663196B8F9E0091052A /* DFGPureValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB1765F196B8F9E0091052A /* DFGPureValue.h */; };
</span><span class="cx">                 0FB438A319270B1D00E1FBC9 /* StructureSet.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB438A219270B1D00E1FBC9 /* StructureSet.cpp */; };
</span><ins>+                0FB4FB731BC843140025CA5A /* FTLLazySlowPath.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */; };
+                0FB4FB741BC843140025CA5A /* FTLLazySlowPath.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */; };
+                0FB4FB751BC843140025CA5A /* FTLLazySlowPathCall.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB4FB721BC843140025CA5A /* FTLLazySlowPathCall.h */; };
</ins><span class="cx">                 0FB5467714F59B5C002C2989 /* LazyOperandValueProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB5467614F59AD1002C2989 /* LazyOperandValueProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="cx">                 0FB5467914F5C46B002C2989 /* LazyOperandValueProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB5467814F5C468002C2989 /* LazyOperandValueProfile.cpp */; };
</span><span class="cx">                 0FB5467B14F5C7E1002C2989 /* MethodOfGettingAValueProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FB5467A14F5C7D4002C2989 /* MethodOfGettingAValueProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
</span><span class="lines">@@ -2352,6 +2355,9 @@
</span><span class="cx">                 0FB4B51F16B62772003F696B /* DFGNodeAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGNodeAllocator.h; path = dfg/DFGNodeAllocator.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FB4B52116B6278D003F696B /* FunctionExecutableDump.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = FunctionExecutableDump.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FB4B52216B6278D003F696B /* FunctionExecutableDump.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FunctionExecutableDump.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><ins>+                0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLLazySlowPath.cpp; path = ftl/FTLLazySlowPath.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
+                0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLazySlowPath.h; path = ftl/FTLLazySlowPath.h; sourceTree = &quot;&lt;group&gt;&quot;; };
+                0FB4FB721BC843140025CA5A /* FTLLazySlowPathCall.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLazySlowPathCall.h; path = ftl/FTLLazySlowPathCall.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</ins><span class="cx">                 0FB5467614F59AD1002C2989 /* LazyOperandValueProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LazyOperandValueProfile.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FB5467814F5C468002C2989 /* LazyOperandValueProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LazyOperandValueProfile.cpp; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="cx">                 0FB5467A14F5C7D4002C2989 /* MethodOfGettingAValueProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MethodOfGettingAValueProfile.h; sourceTree = &quot;&lt;group&gt;&quot;; };
</span><span class="lines">@@ -3989,6 +3995,9 @@
</span><span class="cx">                                 0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */,
</span><span class="cx">                                 62774DA81B8D4B190006F05A /* FTLJSTailCall.cpp */,
</span><span class="cx">                                 62774DA91B8D4B190006F05A /* FTLJSTailCall.h */,
</span><ins>+                                0FB4FB701BC843140025CA5A /* FTLLazySlowPath.cpp */,
+                                0FB4FB711BC843140025CA5A /* FTLLazySlowPath.h */,
+                                0FB4FB721BC843140025CA5A /* FTLLazySlowPathCall.h */,
</ins><span class="cx">                                 0F8F2B93172E049E007DBDA5 /* FTLLink.cpp */,
</span><span class="cx">                                 0F8F2B94172E049E007DBDA5 /* FTLLink.h */,
</span><span class="cx">                                 0FCEFADD180738C000472CE4 /* FTLLocation.cpp */,
</span><span class="lines">@@ -6259,6 +6268,7 @@
</span><span class="cx">                                 0F63947815DCE34B006A597C /* DFGStructureAbstractValue.h in Headers */,
</span><span class="cx">                                 0F50AF3C193E8B3900674EE8 /* DFGStructureClobberState.h in Headers */,
</span><span class="cx">                                 0F79085619A290B200F6310C /* DFGStructureRegistrationPhase.h in Headers */,
</span><ins>+                                0FB4FB751BC843140025CA5A /* FTLLazySlowPathCall.h in Headers */,
</ins><span class="cx">                                 0F2FCCFF18A60070001A27F8 /* DFGThreadData.h in Headers */,
</span><span class="cx">                                 0FC097A2146B28CC00CF2442 /* DFGThunks.h in Headers */,
</span><span class="cx">                                 0FD8A32817D51F5700CA2C40 /* DFGTierUpCheckInjectionPhase.h in Headers */,
</span><span class="lines">@@ -6617,6 +6627,7 @@
</span><span class="cx">                                 7C184E1B17BEDBD3007CB63A /* JSPromise.h in Headers */,
</span><span class="cx">                                 7C184E2317BEE240007CB63A /* JSPromiseConstructor.h in Headers */,
</span><span class="cx">                                 7C008CDB187124BB00955C24 /* JSPromiseDeferred.h in Headers */,
</span><ins>+                                0FB4FB741BC843140025CA5A /* FTLLazySlowPath.h in Headers */,
</ins><span class="cx">                                 7C184E1F17BEE22E007CB63A /* JSPromisePrototype.h in Headers */,
</span><span class="cx">                                 2A05ABD61961DF2400341750 /* JSPropertyNameEnumerator.h in Headers */,
</span><span class="cx">                                 E3EF88751B66DF23003F26CB /* JSPropertyNameIterator.h in Headers */,
</span><span class="lines">@@ -8050,6 +8061,7 @@
</span><span class="cx">                                 14469DEB107EC7E700650446 /* StringConstructor.cpp in Sources */,
</span><span class="cx">                                 70EC0EC61AA0D7DA00B6AAFA /* StringIteratorPrototype.cpp in Sources */,
</span><span class="cx">                                 14469DEC107EC7E700650446 /* StringObject.cpp in Sources */,
</span><ins>+                                0FB4FB731BC843140025CA5A /* FTLLazySlowPath.cpp in Sources */,
</ins><span class="cx">                                 14469DED107EC7E700650446 /* StringPrototype.cpp in Sources */,
</span><span class="cx">                                 9335F24D12E6765B002B5553 /* StringRecursionChecker.cpp in Sources */,
</span><span class="cx">                                 BCDE3B430E6C832D001453A7 /* Structure.cpp in Sources */,
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerAbstractMacroAssemblerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -33,6 +33,7 @@
</span><span class="cx"> #include &quot;Options.h&quot;
</span><span class="cx"> #include &lt;wtf/CryptographicallyRandomNumber.h&gt;
</span><span class="cx"> #include &lt;wtf/Noncopyable.h&gt;
</span><ins>+#include &lt;wtf/SharedTask.h&gt;
</ins><span class="cx"> #include &lt;wtf/WeakRandom.h&gt;
</span><span class="cx"> 
</span><span class="cx"> #if ENABLE(ASSEMBLER)
</span><span class="lines">@@ -1004,6 +1005,17 @@
</span><span class="cx">         AssemblerType::replaceWithAddressComputation(label.dataLocation());
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    void addLinkTask(RefPtr&lt;SharedTask&lt;void(LinkBuffer&amp;)&gt;&gt; task)
+    {
+        m_linkTasks.append(task);
+    }
+
+    template&lt;typename Functor&gt;
+    void addLinkTask(const Functor&amp; functor)
+    {
+        m_linkTasks.append(createSharedTask&lt;void(LinkBuffer&amp;)&gt;(functor));
+    }
+
</ins><span class="cx"> protected:
</span><span class="cx">     AbstractMacroAssembler()
</span><span class="cx">         : m_randomSource(cryptographicallyRandomNumber())
</span><span class="lines">@@ -1099,10 +1111,9 @@
</span><span class="cx"> 
</span><span class="cx">     unsigned m_tempRegistersValidBits;
</span><span class="cx"> 
</span><ins>+    Vector&lt;RefPtr&lt;SharedTask&lt;void(LinkBuffer&amp;)&gt;&gt;&gt; m_linkTasks;
+
</ins><span class="cx">     friend class LinkBuffer;
</span><del>-
-private:
-
</del><span class="cx"> }; // class AbstractMacroAssembler
</span><span class="cx"> 
</span><span class="cx"> } // namespace JSC
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerLinkBuffercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -193,6 +193,8 @@
</span><span class="cx"> #elif CPU(ARM64)
</span><span class="cx">     copyCompactAndLinkCode&lt;uint32_t&gt;(macroAssembler, ownerUID, effort);
</span><span class="cx"> #endif
</span><ins>+
+    m_linkTasks = WTF::move(macroAssembler.m_linkTasks);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
</span><span class="lines">@@ -224,6 +226,9 @@
</span><span class="cx"> 
</span><span class="cx"> void LinkBuffer::performFinalization()
</span><span class="cx"> {
</span><ins>+    for (auto&amp; task : m_linkTasks)
+        task-&gt;run(*this);
+
</ins><span class="cx"> #ifndef NDEBUG
</span><span class="cx">     ASSERT(!isCompilationThread());
</span><span class="cx">     ASSERT(!m_completed);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreassemblerLinkBufferh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/assembler/LinkBuffer.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/assembler/LinkBuffer.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/assembler/LinkBuffer.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -259,6 +259,8 @@
</span><span class="cx">     bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
</span><span class="cx">     void didAlreadyDisassemble() { m_alreadyDisassembled = true; }
</span><span class="cx"> 
</span><ins>+    VM&amp; vm() { return *m_vm; }
+
</ins><span class="cx"> private:
</span><span class="cx"> #if ENABLE(BRANCH_COMPACTION)
</span><span class="cx">     int executableOffsetFor(int location)
</span><span class="lines">@@ -315,6 +317,7 @@
</span><span class="cx">     bool m_completed;
</span><span class="cx"> #endif
</span><span class="cx">     bool m_alreadyDisassembled { false };
</span><ins>+    Vector&lt;RefPtr&lt;SharedTask&lt;void(LinkBuffer&amp;)&gt;&gt;&gt; m_linkTasks;
</ins><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> #define FINALIZE_CODE_IF(condition, linkBufferReference, dataLogFArgumentsForHeading)  \
</span></span></pre></div>
<a id="trunkSourceJavaScriptCorebytecodeCodeOriginh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -74,7 +74,7 @@
</span><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
</span><del>-    bool operator!() const { return !isSet(); }
</del><ins>+    explicit operator bool() const { return isSet(); }
</ins><span class="cx">     
</span><span class="cx">     bool isHashTableDeletedValue() const
</span><span class="cx">     {
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLCompilecpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -333,7 +333,7 @@
</span><span class="cx"> {
</span><span class="cx">     Graph&amp; graph = state.graph;
</span><span class="cx">     VM&amp; vm = graph.m_vm;
</span><del>-    StackMaps stackmaps = jitCode-&gt;stackmaps;
</del><ins>+    StackMaps&amp; stackmaps = jitCode-&gt;stackmaps;
</ins><span class="cx">     
</span><span class="cx">     int localsOffset = offsetOfStackRegion(recordMap, state.capturedStackmapID) + graph.m_nextMachineLocal;
</span><span class="cx">     int varargsSpillSlotsOffset = offsetOfStackRegion(recordMap, state.varargsSpillSlotsStackmapID);
</span><span class="lines">@@ -439,7 +439,10 @@
</span><span class="cx">         state.finalizer-&gt;exitThunksLinkBuffer = WTF::move(linkBuffer);
</span><span class="cx">     }
</span><span class="cx"> 
</span><del>-    if (!state.getByIds.isEmpty() || !state.putByIds.isEmpty() || !state.checkIns.isEmpty()) {
</del><ins>+    if (!state.getByIds.isEmpty()
+        || !state.putByIds.isEmpty()
+        || !state.checkIns.isEmpty()
+        || !state.lazySlowPaths.isEmpty()) {
</ins><span class="cx">         CCallHelpers slowPathJIT(&amp;vm, codeBlock);
</span><span class="cx">         
</span><span class="cx">         CCallHelpers::JumpList exceptionTarget;
</span><span class="lines">@@ -473,7 +476,8 @@
</span><span class="cx"> 
</span><span class="cx">                 MacroAssembler::Call call = callOperation(
</span><span class="cx">                     state, usedRegisters, slowPathJIT, codeOrigin, &amp;exceptionTarget,
</span><del>-                    operationGetByIdOptimize, result, gen.stubInfo(), base, getById.uid());
</del><ins>+                    operationGetByIdOptimize, result, CCallHelpers::TrustedImmPtr(gen.stubInfo()),
+                    base, CCallHelpers::TrustedImmPtr(getById.uid())).call();
</ins><span class="cx"> 
</span><span class="cx">                 gen.reportSlowPathCall(begin, call);
</span><span class="cx"> 
</span><span class="lines">@@ -511,7 +515,9 @@
</span><span class="cx">                 
</span><span class="cx">                 MacroAssembler::Call call = callOperation(
</span><span class="cx">                     state, usedRegisters, slowPathJIT, codeOrigin, &amp;exceptionTarget,
</span><del>-                    gen.slowPathFunction(), gen.stubInfo(), value, base, putById.uid());
</del><ins>+                    gen.slowPathFunction(), InvalidGPRReg,
+                    CCallHelpers::TrustedImmPtr(gen.stubInfo()), value, base,
+                    CCallHelpers::TrustedImmPtr(putById.uid())).call();
</ins><span class="cx">                 
</span><span class="cx">                 gen.reportSlowPathCall(begin, call);
</span><span class="cx">                 
</span><span class="lines">@@ -549,13 +555,56 @@
</span><span class="cx"> 
</span><span class="cx">                 MacroAssembler::Call slowCall = callOperation(
</span><span class="cx">                     state, usedRegisters, slowPathJIT, codeOrigin, &amp;exceptionTarget,
</span><del>-                    operationInOptimize, result, stubInfo, obj, checkIn.m_uid);
</del><ins>+                    operationInOptimize, result, CCallHelpers::TrustedImmPtr(stubInfo), obj,
+                    CCallHelpers::TrustedImmPtr(checkIn.uid())).call();
</ins><span class="cx"> 
</span><span class="cx">                 checkIn.m_slowPathDone.append(slowPathJIT.jump());
</span><span class="cx">                 
</span><span class="cx">                 checkIn.m_generators.append(CheckInGenerator(stubInfo, slowCall, begin));
</span><span class="cx">             }
</span><span class="cx">         }
</span><ins>+
+        for (unsigned i = state.lazySlowPaths.size(); i--;) {
+            LazySlowPathDescriptor&amp; descriptor = state.lazySlowPaths[i];
+
+            if (verboseCompilationEnabled())
+                dataLog(&quot;Handling lazySlowPath stackmap #&quot;, descriptor.stackmapID(), &quot;\n&quot;);
+
+            auto iter = recordMap.find(descriptor.stackmapID());
+            if (iter == recordMap.end()) {
+                // It was optimized out.
+                continue;
+            }
+
+            for (unsigned i = 0; i &lt; iter-&gt;value.size(); ++i) {
+                StackMaps::Record&amp; record = iter-&gt;value[i];
+                RegisterSet usedRegisters = usedRegistersFor(record);
+                Vector&lt;Location&gt; locations;
+                for (auto location : record.locations)
+                    locations.append(Location::forStackmaps(&amp;stackmaps, location));
+
+                char* startOfIC =
+                    bitwise_cast&lt;char*&gt;(generatedFunction) + record.instructionOffset;
+                CodeLocationLabel patchpoint((MacroAssemblerCodePtr(startOfIC)));
+                CodeLocationLabel exceptionTarget =
+                    state.finalizer-&gt;handleExceptionsLinkBuffer-&gt;entrypoint();
+
+                std::unique_ptr&lt;LazySlowPath&gt; lazySlowPath = std::make_unique&lt;LazySlowPath&gt;(
+                    patchpoint, exceptionTarget, usedRegisters, descriptor.callSiteIndex(),
+                    descriptor.m_linker-&gt;run(locations));
+
+                CCallHelpers::Label begin = slowPathJIT.label();
+
+                slowPathJIT.pushToSaveImmediateWithoutTouchingRegisters(
+                    CCallHelpers::TrustedImm32(state.jitCode-&gt;lazySlowPaths.size()));
+                CCallHelpers::Jump generatorJump = slowPathJIT.jump();
+                
+                descriptor.m_generators.append(std::make_tuple(lazySlowPath.get(), begin));
+
+                state.jitCode-&gt;lazySlowPaths.append(WTF::move(lazySlowPath));
+                state.finalizer-&gt;lazySlowPathGeneratorJumps.append(generatorJump);
+            }
+        }
</ins><span class="cx">         
</span><span class="cx">         exceptionTarget.link(&amp;slowPathJIT);
</span><span class="cx">         MacroAssembler::Jump exceptionJump = slowPathJIT.jump();
</span><span class="lines">@@ -578,12 +627,19 @@
</span><span class="cx">                 state, codeBlock, generatedFunction, recordMap, state.putByIds[i],
</span><span class="cx">                 sizeOfPutById());
</span><span class="cx">         }
</span><del>-
</del><span class="cx">         for (unsigned i = state.checkIns.size(); i--;) {
</span><span class="cx">             generateCheckInICFastPath(
</span><span class="cx">                 state, codeBlock, generatedFunction, recordMap, state.checkIns[i],
</span><span class="cx">                 sizeOfIn()); 
</span><del>-        } 
</del><ins>+        }
+        for (unsigned i = state.lazySlowPaths.size(); i--;) {
+            LazySlowPathDescriptor&amp; lazySlowPath = state.lazySlowPaths[i];
+            for (auto&amp; tuple : lazySlowPath.m_generators) {
+                MacroAssembler::replaceWithJump(
+                    std::get&lt;0&gt;(tuple)-&gt;patchpoint(),
+                    state.finalizer-&gt;sideCodeLinkBuffer-&gt;locationOf(std::get&lt;1&gt;(tuple)));
+            }
+        }
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     adjustCallICsForStackmaps(state.jsCalls, recordMap);
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLInlineCacheDescriptorh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,5 +1,5 @@
</span><span class="cx"> /*
</span><del>- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
</del><ins>+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
</ins><span class="cx">  *
</span><span class="cx">  * Redistribution and use in source and binary forms, with or without
</span><span class="cx">  * modification, are permitted provided that the following conditions
</span><span class="lines">@@ -29,12 +29,15 @@
</span><span class="cx"> #if ENABLE(FTL_JIT)
</span><span class="cx"> 
</span><span class="cx"> #include &quot;CodeOrigin.h&quot;
</span><ins>+#include &quot;FTLLazySlowPath.h&quot;
</ins><span class="cx"> #include &quot;JITInlineCacheGenerator.h&quot;
</span><span class="cx"> #include &quot;MacroAssembler.h&quot;
</span><span class="cx"> #include &lt;wtf/text/UniquedStringImpl.h&gt;
</span><span class="cx"> 
</span><span class="cx"> namespace JSC { namespace FTL {
</span><span class="cx"> 
</span><ins>+class Location;
+
</ins><span class="cx"> class InlineCacheDescriptor {
</span><span class="cx"> public:
</span><span class="cx">     InlineCacheDescriptor() 
</span><span class="lines">@@ -113,18 +116,47 @@
</span><span class="cx"> public:
</span><span class="cx">     CheckInDescriptor() { }
</span><span class="cx">     
</span><del>-    CheckInDescriptor(unsigned stackmapID, CallSiteIndex callSite, const UniquedStringImpl* uid)
-        : InlineCacheDescriptor(stackmapID, callSite, nullptr)
-        , m_uid(uid)
</del><ins>+    CheckInDescriptor(unsigned stackmapID, CallSiteIndex callSite, UniquedStringImpl* uid)
+        : InlineCacheDescriptor(stackmapID, callSite, uid)
</ins><span class="cx">     {
</span><span class="cx">     }
</span><del>-
</del><span class="cx">     
</span><del>-    const UniquedStringImpl* m_uid;
</del><span class="cx">     Vector&lt;CheckInGenerator&gt; m_generators;
</span><span class="cx"> };
</span><span class="cx"> 
</span><ins>+// You can create a lazy slow path call in lowerDFGToLLVM by doing:
+// m_ftlState.lazySlowPaths.append(
+//     LazySlowPathDescriptor(
+//         stackmapID, callSiteIndex,
+//         createSharedTask&lt;RefPtr&lt;LazySlowPath::Generator&gt;(const Vector&lt;Location&gt;&amp;)&gt;(
+//             [] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+//                 // This lambda should just record the registers that we will be using, and return
+//                 // a SharedTask that will actually generate the slow path.
+//                 return createLazyCallGenerator(
+//                     function, locations[0].directGPR(), locations[1].directGPR());
+//             })));
+//
+// Usually, you can use the LowerDFGToLLVM::lazySlowPath() helper, which takes care of the descriptor
+// for you and also creates the patchpoint.
+typedef RefPtr&lt;LazySlowPath::Generator&gt; LazySlowPathLinkerFunction(const Vector&lt;Location&gt;&amp;);
+typedef SharedTask&lt;LazySlowPathLinkerFunction&gt; LazySlowPathLinkerTask;
+class LazySlowPathDescriptor : public InlineCacheDescriptor {
+public:
+    LazySlowPathDescriptor() { }
</ins><span class="cx"> 
</span><ins>+    LazySlowPathDescriptor(
+        unsigned stackmapID, CallSiteIndex callSite,
+        RefPtr&lt;LazySlowPathLinkerTask&gt; linker)
+        : InlineCacheDescriptor(stackmapID, callSite, nullptr)
+        , m_linker(linker)
+    {
+    }
+
+    Vector&lt;std::tuple&lt;LazySlowPath*, CCallHelpers::Label&gt;&gt; m_generators;
+
+    RefPtr&lt;LazySlowPathLinkerTask&gt; m_linker;
+};
+
</ins><span class="cx"> } } // namespace JSC::FTL
</span><span class="cx"> 
</span><span class="cx"> #endif // ENABLE(FTL_JIT)
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLJITCodeh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLJITCode.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLJITCode.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITCode.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -30,6 +30,7 @@
</span><span class="cx"> 
</span><span class="cx"> #include &quot;DFGCommonData.h&quot;
</span><span class="cx"> #include &quot;FTLDataSection.h&quot;
</span><ins>+#include &quot;FTLLazySlowPath.h&quot;
</ins><span class="cx"> #include &quot;FTLOSRExit.h&quot;
</span><span class="cx"> #include &quot;FTLStackMaps.h&quot;
</span><span class="cx"> #include &quot;FTLUnwindInfo.h&quot;
</span><span class="lines">@@ -86,6 +87,7 @@
</span><span class="cx">     DFG::CommonData common;
</span><span class="cx">     SegmentedVector&lt;OSRExit, 8&gt; osrExit;
</span><span class="cx">     StackMaps stackmaps;
</span><ins>+    Vector&lt;std::unique_ptr&lt;LazySlowPath&gt;&gt; lazySlowPaths;
</ins><span class="cx">     
</span><span class="cx"> private:
</span><span class="cx">     Vector&lt;RefPtr&lt;DataSection&gt;&gt; m_dataSections;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLJITFinalizercpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -106,11 +106,11 @@
</span><span class="cx">         // Side code is for special slow paths that we generate ourselves, like for inline
</span><span class="cx">         // caches.
</span><span class="cx">         
</span><del>-        for (unsigned i = slowPathCalls.size(); i--;) {
-            SlowPathCall&amp; call = slowPathCalls[i];
</del><ins>+        for (CCallHelpers::Jump jump : lazySlowPathGeneratorJumps) {
</ins><span class="cx">             sideCodeLinkBuffer-&gt;link(
</span><del>-                call.call(),
-                CodeLocationLabel(m_plan.vm.ftlThunks-&gt;getSlowPathCallThunk(m_plan.vm, call.key()).code()));
</del><ins>+                jump,
+                CodeLocationLabel(
+                    m_plan.vm.getCTIStub(lazySlowPathGenerationThunkGenerator).code()));
</ins><span class="cx">         }
</span><span class="cx">         
</span><span class="cx">         jitCode-&gt;addHandle(FINALIZE_DFG_CODE(
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLJITFinalizerh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -65,8 +65,8 @@
</span><span class="cx">     std::unique_ptr&lt;LinkBuffer&gt; sideCodeLinkBuffer;
</span><span class="cx">     std::unique_ptr&lt;LinkBuffer&gt; handleExceptionsLinkBuffer;
</span><span class="cx">     Vector&lt;OutOfLineCodeInfo&gt; outOfLineCodeInfos;
</span><del>-    Vector&lt;SlowPathCall&gt; slowPathCalls; // Calls inside the side code.
</del><span class="cx">     Vector&lt;OSRExitCompilationInfo&gt; osrExit;
</span><ins>+    Vector&lt;CCallHelpers::Jump&gt; lazySlowPathGeneratorJumps;
</ins><span class="cx">     GeneratedFunction function;
</span><span class="cx">     RefPtr&lt;JITCode&gt; jitCode;
</span><span class="cx"> };
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLazySlowPathcpp"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.cpp (0 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.cpp                                (rev 0)
+++ trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -0,0 +1,74 @@
</span><ins>+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include &quot;config.h&quot;
+#include &quot;FTLLazySlowPath.h&quot;
+
+#include &quot;FTLSlowPathCall.h&quot;
+#include &quot;LinkBuffer.h&quot;
+
+namespace JSC { namespace FTL {
+
+LazySlowPath::LazySlowPath(
+    CodeLocationLabel patchpoint, CodeLocationLabel exceptionTarget,
+    const RegisterSet&amp; usedRegisters, CallSiteIndex callSiteIndex, RefPtr&lt;Generator&gt; generator)
+    : m_patchpoint(patchpoint)
+    , m_exceptionTarget(exceptionTarget)
+    , m_usedRegisters(usedRegisters)
+    , m_callSiteIndex(callSiteIndex)
+    , m_generator(generator)
+{
+}
+
+LazySlowPath::~LazySlowPath()
+{
+}
+
+void LazySlowPath::generate(CodeBlock* codeBlock)
+{
+    RELEASE_ASSERT(!m_stub);
+
+    VM&amp; vm = *codeBlock-&gt;vm();
+
+    CCallHelpers jit(&amp;vm, codeBlock);
+    GenerationParams params;
+    CCallHelpers::JumpList exceptionJumps;
+    params.exceptionJumps = m_exceptionTarget ? &amp;exceptionJumps : nullptr;
+    params.lazySlowPath = this;
+    m_generator-&gt;run(jit, params);
+
+    LinkBuffer linkBuffer(vm, jit, codeBlock, JITCompilationMustSucceed);
+    linkBuffer.link(
+        params.doneJumps, m_patchpoint.labelAtOffset(MacroAssembler::maxJumpReplacementSize()));
+    if (m_exceptionTarget)
+        linkBuffer.link(exceptionJumps, m_exceptionTarget);
+    m_stub = FINALIZE_CODE_FOR(codeBlock, linkBuffer, (&quot;Lazy slow path call stub&quot;));
+
+    MacroAssembler::replaceWithJump(m_patchpoint, CodeLocationLabel(m_stub.code()));
+}
+
+} } // namespace JSC::FTL
+
+
</ins></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLLazySlowPathh"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.h (0 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/ftl/FTLLazySlowPath.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -0,0 +1,91 @@
</span><ins>+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef FTLLazySlowPath_h
+#define FTLLazySlowPath_h
+
+#include &quot;CCallHelpers.h&quot;
+#include &quot;CodeBlock.h&quot;
+#include &quot;CodeLocation.h&quot;
+#include &quot;GPRInfo.h&quot;
+#include &quot;MacroAssemblerCodeRef.h&quot;
+#include &quot;RegisterSet.h&quot;
+#include &lt;wtf/SharedTask.h&gt;
+
+namespace JSC { namespace FTL {
+
+// A LazySlowPath is an object that represents a piece of code that is part of FTL generated code
+// that will be generated lazily. It holds all of the important information needed to generate that
+// code, such as where to link jumps to and which registers are in use. It also has a reference to a
+// SharedTask that will do the actual code generation. That SharedTask may have additional data, like
+// which registers hold the inputs or outputs.
+class LazySlowPath {
+    WTF_MAKE_NONCOPYABLE(LazySlowPath);
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+    struct GenerationParams {
+        // Extra parameters to the GeneratorFunction are made into fields of this struct, so that if
+        // we add new parameters, we don't have to change all of the users.
+        CCallHelpers::JumpList doneJumps;
+        CCallHelpers::JumpList* exceptionJumps;
+        LazySlowPath* lazySlowPath;
+    };
+
+    typedef void GeneratorFunction(CCallHelpers&amp;, GenerationParams&amp;);
+    typedef SharedTask&lt;GeneratorFunction&gt; Generator;
+
+    template&lt;typename Functor&gt;
+    static RefPtr&lt;Generator&gt; createGenerator(const Functor&amp; functor)
+    {
+        return createSharedTask&lt;GeneratorFunction&gt;(functor);
+    }
+    
+    LazySlowPath(
+        CodeLocationLabel patchpoint, CodeLocationLabel exceptionTarget,
+        const RegisterSet&amp; usedRegisters, CallSiteIndex callSiteIndex, RefPtr&lt;Generator&gt;);
+
+    ~LazySlowPath();
+
+    CodeLocationLabel patchpoint() const { return m_patchpoint; }
+    const RegisterSet&amp; usedRegisters() const { return m_usedRegisters; }
+    CallSiteIndex callSiteIndex() const { return m_callSiteIndex; }
+
+    void generate(CodeBlock*);
+
+    MacroAssemblerCodeRef stub() const { return m_stub; }
+
+private:
+    CodeLocationLabel m_patchpoint;
+    CodeLocationLabel m_exceptionTarget;
+    RegisterSet m_usedRegisters;
+    CallSiteIndex m_callSiteIndex;
+    MacroAssemblerCodeRef m_stub;
+    RefPtr&lt;Generator&gt; m_generator;
+};
+
+} } // namespace JSC::FTL
+
+#endif // FTLLazySlowPath_h
+
</ins></span></pre></div>
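<p>The class comment above describes what a LazySlowPath stores; as an illustration only, here is a minimal, hypothetical sketch of how a client could build one and trigger generation. The patchpoint label, exception target, register set, call-site index, and code block are placeholders, not values taken from this change.</p>
<pre>
// Hypothetical usage sketch for LazySlowPath; every input below is a placeholder.
RefPtr&lt;LazySlowPath::Generator&gt; generator = LazySlowPath::createGenerator(
    [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
        // Emit the slow-path code, then jump back to the main path.
        params.doneJumps.append(jit.jump());
    });

auto lazySlowPath = std::make_unique&lt;LazySlowPath&gt;(
    patchpointLabel,      // CodeLocationLabel of the patchable jump (placeholder)
    exceptionTargetLabel, // CodeLocationLabel of the handler, or an empty label
    usedRegisters,        // RegisterSet live across the patchpoint (placeholder)
    callSiteIndex,        // CallSiteIndex recorded for the call (placeholder)
    generator);

// On the first execution of the slow path, the runtime fills in the stub:
lazySlowPath-&gt;generate(codeBlock);
</pre>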
<a id="trunkSourceJavaScriptCoreftlFTLLazySlowPathCallh"></a>
<div class="addfile"><h4>Added: trunk/Source/JavaScriptCore/ftl/FTLLazySlowPathCall.h (0 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLazySlowPathCall.h                                (rev 0)
+++ trunk/Source/JavaScriptCore/ftl/FTLLazySlowPathCall.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -0,0 +1,56 @@
</span><ins>+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef FTLLazySlowPathCall_h
+#define FTLLazySlowPathCall_h
+
+#include &quot;CodeBlock.h&quot;
+#include &quot;CodeLocation.h&quot;
+#include &quot;FTLLazySlowPath.h&quot;
+#include &quot;FTLSlowPathCall.h&quot;
+#include &quot;FTLThunks.h&quot;
+#include &quot;GPRInfo.h&quot;
+#include &quot;MacroAssemblerCodeRef.h&quot;
+#include &quot;RegisterSet.h&quot;
+
+namespace JSC { namespace FTL {
+
+template&lt;typename ResultType, typename... ArgumentTypes&gt;
+RefPtr&lt;LazySlowPath::Generator&gt; createLazyCallGenerator(
+    FunctionPtr function, ResultType result, ArgumentTypes... arguments)
+{
+    return LazySlowPath::createGenerator(
+        [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
+            callOperation(
+                params.lazySlowPath-&gt;usedRegisters(), jit, params.lazySlowPath-&gt;callSiteIndex(),
+                params.exceptionJumps, function, result, arguments...);
+            params.doneJumps.append(jit.jump());
+        });
+}
+
+} } // namespace JSC::FTL
+
+#endif // FTLLazySlowPathCall_h
+
</ins></span></pre></div>
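<p>As a usage note, the call sites added to FTLLowerDFGToLLVM.cpp in this change pass the result GPR first, followed by the argument GPRs and immediates; a representative sketch, mirroring the operationNewObject site below, looks like this:</p>
<pre>
// Inside a stage-(2) lambda handed to lazySlowPath(); locations[0] holds the result.
return createLazyCallGenerator(
    operationNewObject, locations[0].directGPR(),
    CCallHelpers::TrustedImmPtr(structure));
</pre>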
<a id="trunkSourceJavaScriptCoreftlFTLLowerDFGToLLVMcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -39,6 +39,7 @@
</span><span class="cx"> #include &quot;FTLForOSREntryJITCode.h&quot;
</span><span class="cx"> #include &quot;FTLFormattedValue.h&quot;
</span><span class="cx"> #include &quot;FTLInlineCacheSize.h&quot;
</span><ins>+#include &quot;FTLLazySlowPathCall.h&quot;
</ins><span class="cx"> #include &quot;FTLLoweredNodeValue.h&quot;
</span><span class="cx"> #include &quot;FTLOperations.h&quot;
</span><span class="cx"> #include &quot;FTLOutput.h&quot;
</span><span class="lines">@@ -50,6 +51,7 @@
</span><span class="cx"> #include &quot;OperandsInlines.h&quot;
</span><span class="cx"> #include &quot;ScopedArguments.h&quot;
</span><span class="cx"> #include &quot;ScopedArgumentsTable.h&quot;
</span><ins>+#include &quot;ScratchRegisterAllocator.h&quot;
</ins><span class="cx"> #include &quot;VirtualRegister.h&quot;
</span><span class="cx"> #include &quot;Watchdog.h&quot;
</span><span class="cx"> #include &lt;atomic&gt;
</span><span class="lines">@@ -3159,9 +3161,15 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        LValue callResult = vmCall(
-            m_out.operation(operationCreateActivationDirect), m_callFrame, weakPointer(structure),
-            scope, weakPointer(table), m_out.constInt64(JSValue::encode(initializationValue)));
</del><ins>+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateActivationDirect, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(table),
+                    CCallHelpers::TrustedImm64(JSValue::encode(initializationValue)));
+            },
+            scope);
</ins><span class="cx">         ValueFromBlock slowResult = m_out.anchor(callResult);
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="lines">@@ -3215,11 +3223,25 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        
-        LValue callResult = isArrowFunction
-            ? vmCall(m_out.operation(operationNewArrowFunctionWithInvalidatedReallocationWatchpoint), m_callFrame, scope, weakPointer(executable), thisValue)
-            : vmCall(m_out.operation(operationNewFunctionWithInvalidatedReallocationWatchpoint), m_callFrame, scope, weakPointer(executable));
-        
</del><ins>+
+        Vector&lt;LValue&gt; slowPathArguments;
+        slowPathArguments.append(scope);
+        if (isArrowFunction)
+            slowPathArguments.append(thisValue);
+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                if (isArrowFunction) {
+                    return createLazyCallGenerator(
+                        operationNewArrowFunctionWithInvalidatedReallocationWatchpoint,
+                        locations[0].directGPR(), locations[1].directGPR(),
+                        CCallHelpers::TrustedImmPtr(executable), locations[2].directGPR());
+                }
+                return createLazyCallGenerator(
+                    operationNewFunctionWithInvalidatedReallocationWatchpoint,
+                    locations[0].directGPR(), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(executable));
+            },
+            slowPathArguments);
</ins><span class="cx">         ValueFromBlock slowResult = m_out.anchor(callResult);
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="lines">@@ -3271,9 +3293,13 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        LValue callResult = vmCall(
-            m_out.operation(operationCreateDirectArguments), m_callFrame, weakPointer(structure),
-            length.value, m_out.constInt32(minCapacity));
</del><ins>+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateDirectArguments, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImm32(minCapacity));
+            }, length.value);
</ins><span class="cx">         ValueFromBlock slowResult = m_out.anchor(callResult);
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="lines">@@ -3563,9 +3589,14 @@
</span><span class="cx">             m_out.appendTo(slowCase, continuation);
</span><span class="cx">             LValue structureValue = m_out.phi(
</span><span class="cx">                 m_out.intPtr, largeStructure, failStructure);
</span><del>-            ValueFromBlock slowResult = m_out.anchor(vmCall(
-                m_out.operation(operationNewArrayWithSize),
-                m_callFrame, structureValue, publicLength));
</del><ins>+            LValue slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationNewArrayWithSize, locations[0].directGPR(),
+                        locations[1].directGPR(), locations[2].directGPR());
+                },
+                structureValue, publicLength);
+            ValueFromBlock slowResult = m_out.anchor(slowResultValue);
</ins><span class="cx">             m_out.jump(continuation);
</span><span class="cx">             
</span><span class="cx">             m_out.appendTo(continuation, lastNext);
</span><span class="lines">@@ -3762,20 +3793,29 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        ValueFromBlock slowResult;
</del><ins>+        LValue slowResultValue;
</ins><span class="cx">         switch (numKids) {
</span><span class="cx">         case 2:
</span><del>-            slowResult = m_out.anchor(vmCall(
-                m_out.operation(operationMakeRope2), m_callFrame, kids[0], kids[1]));
</del><ins>+            slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationMakeRope2, locations[0].directGPR(), locations[1].directGPR(),
+                        locations[2].directGPR());
+                }, kids[0], kids[1]);
</ins><span class="cx">             break;
</span><span class="cx">         case 3:
</span><del>-            slowResult = m_out.anchor(vmCall(
-                m_out.operation(operationMakeRope3), m_callFrame, kids[0], kids[1], kids[2]));
</del><ins>+            slowResultValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationMakeRope3, locations[0].directGPR(), locations[1].directGPR(),
+                        locations[2].directGPR(), locations[3].directGPR());
+                }, kids[0], kids[1], kids[2]);
</ins><span class="cx">             break;
</span><span class="cx">         default:
</span><span class="cx">             DFG_CRASH(m_graph, m_node, &quot;Bad number of children&quot;);
</span><span class="cx">             break;
</span><span class="cx">         }
</span><ins>+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span><span class="lines">@@ -4132,7 +4172,11 @@
</span><span class="cx">         
</span><span class="cx">         LBasicBlock lastNext = m_out.appendTo(isNotInvalidated, continuation);
</span><span class="cx"> 
</span><del>-        vmCall(m_out.operation(operationNotifyWrite), m_callFrame, m_out.constIntPtr(set));
</del><ins>+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNotifyWrite, InvalidGPRReg, CCallHelpers::TrustedImmPtr(set));
+            });
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span><span class="lines">@@ -4978,10 +5022,13 @@
</span><span class="cx">             rarely(slowPath), usually(continuation));
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, notCellCase);
</span><del>-        LValue slowResultValue = vmCall(
-            m_out.operation(operationObjectIsObject), m_callFrame, weakPointer(globalObject),
-            value);
-        ValueFromBlock slowResult = m_out.anchor(m_out.notNull(slowResultValue));
</del><ins>+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationObjectIsObject, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
+        ValueFromBlock slowResult = m_out.anchor(m_out.notZero64(slowResultValue));
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(notCellCase, continuation);
</span><span class="lines">@@ -5025,9 +5072,12 @@
</span><span class="cx">             rarely(slowPath), usually(continuation));
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        LValue slowResultValue = vmCall(
-            m_out.operation(operationObjectIsFunction), m_callFrame, weakPointer(globalObject),
-            value);
</del><ins>+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationObjectIsFunction, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
</ins><span class="cx">         ValueFromBlock slowResult = m_out.anchor(m_out.notNull(slowResultValue));
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="lines">@@ -5066,7 +5116,7 @@
</span><span class="cx">         if (JSString* string = m_node-&gt;child1()-&gt;dynamicCastConstant&lt;JSString*&gt;()) {
</span><span class="cx">             if (string-&gt;tryGetValueImpl() &amp;&amp; string-&gt;tryGetValueImpl()-&gt;isAtomic()) {
</span><span class="cx"> 
</span><del>-                const auto str = static_cast&lt;const AtomicStringImpl*&gt;(string-&gt;tryGetValueImpl());
</del><ins>+                UniquedStringImpl* str = bitwise_cast&lt;UniquedStringImpl*&gt;(string-&gt;tryGetValueImpl());
</ins><span class="cx">                 unsigned stackmapID = m_stackmapIDs++;
</span><span class="cx">             
</span><span class="cx">                 LValue call = m_out.call(
</span><span class="lines">@@ -5460,12 +5510,16 @@
</span><span class="cx">                 m_out.jump(continuation);
</span><span class="cx">                 
</span><span class="cx">                 m_out.appendTo(slowPath, continuation);
</span><del>-                
-                ValueFromBlock slowObject = m_out.anchor(vmCall(
-                    m_out.operation(operationNewObjectWithButterfly),
-                    m_callFrame, m_out.constIntPtr(structure)));
</del><ins>+
+                LValue slowObjectValue = lazySlowPath(
+                    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                        return createLazyCallGenerator(
+                            operationNewObjectWithButterfly, locations[0].directGPR(),
+                            CCallHelpers::TrustedImmPtr(structure));
+                    });
+                ValueFromBlock slowObject = m_out.anchor(slowObjectValue);
</ins><span class="cx">                 ValueFromBlock slowButterfly = m_out.anchor(
</span><del>-                    m_out.loadPtr(slowObject.value(), m_heaps.JSObject_butterfly));
</del><ins>+                    m_out.loadPtr(slowObjectValue, m_heaps.JSObject_butterfly));
</ins><span class="cx">                 
</span><span class="cx">                 m_out.jump(continuation);
</span><span class="cx">                 
</span><span class="lines">@@ -5537,9 +5591,14 @@
</span><span class="cx">         // because all fields will be overwritten.
</span><span class="cx">         // FIXME: It may be worth creating an operation that calls a constructor on JSLexicalEnvironment that 
</span><span class="cx">         // doesn't initialize every slot because we are guaranteed to do that here.
</span><del>-        LValue callResult = vmCall(
-            m_out.operation(operationCreateActivationDirect), m_callFrame, weakPointer(structure),
-            scope, weakPointer(table), m_out.constInt64(JSValue::encode(jsUndefined())));
</del><ins>+        LValue callResult = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationCreateActivationDirect, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), locations[1].directGPR(),
+                    CCallHelpers::TrustedImmPtr(table),
+                    CCallHelpers::TrustedImm64(JSValue::encode(jsUndefined())));
+            }, scope);
</ins><span class="cx">         ValueFromBlock slowResult =  m_out.anchor(callResult);
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx"> 
</span><span class="lines">@@ -5581,7 +5640,10 @@
</span><span class="cx"> 
</span><span class="cx">         LBasicBlock lastNext = m_out.appendTo(timerDidFire, continuation);
</span><span class="cx"> 
</span><del>-        vmCall(m_out.operation(operationHandleWatchdogTimer), m_callFrame);
</del><ins>+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp;) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(operationHandleWatchdogTimer, InvalidGPRReg);
+            });
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span><span class="lines">@@ -6015,13 +6077,19 @@
</span><span class="cx">         
</span><span class="cx">         LValue slowButterflyValue;
</span><span class="cx">         if (sizeInValues == initialOutOfLineCapacity) {
</span><del>-            slowButterflyValue = vmCall(
-                m_out.operation(operationAllocatePropertyStorageWithInitialCapacity),
-                m_callFrame);
</del><ins>+            slowButterflyValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationAllocatePropertyStorageWithInitialCapacity,
+                        locations[0].directGPR());
+                });
</ins><span class="cx">         } else {
</span><del>-            slowButterflyValue = vmCall(
-                m_out.operation(operationAllocatePropertyStorage),
-                m_callFrame, m_out.constIntPtr(sizeInValues));
</del><ins>+            slowButterflyValue = lazySlowPath(
+                [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                    return createLazyCallGenerator(
+                        operationAllocatePropertyStorage, locations[0].directGPR(),
+                        CCallHelpers::TrustedImmPtr(sizeInValues));
+                });
</ins><span class="cx">         }
</span><span class="cx">         ValueFromBlock slowButterfly = m_out.anchor(slowButterflyValue);
</span><span class="cx">         
</span><span class="lines">@@ -6303,9 +6371,14 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        
-        ValueFromBlock slowResult = m_out.anchor(vmCall(
-            m_out.operation(operationNewObject), m_callFrame, m_out.constIntPtr(structure)));
</del><ins>+
+        LValue slowResultValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNewObject, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure));
+            });
+        ValueFromBlock slowResult = m_out.anchor(slowResultValue);
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span><span class="lines">@@ -6377,10 +6450,14 @@
</span><span class="cx">         m_out.jump(continuation);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, continuation);
</span><del>-        
-        ValueFromBlock slowArray = m_out.anchor(vmCall(
-            m_out.operation(operationNewArrayWithSize), m_callFrame,
-            m_out.constIntPtr(structure), m_out.constInt32(numElements)));
</del><ins>+
+        LValue slowArrayValue = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationNewArrayWithSize, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(structure), CCallHelpers::TrustedImm32(numElements));
+            });
+        ValueFromBlock slowArray = m_out.anchor(slowArrayValue);
</ins><span class="cx">         ValueFromBlock slowButterfly = m_out.anchor(
</span><span class="cx">             m_out.loadPtr(slowArray.value(), m_heaps.JSObject_butterfly));
</span><span class="cx"> 
</span><span class="lines">@@ -7049,14 +7126,17 @@
</span><span class="cx">         functor(TypeofType::Object);
</span><span class="cx">         
</span><span class="cx">         m_out.appendTo(slowPath, unreachable);
</span><del>-        LValue result = vmCall(
-            m_out.operation(operationTypeOfObjectAsTypeofType), m_callFrame,
-            weakPointer(globalObject), value);
</del><ins>+        LValue result = lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                return createLazyCallGenerator(
+                    operationTypeOfObjectAsTypeofType, locations[0].directGPR(),
+                    CCallHelpers::TrustedImmPtr(globalObject), locations[1].directGPR());
+            }, value);
</ins><span class="cx">         Vector&lt;SwitchCase, 3&gt; cases;
</span><span class="cx">         cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Undefined)), undefinedCase));
</span><span class="cx">         cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Object)), reallyObjectCase));
</span><span class="cx">         cases.append(SwitchCase(m_out.constInt32(static_cast&lt;int32_t&gt;(TypeofType::Function)), functionCase));
</span><del>-        m_out.switchInstruction(result, cases, unreachable, Weight());
</del><ins>+        m_out.switchInstruction(m_out.castToInt32(result), cases, unreachable, Weight());
</ins><span class="cx">         
</span><span class="cx">         m_out.appendTo(unreachable, notObjectCase);
</span><span class="cx">         m_out.unreachable();
</span><span class="lines">@@ -7163,6 +7243,115 @@
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span><span class="cx">         return m_out.phi(m_out.int32, fastResult, slowResult);
</span><span class="cx">     }
</span><ins>+
+    // This is a mechanism for creating a code generator that fills in a gap in the code using our
+    // own MacroAssembler. This is useful for slow paths that involve a lot of code and we don't want
+    // to pay the price of LLVM optimizing it. A lazy slow path will only be generated if it actually
+    // executes. On the other hand, a lazy slow path always incurs the cost of two additional jumps.
+    // Also, the lazy slow path's register allocation state is slaved to whatever LLVM did, so you
+    // have to use a ScratchRegisterAllocator to try to use some unused registers and you may have
+    // to spill to top of stack if there aren't enough registers available.
+    //
+    // Lazy slow paths involve three different stages of execution. Each stage has unique
+    // capabilities and knowledge. The stages are:
+    //
+    // 1) DFG-&gt;LLVM lowering, i.e. code that runs in this phase. Lowering is the last time you will
+    //    have access to LValues. If there is an LValue that needs to be fed as input to a lazy slow
+    //    path, then you must pass it as an argument here (as one of the varargs arguments after the
+    //    functor). But, lowering doesn't know which registers will be used for those LValues. Hence
+    //    you pass a lambda to lazySlowPath() and that lambda will run during stage (2):
+    //
+    // 2) FTLCompile.cpp's fixFunctionBasedOnStackMaps. This code is the only stage at which we know
+    //    the mapping from arguments passed to this method in (1) and the registers that LLVM
+    //    selected for those arguments. You don't actually want to generate any code here, since then
+    //    the slow path wouldn't actually be lazily generated. Instead, you want to save the
+    //    registers being used for the arguments and defer code generation to stage (3) by creating
+    //    and returning a LazySlowPath::Generator:
+    //
+    // 3) LazySlowPath's generate() method. This code runs in response to the lazy slow path
+    //    executing for the first time. It will call the generator you created in stage (2).
+    //
+    // Note that each time you invoke stage (1), stage (2) may be invoked zero, one, or many times.
+    // Stage (2) will usually be invoked once for stage (1). But, LLVM may kill the code, in which
+    // case stage (2) won't run. LLVM may duplicate the code (for example via jump threading),
+    // leading to many calls to your stage (2) lambda. Stage (3) may be called zero or once for each
+    // stage (2). It will be called zero times if the slow path never runs. This is what you hope for
+    // whenever you use the lazySlowPath() mechanism.
+    //
+    // A typical use of lazySlowPath() will look like the example below, which just creates a slow
+    // path that adds some value to the input and returns it.
+    //
+    // // Stage (1) is here. This is your last chance to figure out which LValues to use as inputs.
+    // // Notice how we pass &quot;input&quot; as an argument to lazySlowPath().
+    // LValue input = ...;
+    // int addend = ...;
+    // LValue output = lazySlowPath(
+    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+    //         // Stage (2) is here. This is your last chance to figure out which registers are used
+    //         // for which values. Location zero is always the return value. You can ignore it if
+    //         // you don't want to return anything. Location 1 is the register for the first
+    //         // argument to the lazySlowPath(), i.e. &quot;input&quot;. Note that the Location object could
+    //         // also hold an FPR, if you are passing a double.
+    //         GPRReg outputGPR = locations[0].directGPR();
+    //         GPRReg inputGPR = locations[1].directGPR();
+    //         return LazySlowPath::createGenerator(
+    //             [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
+    //                 // Stage (3) is here. This is when you generate code. You have access to the
+    //                 // registers you collected in stage (2) because this lambda closes over those
+    //                 // variables (outputGPR and inputGPR). You also have access to whatever extra
+    //                 // data you collected in stage (1), such as the addend in this case.
+    //                 jit.add32(TrustedImm32(addend), inputGPR, outputGPR);
+    //                 // You have to end by jumping to done. There is nothing to fall through to.
+    //                 // You can also jump to the exception handler (see LazySlowPath.h for more
+    //                 // info). Note that currently you cannot OSR exit.
+    //                 params.doneJumps.append(jit.jump());
+    //             });
+    //     },
+    //     input);
+    //
+    // Note that if your slow path is only doing a call, you can use the createLazyCallGenerator()
+    // helper. For example:
+    //
+    // LValue input = ...;
+    // LValue output = lazySlowPath(
+    //     [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+    //         return createLazyCallGenerator(
+    //             operationDoThings, locations[0].directGPR(), locations[1].directGPR());
+    //     });
+    //
+    // Finally, note that all of the lambdas - both the stage (2) lambda and the stage (3) lambda -
+    // run after the function that created them returns. Hence, you should not use by-reference
+    // capture (i.e. [&amp;]) in any of these lambdas.
+    template&lt;typename Functor, typename... ArgumentTypes&gt;
+    LValue lazySlowPath(const Functor&amp; functor, ArgumentTypes... arguments)
+    {
+        return lazySlowPath(functor, Vector&lt;LValue&gt;{ arguments... });
+    }
+
+    template&lt;typename Functor&gt;
+    LValue lazySlowPath(const Functor&amp; functor, const Vector&lt;LValue&gt;&amp; userArguments)
+    {
+        unsigned stackmapID = m_stackmapIDs++;
+
+        Vector&lt;LValue&gt; arguments;
+        arguments.append(m_out.constInt64(stackmapID));
+        arguments.append(m_out.constInt32(MacroAssembler::maxJumpReplacementSize()));
+        arguments.append(constNull(m_out.ref8));
+        arguments.append(m_out.constInt32(userArguments.size()));
+        arguments.appendVector(userArguments);
+        LValue call = m_out.call(m_out.patchpointInt64Intrinsic(), arguments);
+        setInstructionCallingConvention(call, LLVMAnyRegCallConv);
+
+        CallSiteIndex callSiteIndex =
+            m_ftlState.jitCode-&gt;common.addCodeOrigin(m_node-&gt;origin.semantic);
+        
+        RefPtr&lt;LazySlowPathLinkerTask&gt; linker =
+            createSharedTask&lt;LazySlowPathLinkerFunction&gt;(functor);
+
+        m_ftlState.lazySlowPaths.append(LazySlowPathDescriptor(stackmapID, callSiteIndex, linker));
+
+        return call;
+    }
</ins><span class="cx">     
</span><span class="cx">     void speculate(
</span><span class="cx">         ExitKind kind, FormattedValue lowValue, Node* highValue, LValue failCondition)
</span><span class="lines">@@ -8267,33 +8456,67 @@
</span><span class="cx"> 
</span><span class="cx">     void emitStoreBarrier(LValue base)
</span><span class="cx">     {
</span><del>-        LBasicBlock isMarkedAndNotRemembered = FTL_NEW_BLOCK(m_out, (&quot;Store barrier is marked block&quot;));
-        LBasicBlock bufferHasSpace = FTL_NEW_BLOCK(m_out, (&quot;Store barrier buffer has space&quot;));
-        LBasicBlock bufferIsFull = FTL_NEW_BLOCK(m_out, (&quot;Store barrier buffer is full&quot;));
</del><ins>+        LBasicBlock slowPath = FTL_NEW_BLOCK(m_out, (&quot;Store barrier slow path&quot;));
</ins><span class="cx">         LBasicBlock continuation = FTL_NEW_BLOCK(m_out, (&quot;Store barrier continuation&quot;));
</span><span class="cx"> 
</span><del>-        // Check the mark byte. 
</del><span class="cx">         m_out.branch(
</span><del>-            m_out.notZero8(loadCellState(base)), usually(continuation), rarely(isMarkedAndNotRemembered));
</del><ins>+            m_out.notZero8(loadCellState(base)), usually(continuation), rarely(slowPath));
</ins><span class="cx"> 
</span><del>-        // Append to the write barrier buffer.
-        LBasicBlock lastNext = m_out.appendTo(isMarkedAndNotRemembered, bufferHasSpace);
-        LValue currentBufferIndex = m_out.load32(m_out.absolute(vm().heap.writeBarrierBuffer().currentIndexAddress()));
-        LValue bufferCapacity = m_out.constInt32(vm().heap.writeBarrierBuffer().capacity());
-        m_out.branch(
-            m_out.lessThan(currentBufferIndex, bufferCapacity),
-            usually(bufferHasSpace), rarely(bufferIsFull));
</del><ins>+        LBasicBlock lastNext = m_out.appendTo(slowPath, continuation);
</ins><span class="cx"> 
</span><del>-        // Buffer has space, store to it.
-        m_out.appendTo(bufferHasSpace, bufferIsFull);
-        LValue writeBarrierBufferBase = m_out.constIntPtr(vm().heap.writeBarrierBuffer().buffer());
-        m_out.storePtr(base, m_out.baseIndex(m_heaps.WriteBarrierBuffer_bufferContents, writeBarrierBufferBase, m_out.zeroExtPtr(currentBufferIndex)));
-        m_out.store32(m_out.add(currentBufferIndex, m_out.constInt32(1)), m_out.absolute(vm().heap.writeBarrierBuffer().currentIndexAddress()));
-        m_out.jump(continuation);
</del><ins>+        // We emit the store barrier slow path lazily. In a lot of cases, this will never fire. And
+        // when it does fire, it makes sense for us to generate this code using our JIT rather than
+        // wasting LLVM's time optimizing it.
+        lazySlowPath(
+            [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
+                GPRReg baseGPR = locations[1].directGPR();
</ins><span class="cx"> 
</span><del>-        // Buffer is out of space, flush it.
-        m_out.appendTo(bufferIsFull, continuation);
-        vmCallNoExceptions(m_out.operation(operationFlushWriteBarrierBuffer), m_callFrame, base);
</del><ins>+                return LazySlowPath::createGenerator(
+                    [=] (CCallHelpers&amp; jit, LazySlowPath::GenerationParams&amp; params) {
+                        RegisterSet usedRegisters = params.lazySlowPath-&gt;usedRegisters();
+                        ScratchRegisterAllocator scratchRegisterAllocator(usedRegisters);
+                        scratchRegisterAllocator.lock(baseGPR);
+
+                        GPRReg scratch1 = scratchRegisterAllocator.allocateScratchGPR();
+                        GPRReg scratch2 = scratchRegisterAllocator.allocateScratchGPR();
+
+                        unsigned bytesPushed =
+                            scratchRegisterAllocator.preserveReusedRegistersByPushing(jit);
+
+                        // We've already saved these, so when we make a slow path call, we don't have
+                        // to save them again.
+                        usedRegisters.exclude(RegisterSet(scratch1, scratch2));
+
+                        WriteBarrierBuffer&amp; writeBarrierBuffer = jit.vm()-&gt;heap.writeBarrierBuffer();
+                        jit.load32(writeBarrierBuffer.currentIndexAddress(), scratch2);
+                        CCallHelpers::Jump needToFlush = jit.branch32(
+                            CCallHelpers::AboveOrEqual, scratch2,
+                            CCallHelpers::TrustedImm32(writeBarrierBuffer.capacity()));
+
+                        jit.add32(CCallHelpers::TrustedImm32(1), scratch2);
+                        jit.store32(scratch2, writeBarrierBuffer.currentIndexAddress());
+
+                        jit.move(CCallHelpers::TrustedImmPtr(writeBarrierBuffer.buffer()), scratch1);
+                        jit.storePtr(
+                            baseGPR,
+                            CCallHelpers::BaseIndex(
+                                scratch1, scratch2, CCallHelpers::ScalePtr,
+                                static_cast&lt;int32_t&gt;(-sizeof(void*))));
+
+                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, bytesPushed);
+
+                        params.doneJumps.append(jit.jump());
+
+                        needToFlush.link(&amp;jit);
+                        callOperation(
+                            usedRegisters, jit, params.lazySlowPath-&gt;callSiteIndex(),
+                            params.exceptionJumps, operationFlushWriteBarrierBuffer, InvalidGPRReg,
+                            baseGPR);
+                        scratchRegisterAllocator.restoreReusedRegistersByPopping(jit, bytesPushed);
+                        params.doneJumps.append(jit.jump());
+                    });
+            },
+            base);
</ins><span class="cx">         m_out.jump(continuation);
</span><span class="cx"> 
</span><span class="cx">         m_out.appendTo(continuation, lastNext);
</span></span></pre></div>
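<p>To make the location indexing described in the lazySlowPath() comment concrete (locations[0] is the return value and locations[1] onward follow the LValue arguments in order), here is a sketch modeled on the operationNewArrayWithSize site in this change:</p>
<pre>
// structureValue and publicLength are LValues computed during lowering (stage 1).
LValue slowResultValue = lazySlowPath(
    [=] (const Vector&lt;Location&gt;&amp; locations) -&gt; RefPtr&lt;LazySlowPath::Generator&gt; {
        // locations[0] -&gt; result, locations[1] -&gt; structureValue, locations[2] -&gt; publicLength.
        return createLazyCallGenerator(
            operationNewArrayWithSize, locations[0].directGPR(),
            locations[1].directGPR(), locations[2].directGPR());
    },
    structureValue, publicLength);
</pre>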
<a id="trunkSourceJavaScriptCoreftlFTLOperationscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLOperations.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -30,6 +30,8 @@
</span><span class="cx"> 
</span><span class="cx"> #include &quot;ClonedArguments.h&quot;
</span><span class="cx"> #include &quot;DirectArguments.h&quot;
</span><ins>+#include &quot;FTLJITCode.h&quot;
+#include &quot;FTLLazySlowPath.h&quot;
</ins><span class="cx"> #include &quot;InlineCallFrame.h&quot;
</span><span class="cx"> #include &quot;JSCInlines.h&quot;
</span><span class="cx"> #include &quot;JSLexicalEnvironment.h&quot;
</span><span class="lines">@@ -357,6 +359,22 @@
</span><span class="cx">     }
</span><span class="cx"> }
</span><span class="cx"> 
</span><ins>+extern &quot;C&quot; void* JIT_OPERATION compileFTLLazySlowPath(ExecState* exec, unsigned index)
+{
+    VM&amp; vm = exec-&gt;vm();
+
+    // We cannot GC. We've got pointers in evil places.
+    DeferGCForAWhile deferGC(vm.heap);
+
+    CodeBlock* codeBlock = exec-&gt;codeBlock();
+    JITCode* jitCode = codeBlock-&gt;jitCode()-&gt;ftl();
+
+    LazySlowPath&amp; lazySlowPath = *jitCode-&gt;lazySlowPaths[index];
+    lazySlowPath.generate(codeBlock);
+
+    return lazySlowPath.stub().code().executableAddress();
+}
+
</ins><span class="cx"> } } // namespace JSC::FTL
</span><span class="cx"> 
</span><span class="cx"> #endif // ENABLE(FTL_JIT)
</span></span></pre></div>
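<p>A hedged note on the runtime contract of compileFTLLazySlowPath(): the generation thunk that the finalizer links jumps to is expected to call it with the descriptor's index and then jump to the returned address. The thunk itself lives in FTLThunks and is not shown in this file; the sketch below only restates that contract.</p>
<pre>
// Assumed caller-side contract (not code from this change): 'exec' is the frame,
// 'index' is the position of this LazySlowPath in jitCode-&gt;lazySlowPaths.
void* stubAddress = compileFTLLazySlowPath(exec, index);
// The caller then jumps to stubAddress. Because generate() rewrites the patchpoint
// with MacroAssembler::replaceWithJump(), later executions branch straight to the
// stub and this operation is not reached again for the same slow path.
</pre>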
<a id="trunkSourceJavaScriptCoreftlFTLOperationsh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLOperations.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLOperations.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLOperations.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -33,6 +33,8 @@
</span><span class="cx"> 
</span><span class="cx"> namespace JSC { namespace FTL {
</span><span class="cx"> 
</span><ins>+class LazySlowPath;
+
</ins><span class="cx"> extern &quot;C&quot; {
</span><span class="cx"> 
</span><span class="cx"> JSCell* JIT_OPERATION operationNewObjectWithButterfly(ExecState*, Structure*) WTF_INTERNAL;
</span><span class="lines">@@ -43,6 +45,8 @@
</span><span class="cx"> void JIT_OPERATION operationPopulateObjectInOSR(
</span><span class="cx">     ExecState*, ExitTimeObjectMaterialization*, EncodedJSValue*, EncodedJSValue*) WTF_INTERNAL;
</span><span class="cx"> 
</span><ins>+void* JIT_OPERATION compileFTLLazySlowPath(ExecState*, unsigned) WTF_INTERNAL;
+
</ins><span class="cx"> } // extern &quot;C&quot;
</span><span class="cx"> 
</span><span class="cx"> } } // namespace JSC::DFG
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLSlowPathCallcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,5 +1,5 @@
</span><span class="cx"> /*
</span><del>- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
</del><ins>+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
</ins><span class="cx">  *
</span><span class="cx">  * Redistribution and use in source and binary forms, with or without
</span><span class="cx">  * modification, are permitted provided that the following conditions
</span><span class="lines">@@ -30,188 +30,116 @@
</span><span class="cx"> 
</span><span class="cx"> #include &quot;CCallHelpers.h&quot;
</span><span class="cx"> #include &quot;FTLState.h&quot;
</span><ins>+#include &quot;FTLThunks.h&quot;
</ins><span class="cx"> #include &quot;GPRInfo.h&quot;
</span><span class="cx"> #include &quot;JSCInlines.h&quot;
</span><span class="cx"> 
</span><span class="cx"> namespace JSC { namespace FTL {
</span><span class="cx"> 
</span><del>-namespace {
-
</del><span class="cx"> // This code relies on us being 64-bit. FTL is currently always 64-bit.
</span><span class="cx"> static const size_t wordSize = 8;
</span><span class="cx"> 
</span><del>-// This will be an RAII thingy that will set up the necessary stack sizes and offsets and such.
-class CallContext {
-public:
-    CallContext(
-        State&amp; state, const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit,
-        unsigned numArgs, GPRReg returnRegister)
-        : m_state(state)
-        , m_usedRegisters(usedRegisters)
-        , m_jit(jit)
-        , m_numArgs(numArgs)
-        , m_returnRegister(returnRegister)
-    {
-        // We don't care that you're using callee-save, stack, or hardware registers.
-        m_usedRegisters.exclude(RegisterSet::stackRegisters());
-        m_usedRegisters.exclude(RegisterSet::reservedHardwareRegisters());
-        m_usedRegisters.exclude(RegisterSet::calleeSaveRegisters());
</del><ins>+SlowPathCallContext::SlowPathCallContext(
+    RegisterSet usedRegisters, CCallHelpers&amp; jit, unsigned numArgs, GPRReg returnRegister)
+    : m_jit(jit)
+    , m_numArgs(numArgs)
+    , m_returnRegister(returnRegister)
+{
+    // We don't care that you're using callee-save, stack, or hardware registers.
+    usedRegisters.exclude(RegisterSet::stackRegisters());
+    usedRegisters.exclude(RegisterSet::reservedHardwareRegisters());
+    usedRegisters.exclude(RegisterSet::calleeSaveRegisters());
</ins><span class="cx">         
</span><del>-        // The return register doesn't need to be saved.
-        if (m_returnRegister != InvalidGPRReg)
-            m_usedRegisters.clear(m_returnRegister);
</del><ins>+    // The return register doesn't need to be saved.
+    if (m_returnRegister != InvalidGPRReg)
+        usedRegisters.clear(m_returnRegister);
</ins><span class="cx">         
</span><del>-        size_t stackBytesNeededForReturnAddress = wordSize;
</del><ins>+    size_t stackBytesNeededForReturnAddress = wordSize;
</ins><span class="cx">         
</span><del>-        m_offsetToSavingArea =
-            (std::max(m_numArgs, NUMBER_OF_ARGUMENT_REGISTERS) - NUMBER_OF_ARGUMENT_REGISTERS) * wordSize;
</del><ins>+    m_offsetToSavingArea =
+        (std::max(m_numArgs, NUMBER_OF_ARGUMENT_REGISTERS) - NUMBER_OF_ARGUMENT_REGISTERS) * wordSize;
</ins><span class="cx">         
</span><del>-        for (unsigned i = std::min(NUMBER_OF_ARGUMENT_REGISTERS, numArgs); i--;)
-            m_argumentRegisters.set(GPRInfo::toArgumentRegister(i));
-        m_callingConventionRegisters.merge(m_argumentRegisters);
-        if (returnRegister != InvalidGPRReg)
-            m_callingConventionRegisters.set(GPRInfo::returnValueGPR);
-        m_callingConventionRegisters.filter(m_usedRegisters);
</del><ins>+    for (unsigned i = std::min(NUMBER_OF_ARGUMENT_REGISTERS, numArgs); i--;)
+        m_argumentRegisters.set(GPRInfo::toArgumentRegister(i));
+    m_callingConventionRegisters.merge(m_argumentRegisters);
+    if (returnRegister != InvalidGPRReg)
+        m_callingConventionRegisters.set(GPRInfo::returnValueGPR);
+    m_callingConventionRegisters.filter(usedRegisters);
</ins><span class="cx">         
</span><del>-        unsigned numberOfCallingConventionRegisters =
-            m_callingConventionRegisters.numberOfSetRegisters();
</del><ins>+    unsigned numberOfCallingConventionRegisters =
+        m_callingConventionRegisters.numberOfSetRegisters();
</ins><span class="cx">         
</span><del>-        size_t offsetToThunkSavingArea =
-            m_offsetToSavingArea +
-            numberOfCallingConventionRegisters * wordSize;
</del><ins>+    size_t offsetToThunkSavingArea =
+        m_offsetToSavingArea +
+        numberOfCallingConventionRegisters * wordSize;
</ins><span class="cx">         
</span><del>-        m_stackBytesNeeded =
-            offsetToThunkSavingArea +
-            stackBytesNeededForReturnAddress +
-            (m_usedRegisters.numberOfSetRegisters() - numberOfCallingConventionRegisters) * wordSize;
</del><ins>+    m_stackBytesNeeded =
+        offsetToThunkSavingArea +
+        stackBytesNeededForReturnAddress +
+        (usedRegisters.numberOfSetRegisters() - numberOfCallingConventionRegisters) * wordSize;
</ins><span class="cx">         
</span><del>-        m_stackBytesNeeded = (m_stackBytesNeeded + stackAlignmentBytes() - 1) &amp; ~(stackAlignmentBytes() - 1);
</del><ins>+    m_stackBytesNeeded = (m_stackBytesNeeded + stackAlignmentBytes() - 1) &amp; ~(stackAlignmentBytes() - 1);
</ins><span class="cx">         
</span><del>-        m_jit.subPtr(CCallHelpers::TrustedImm32(m_stackBytesNeeded), CCallHelpers::stackPointerRegister);
</del><ins>+    m_jit.subPtr(CCallHelpers::TrustedImm32(m_stackBytesNeeded), CCallHelpers::stackPointerRegister);
+
+    m_thunkSaveSet = usedRegisters;
</ins><span class="cx">         
</span><del>-        m_thunkSaveSet = m_usedRegisters;
-        
-        // This relies on all calling convention registers also being temp registers.
-        unsigned stackIndex = 0;
-        for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
-            GPRReg reg = GPRInfo::toRegister(i);
-            if (!m_callingConventionRegisters.get(reg))
-                continue;
-            m_jit.storePtr(reg, CCallHelpers::Address(CCallHelpers::stackPointerRegister, m_offsetToSavingArea + (stackIndex++) * wordSize));
-            m_thunkSaveSet.clear(reg);
-        }
-        
-        m_offset = offsetToThunkSavingArea;
</del><ins>+    // This relies on all calling convention registers also being temp registers.
+    unsigned stackIndex = 0;
+    for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
+        GPRReg reg = GPRInfo::toRegister(i);
+        if (!m_callingConventionRegisters.get(reg))
+            continue;
+        m_jit.storePtr(reg, CCallHelpers::Address(CCallHelpers::stackPointerRegister, m_offsetToSavingArea + (stackIndex++) * wordSize));
+        m_thunkSaveSet.clear(reg);
</ins><span class="cx">     }
</span><del>-    
-    ~CallContext()
-    {
-        if (m_returnRegister != InvalidGPRReg)
-            m_jit.move(GPRInfo::returnValueGPR, m_returnRegister);
</del><span class="cx">         
</span><del>-        unsigned stackIndex = 0;
-        for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
-            GPRReg reg = GPRInfo::toRegister(i);
-            if (!m_callingConventionRegisters.get(reg))
-                continue;
-            m_jit.loadPtr(CCallHelpers::Address(CCallHelpers::stackPointerRegister, m_offsetToSavingArea + (stackIndex++) * wordSize), reg);
-        }
-        
-        m_jit.addPtr(CCallHelpers::TrustedImm32(m_stackBytesNeeded), CCallHelpers::stackPointerRegister);
-    }
</del><ins>+    m_offset = offsetToThunkSavingArea;
+}
</ins><span class="cx">     
</span><del>-    RegisterSet usedRegisters() const
-    {
-        return m_thunkSaveSet;
-    }
</del><ins>+SlowPathCallContext::~SlowPathCallContext()
+{
+    if (m_returnRegister != InvalidGPRReg)
+        m_jit.move(GPRInfo::returnValueGPR, m_returnRegister);
</ins><span class="cx">     
</span><del>-    ptrdiff_t offset() const
-    {
-        return m_offset;
</del><ins>+    unsigned stackIndex = 0;
+    for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
+        GPRReg reg = GPRInfo::toRegister(i);
+        if (!m_callingConventionRegisters.get(reg))
+            continue;
+        m_jit.loadPtr(CCallHelpers::Address(CCallHelpers::stackPointerRegister, m_offsetToSavingArea + (stackIndex++) * wordSize), reg);
</ins><span class="cx">     }
</span><span class="cx">     
</span><del>-    SlowPathCallKey keyWithTarget(void* callTarget) const
-    {
-        return SlowPathCallKey(usedRegisters(), callTarget, m_argumentRegisters, offset());
-    }
-    
-    MacroAssembler::Call makeCall(void* callTarget, MacroAssembler::JumpList* exceptionTarget)
-    {
-        MacroAssembler::Call result = m_jit.call();
-        m_state.finalizer-&gt;slowPathCalls.append(SlowPathCall(
-            result, keyWithTarget(callTarget)));
-        if (exceptionTarget)
-            exceptionTarget-&gt;append(m_jit.emitExceptionCheck());
-        return result;
-    }
-    
-private:
-    State&amp; m_state;
-    RegisterSet m_usedRegisters;
-    RegisterSet m_argumentRegisters;
-    RegisterSet m_callingConventionRegisters;
-    CCallHelpers&amp; m_jit;
-    unsigned m_numArgs;
-    GPRReg m_returnRegister;
-    size_t m_offsetToSavingArea;
-    size_t m_stackBytesNeeded;
-    RegisterSet m_thunkSaveSet;
-    ptrdiff_t m_offset;
-};
-
-} // anonymous namespace
-
-void storeCodeOrigin(State&amp; state, CCallHelpers&amp; jit, CodeOrigin codeOrigin)
-{
-    if (!codeOrigin.isSet())
-        return;
-    
-    CallSiteIndex callSite = state.jitCode-&gt;common.addCodeOrigin(codeOrigin);
-    unsigned locationBits = callSite.bits();
-    jit.store32(
-        CCallHelpers::TrustedImm32(locationBits),
-        CCallHelpers::tagFor(static_cast&lt;VirtualRegister&gt;(JSStack::ArgumentCount)));
</del><ins>+    m_jit.addPtr(CCallHelpers::TrustedImm32(m_stackBytesNeeded), CCallHelpers::stackPointerRegister);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><del>-MacroAssembler::Call callOperation(
-    State&amp; state, const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit,
-    CodeOrigin codeOrigin, MacroAssembler::JumpList* exceptionTarget,
-    J_JITOperation_ESsiCI operation, GPRReg result, StructureStubInfo* stubInfo,
-    GPRReg object, const UniquedStringImpl* uid)
</del><ins>+SlowPathCallKey SlowPathCallContext::keyWithTarget(void* callTarget) const
</ins><span class="cx"> {
</span><del>-    storeCodeOrigin(state, jit, codeOrigin);
-    CallContext context(state, usedRegisters, jit, 4, result);
-    jit.setupArgumentsWithExecState(
-        CCallHelpers::TrustedImmPtr(stubInfo), object, CCallHelpers::TrustedImmPtr(uid));
-    return context.makeCall(bitwise_cast&lt;void*&gt;(operation), exceptionTarget);
</del><ins>+    return SlowPathCallKey(m_thunkSaveSet, callTarget, m_argumentRegisters, m_offset);
</ins><span class="cx"> }
</span><span class="cx"> 
</span><del>-MacroAssembler::Call callOperation(
-    State&amp; state, const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit,
-    CodeOrigin codeOrigin, MacroAssembler::JumpList* exceptionTarget,
-    J_JITOperation_ESsiJI operation, GPRReg result, StructureStubInfo* stubInfo,
-    GPRReg object, UniquedStringImpl* uid)
</del><ins>+SlowPathCall SlowPathCallContext::makeCall(void* callTarget)
</ins><span class="cx"> {
</span><del>-    storeCodeOrigin(state, jit, codeOrigin);
-    CallContext context(state, usedRegisters, jit, 4, result);
-    jit.setupArgumentsWithExecState(
-        CCallHelpers::TrustedImmPtr(stubInfo), object,
-        CCallHelpers::TrustedImmPtr(uid));
-    return context.makeCall(bitwise_cast&lt;void*&gt;(operation), exceptionTarget);
</del><ins>+    SlowPathCall result = SlowPathCall(m_jit.call(), keyWithTarget(callTarget));
+
+    m_jit.addLinkTask(
+        [result] (LinkBuffer&amp; linkBuffer) {
+            VM&amp; vm = linkBuffer.vm();
+
+            MacroAssemblerCodeRef thunk =
+                vm.ftlThunks-&gt;getSlowPathCallThunk(vm, result.key());
+
+            linkBuffer.link(result.call(), CodeLocationLabel(thunk.code()));
+        });
+    
+    return result;
</ins><span class="cx"> }
</span><span class="cx"> 
</span><del>-MacroAssembler::Call callOperation(
-    State&amp; state, const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit, 
-    CodeOrigin codeOrigin, MacroAssembler::JumpList* exceptionTarget,
-    V_JITOperation_ESsiJJI operation, StructureStubInfo* stubInfo, GPRReg value,
-    GPRReg object, UniquedStringImpl* uid)
</del><ins>+CallSiteIndex callSiteIndexForCodeOrigin(State&amp; state, CodeOrigin codeOrigin)
</ins><span class="cx"> {
</span><del>-    storeCodeOrigin(state, jit, codeOrigin);
-    CallContext context(state, usedRegisters, jit, 5, InvalidGPRReg);
-    jit.setupArgumentsWithExecState(
-        CCallHelpers::TrustedImmPtr(stubInfo), value, object,
-        CCallHelpers::TrustedImmPtr(uid));
-    return context.makeCall(bitwise_cast&lt;void*&gt;(operation), exceptionTarget);
</del><ins>+    if (codeOrigin)
+        return state.jitCode-&gt;common.addCodeOrigin(codeOrigin);
+    return CallSiteIndex();
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> } } // namespace JSC::FTL
</span></span></pre></div>
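<p>A minimal sketch of how the new <code>SlowPathCallContext</code> is meant to be used, mirroring what the <code>callOperation()</code> templates in FTLSlowPathCall.h do. The operation type, registers, and stub info here are hypothetical placeholders, the exception check is omitted, and the sketch is not part of this patch.</p>
<pre>
// Illustrative only. The context saves the live calling-convention registers into the
// saving area and adjusts the stack pointer; makeCall() registers a link task, so the
// call gets linked to its slow path thunk automatically.
static void emitExampleSlowPathCall(
    CCallHelpers&amp; jit, const RegisterSet&amp; usedRegisters, J_JITOperation_ESsiJI operation,
    GPRReg resultGPR, StructureStubInfo* stubInfo, GPRReg baseGPR, UniquedStringImpl* uid)
{
    SlowPathCall call;
    {
        // numArgs == 4: the ExecState plus the three explicit arguments below.
        SlowPathCallContext context(usedRegisters, jit, 4, resultGPR);
        jit.setupArgumentsWithExecState(
            CCallHelpers::TrustedImmPtr(stubInfo), baseGPR, CCallHelpers::TrustedImmPtr(uid));
        call = context.makeCall(bitwise_cast&lt;void*&gt;(operation));
    }
    // By this point the destructor has restored the saved registers and the stack pointer.
}
</pre>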
<a id="trunkSourceJavaScriptCoreftlFTLSlowPathCallh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLSlowPathCall.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,5 +1,5 @@
</span><span class="cx"> /*
</span><del>- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
</del><ins>+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
</ins><span class="cx">  *
</span><span class="cx">  * Redistribution and use in source and binary forms, with or without
</span><span class="cx">  * modification, are permitted provided that the following conditions
</span><span class="lines">@@ -55,21 +55,72 @@
</span><span class="cx">     SlowPathCallKey m_key;
</span><span class="cx"> };
</span><span class="cx"> 
</span><del>-void storeCodeOrigin(State&amp;, CCallHelpers&amp;, CodeOrigin);
</del><ins>+// This is an RAII class that sets up the stack size, register-saving area offsets, and argument/return registers needed for a slow path call.
+class SlowPathCallContext {
+public:
+    SlowPathCallContext(RegisterSet usedRegisters, CCallHelpers&amp;, unsigned numArgs, GPRReg returnRegister);
+    ~SlowPathCallContext();
</ins><span class="cx"> 
</span><del>-MacroAssembler::Call callOperation(
-    State&amp;, const RegisterSet&amp;, CCallHelpers&amp;, CodeOrigin, CCallHelpers::JumpList*,
-    J_JITOperation_ESsiCI, GPRReg, StructureStubInfo*, GPRReg,
-    const UniquedStringImpl* uid);
-MacroAssembler::Call callOperation(
-    State&amp;, const RegisterSet&amp;, CCallHelpers&amp;, CodeOrigin, CCallHelpers::JumpList*,
-    J_JITOperation_ESsiJI, GPRReg result, StructureStubInfo*, GPRReg object,
-    UniquedStringImpl* uid);
-MacroAssembler::Call callOperation(
-    State&amp;, const RegisterSet&amp;, CCallHelpers&amp;, CodeOrigin, CCallHelpers::JumpList*,
-    V_JITOperation_ESsiJJI, StructureStubInfo*, GPRReg value, GPRReg object,
-    UniquedStringImpl* uid);
</del><ins>+    // NOTE: The call that this returns is already going to be linked by the JIT using addLinkTask(),
+    // so there is no need for you to link it yourself.
+    SlowPathCall makeCall(void* callTarget);
</ins><span class="cx"> 
</span><ins>+private:
+    SlowPathCallKey keyWithTarget(void* callTarget) const;
+    
+    RegisterSet m_argumentRegisters;
+    RegisterSet m_callingConventionRegisters;
+    CCallHelpers&amp; m_jit;
+    unsigned m_numArgs;
+    GPRReg m_returnRegister;
+    size_t m_offsetToSavingArea;
+    size_t m_stackBytesNeeded;
+    RegisterSet m_thunkSaveSet;
+    ptrdiff_t m_offset;
+};
+
+template&lt;typename... ArgumentTypes&gt;
+SlowPathCall callOperation(
+    const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit, CCallHelpers::JumpList* exceptionTarget,
+    FunctionPtr function, GPRReg resultGPR, ArgumentTypes... arguments)
+{
+    SlowPathCall call;
+    {
+        SlowPathCallContext context(usedRegisters, jit, sizeof...(ArgumentTypes) + 1, resultGPR);
+        jit.setupArgumentsWithExecState(arguments...);
+        call = context.makeCall(function.value());
+    }
+    if (exceptionTarget)
+        exceptionTarget-&gt;append(jit.emitExceptionCheck());
+    return call;
+}
+
+template&lt;typename... ArgumentTypes&gt;
+SlowPathCall callOperation(
+    const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit, CallSiteIndex callSiteIndex,
+    CCallHelpers::JumpList* exceptionTarget, FunctionPtr function, GPRReg resultGPR,
+    ArgumentTypes... arguments)
+{
+    if (callSiteIndex) {
+        jit.store32(
+            CCallHelpers::TrustedImm32(callSiteIndex.bits()),
+            CCallHelpers::tagFor(JSStack::ArgumentCount));
+    }
+    return callOperation(usedRegisters, jit, exceptionTarget, function, resultGPR, arguments...);
+}
+
+CallSiteIndex callSiteIndexForCodeOrigin(State&amp;, CodeOrigin);
+
+template&lt;typename... ArgumentTypes&gt;
+SlowPathCall callOperation(
+    State&amp; state, const RegisterSet&amp; usedRegisters, CCallHelpers&amp; jit, CodeOrigin codeOrigin,
+    CCallHelpers::JumpList* exceptionTarget, FunctionPtr function, GPRReg result, ArgumentTypes... arguments)
+{
+    return callOperation(
+        usedRegisters, jit, callSiteIndexForCodeOrigin(state, codeOrigin), exceptionTarget, function,
+        result, arguments...);
+}
+
</ins><span class="cx"> } } // namespace JSC::FTL
</span><span class="cx"> 
</span><span class="cx"> #endif // ENABLE(FTL_JIT)
</span></span></pre></div>
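<p>The fixed-signature <code>callOperation()</code> overloads deleted above collapse into the variadic templates added here. A minimal sketch of a call site under assumed names (the operation, registers, and stub info are placeholders, not taken from this patch):</p>
<pre>
// Illustrative only: one variadic callOperation() now covers what previously
// required a dedicated overload per operation signature.
SlowPathCall call = callOperation(
    state, usedRegisters, jit, codeOrigin, &amp;exceptionTarget,
    FunctionPtr(operationGetById), resultGPR,
    CCallHelpers::TrustedImmPtr(stubInfo), baseGPR, CCallHelpers::TrustedImmPtr(uid));
// sizeof...(ArgumentTypes) == 3, so the context is created with numArgs == 4 (ExecState + 3).
</pre>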
<a id="trunkSourceJavaScriptCoreftlFTLStateh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLState.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLState.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLState.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -78,6 +78,7 @@
</span><span class="cx">     SegmentedVector&lt;GetByIdDescriptor&gt; getByIds;
</span><span class="cx">     SegmentedVector&lt;PutByIdDescriptor&gt; putByIds;
</span><span class="cx">     SegmentedVector&lt;CheckInDescriptor&gt; checkIns;
</span><ins>+    SegmentedVector&lt;LazySlowPathDescriptor&gt; lazySlowPaths;
</ins><span class="cx">     Vector&lt;JSCall&gt; jsCalls;
</span><span class="cx">     Vector&lt;JSCallVarargs&gt; jsCallVarargses;
</span><span class="cx">     Vector&lt;JSTailCall&gt; jsTailCalls;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLThunkscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLThunks.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -31,6 +31,7 @@
</span><span class="cx"> #include &quot;AssemblyHelpers.h&quot;
</span><span class="cx"> #include &quot;FPRInfo.h&quot;
</span><span class="cx"> #include &quot;FTLOSRExitCompiler.h&quot;
</span><ins>+#include &quot;FTLOperations.h&quot;
</ins><span class="cx"> #include &quot;FTLSaveRestore.h&quot;
</span><span class="cx"> #include &quot;GPRInfo.h&quot;
</span><span class="cx"> #include &quot;LinkBuffer.h&quot;
</span><span class="lines">@@ -39,11 +40,12 @@
</span><span class="cx"> 
</span><span class="cx"> using namespace DFG;
</span><span class="cx"> 
</span><del>-MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
</del><ins>+static MacroAssemblerCodeRef genericGenerationThunkGenerator(
+    VM* vm, FunctionPtr generationFunction, const char* name, unsigned extraPopsToRestore)
</ins><span class="cx"> {
</span><span class="cx">     AssemblyHelpers jit(vm, 0);
</span><span class="cx">     
</span><del>-    // Note that the &quot;return address&quot; will be the OSR exit ID.
</del><ins>+    // Note that the &quot;return address&quot; will be the ID that we pass to the generation function.
</ins><span class="cx">     
</span><span class="cx">     ptrdiff_t stackMisalignment = MacroAssembler::pushToSaveByteOffset();
</span><span class="cx">     
</span><span class="lines">@@ -90,11 +92,14 @@
</span><span class="cx">     while (numberOfRequiredPops--)
</span><span class="cx">         jit.popToRestore(GPRInfo::regT1);
</span><span class="cx">     jit.popToRestore(MacroAssembler::framePointerRegister);
</span><del>-    
-    // At this point we're sitting on the return address - so if we did a jump right now, the
-    // tail-callee would be happy. Instead we'll stash the callee in the return address and then
-    // restore all registers.
-    
</del><ins>+
+    // When we came in here, one additional value had been pushed onto the stack. Some clients want it
+    // popped before proceeding.
+    while (extraPopsToRestore--)
+        jit.popToRestore(GPRInfo::regT1);
+
+    // Put the return address wherever the return instruction wants it. On all platforms, this
+    // ensures that the return address is out of the way of register restoration.
</ins><span class="cx">     jit.restoreReturnAddressBeforeReturn(GPRInfo::regT0);
</span><span class="cx"> 
</span><span class="cx">     restoreAllRegisters(jit, buffer);
</span><span class="lines">@@ -102,10 +107,24 @@
</span><span class="cx">     jit.ret();
</span><span class="cx">     
</span><span class="cx">     LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
</span><del>-    patchBuffer.link(functionCall, compileFTLOSRExit);
-    return FINALIZE_CODE(patchBuffer, (&quot;FTL OSR exit generation thunk&quot;));
</del><ins>+    patchBuffer.link(functionCall, generationFunction);
+    return FINALIZE_CODE(patchBuffer, (&quot;%s&quot;, name));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><ins>+MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
+{
+    unsigned extraPopsToRestore = 0;
+    return genericGenerationThunkGenerator(
+        vm, compileFTLOSRExit, &quot;FTL OSR exit generation thunk&quot;, extraPopsToRestore);
+}
+
+MacroAssemblerCodeRef lazySlowPathGenerationThunkGenerator(VM* vm)
+{
+    unsigned extraPopsToRestore = 1;
+    return genericGenerationThunkGenerator(
+        vm, compileFTLLazySlowPath, &quot;FTL lazy slow path generation thunk&quot;, extraPopsToRestore);
+}
+
</ins><span class="cx"> static void registerClobberCheck(AssemblyHelpers&amp; jit, RegisterSet dontClobber)
</span><span class="cx"> {
</span><span class="cx">     if (!Options::clobberAllRegsInFTLICSlowPath())
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreftlFTLThunksh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/ftl/FTLThunks.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/ftl/FTLThunks.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/ftl/FTLThunks.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -40,6 +40,7 @@
</span><span class="cx"> namespace FTL {
</span><span class="cx"> 
</span><span class="cx"> MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
</span><ins>+MacroAssemblerCodeRef lazySlowPathGenerationThunkGenerator(VM*);
</ins><span class="cx"> MacroAssemblerCodeRef slowPathCallThunkGenerator(VM&amp;, const SlowPathCallKey&amp;);
</span><span class="cx"> 
</span><span class="cx"> template&lt;typename KeyTypeArgument&gt;
</span></span></pre></div>
<a id="trunkSourceJavaScriptCoreinterpreterCallFrameh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/interpreter/CallFrame.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/interpreter/CallFrame.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/interpreter/CallFrame.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,7 +1,7 @@
</span><span class="cx"> /*
</span><span class="cx">  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
</span><span class="cx">  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
</span><del>- *  Copyright (C) 2003, 2007, 2008, 2011, 2013, 2014 Apple Inc. All rights reserved.
</del><ins>+ *  Copyright (C) 2003, 2007, 2008, 2011, 2013-2015 Apple Inc. All rights reserved.
</ins><span class="cx">  *
</span><span class="cx">  *  This library is free software; you can redistribute it and/or
</span><span class="cx">  *  modify it under the terms of the GNU Library General Public
</span><span class="lines">@@ -39,6 +39,11 @@
</span><span class="cx">     class JSScope;
</span><span class="cx"> 
</span><span class="cx">     struct CallSiteIndex {
</span><ins>+        CallSiteIndex()
+            : m_bits(UINT_MAX)
+        {
+        }
+        
</ins><span class="cx">         explicit CallSiteIndex(uint32_t bits)
</span><span class="cx">             : m_bits(bits)
</span><span class="cx">         { }
</span><span class="lines">@@ -47,6 +52,9 @@
</span><span class="cx">             : m_bits(bitwise_cast&lt;uint32_t&gt;(instruction))
</span><span class="cx">         { }
</span><span class="cx"> #endif
</span><ins>+
+        explicit operator bool() const { return m_bits != UINT_MAX; }
+        
</ins><span class="cx">         inline uint32_t bits() const { return m_bits; }
</span><span class="cx"> 
</span><span class="cx">     private:
</span></span></pre></div>
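<p>The default constructor and <code>explicit operator bool()</code> added above give <code>CallSiteIndex</code> a "no call site" sentinel, which the new <code>callOperation()</code> in FTLSlowPathCall.h tests before storing the index. A small sketch of the intended semantics (illustrative, not part of this patch):</p>
<pre>
CallSiteIndex noSite;         // default-constructed: m_bits == UINT_MAX
CallSiteIndex site(42u);      // a real index
ASSERT(!noSite);              // the sentinel converts to false
ASSERT(site);                 // any other value converts to true
ASSERT(site.bits() == 42u);
</pre>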
<a id="trunkSourceJavaScriptCorejitCCallHelpersh"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/CCallHelpers.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/CCallHelpers.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/jit/CCallHelpers.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -63,6 +63,8 @@
</span><span class="cx">         poke(GPRInfo::nonArgGPR0, POKE_ARGUMENT_OFFSET + argumentIndex - GPRInfo::numberOfArgumentRegisters);
</span><span class="cx">     }
</span><span class="cx"> 
</span><ins>+    void setupArgumentsWithExecState() { setupArgumentsExecState(); }
+
</ins><span class="cx">     // These methods used to sort arguments into the correct registers.
</span><span class="cx">     // On X86 we use cdecl calling conventions, which pass all arguments on the
</span><span class="cx">     // stack. On other architectures we may need to sort values into the
</span></span></pre></div>
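<p>The zero-argument <code>setupArgumentsWithExecState()</code> overload added above exists so that the variadic <code>callOperation()</code> in FTLSlowPathCall.h stays well-formed when the argument pack is empty. A sketch of such a call, with hypothetical operation and register names:</p>
<pre>
// Illustrative only: with no extra arguments, callOperation() expands to
// jit.setupArgumentsWithExecState() with an empty pack, i.e. the new overload.
SlowPathCall call = callOperation(
    usedRegisters, jit, &amp;exceptionTarget,
    FunctionPtr(operationDoSomething), resultGPR); // sizeof...(ArgumentTypes) == 0
</pre>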
<a id="trunkSourceJavaScriptCorejitJITOperationscpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/JavaScriptCore/jit/JITOperations.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/JavaScriptCore/jit/JITOperations.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/JavaScriptCore/jit/JITOperations.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1031,7 +1031,7 @@
</span><span class="cx"> {
</span><span class="cx">     VM* vm = &amp;exec-&gt;vm();
</span><span class="cx">     NativeCallFrameTracer tracer(vm, exec);
</span><del>-    
</del><ins>+
</ins><span class="cx">     return constructEmptyObject(exec, structure);
</span><span class="cx"> }
</span><span class="cx"> 
</span></span></pre></div>
<a id="trunkSourceWTFChangeLog"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/ChangeLog (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/ChangeLog        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/WTF/ChangeLog        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -1,3 +1,37 @@
</span><ins>+2015-10-10  Filip Pizlo  &lt;fpizlo@apple.com&gt;
+
+        FTL should generate code to call slow paths lazily
+        https://bugs.webkit.org/show_bug.cgi?id=149936
+
+        Reviewed by Saam Barati.
+
+        Enables SharedTask to handle any function type, not just void().
+
+        It's probably better to use SharedTask instead of std::function in performance-sensitive
+        code. std::function uses the system malloc and has copy semantics. SharedTask uses FastMalloc
+        and has aliasing semantics. So, you can just trust that it will have sensible performance
+        characteristics.
+
+        * wtf/ParallelHelperPool.cpp:
+        (WTF::ParallelHelperClient::~ParallelHelperClient):
+        (WTF::ParallelHelperClient::setTask):
+        (WTF::ParallelHelperClient::doSomeHelping):
+        (WTF::ParallelHelperClient::runTaskInParallel):
+        (WTF::ParallelHelperClient::finish):
+        (WTF::ParallelHelperClient::claimTask):
+        (WTF::ParallelHelperClient::runTask):
+        (WTF::ParallelHelperPool::doSomeHelping):
+        (WTF::ParallelHelperPool::helperThreadBody):
+        * wtf/ParallelHelperPool.h:
+        (WTF::ParallelHelperClient::setFunction):
+        (WTF::ParallelHelperClient::runFunctionInParallel):
+        (WTF::ParallelHelperClient::pool):
+        * wtf/SharedTask.h:
+        (WTF::createSharedTask):
+        (WTF::SharedTask::SharedTask): Deleted.
+        (WTF::SharedTask::~SharedTask): Deleted.
+        (WTF::SharedTaskFunctor::SharedTaskFunctor): Deleted.
+
</ins><span class="cx"> 2015-10-10  Dan Bernstein  &lt;mitz@apple.com&gt;
</span><span class="cx"> 
</span><span class="cx">         [iOS] Remove unnecessary iOS version checks
</span></span></pre></div>
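<p>A small sketch of the copy-versus-aliasing distinction described in the ChangeLog entry above (illustrative only; assumes the relevant WTF and &lt;functional&gt; headers, and the capture size is arbitrary):</p>
<pre>
Vector&lt;int&gt; bigCapture(1024);

std::function&lt;void()&gt; f = [bigCapture] { /* uses bigCapture */ };
std::function&lt;void()&gt; g = f;   // copies the captured Vector (system malloc, linear time)

RefPtr&lt;SharedTask&lt;void()&gt;&gt; t = createSharedTask&lt;void()&gt;([bigCapture] { /* uses bigCapture */ });
RefPtr&lt;SharedTask&lt;void()&gt;&gt; u = t;   // only bumps a reference count; t and u alias one task
</pre>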
<a id="trunkSourceWTFwtfParallelHelperPoolcpp"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/wtf/ParallelHelperPool.cpp (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/wtf/ParallelHelperPool.cpp        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/WTF/wtf/ParallelHelperPool.cpp        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -53,7 +53,7 @@
</span><span class="cx">     }
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void ParallelHelperClient::setTask(RefPtr&lt;SharedTask&gt; task)
</del><ins>+void ParallelHelperClient::setTask(RefPtr&lt;SharedTask&lt;void()&gt;&gt; task)
</ins><span class="cx"> {
</span><span class="cx">     LockHolder locker(m_pool-&gt;m_lock);
</span><span class="cx">     RELEASE_ASSERT(!m_task);
</span><span class="lines">@@ -69,7 +69,7 @@
</span><span class="cx"> 
</span><span class="cx"> void ParallelHelperClient::doSomeHelping()
</span><span class="cx"> {
</span><del>-    RefPtr&lt;SharedTask&gt; task;
</del><ins>+    RefPtr&lt;SharedTask&lt;void()&gt;&gt; task;
</ins><span class="cx">     {
</span><span class="cx">         LockHolder locker(m_pool-&gt;m_lock);
</span><span class="cx">         task = claimTask(locker);
</span><span class="lines">@@ -80,7 +80,7 @@
</span><span class="cx">     runTask(task);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void ParallelHelperClient::runTaskInParallel(RefPtr&lt;SharedTask&gt; task)
</del><ins>+void ParallelHelperClient::runTaskInParallel(RefPtr&lt;SharedTask&lt;void()&gt;&gt; task)
</ins><span class="cx"> {
</span><span class="cx">     setTask(task);
</span><span class="cx">     doSomeHelping();
</span><span class="lines">@@ -94,7 +94,7 @@
</span><span class="cx">         m_pool-&gt;m_workCompleteCondition.wait(m_pool-&gt;m_lock);
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-RefPtr&lt;SharedTask&gt; ParallelHelperClient::claimTask(const LockHolder&amp;)
</del><ins>+RefPtr&lt;SharedTask&lt;void()&gt;&gt; ParallelHelperClient::claimTask(const LockHolder&amp;)
</ins><span class="cx"> {
</span><span class="cx">     if (!m_task)
</span><span class="cx">         return nullptr;
</span><span class="lines">@@ -103,7 +103,7 @@
</span><span class="cx">     return m_task;
</span><span class="cx"> }
</span><span class="cx"> 
</span><del>-void ParallelHelperClient::runTask(RefPtr&lt;SharedTask&gt; task)
</del><ins>+void ParallelHelperClient::runTask(RefPtr&lt;SharedTask&lt;void()&gt;&gt; task)
</ins><span class="cx"> {
</span><span class="cx">     RELEASE_ASSERT(m_numActive);
</span><span class="cx">     RELEASE_ASSERT(task);
</span><span class="lines">@@ -153,7 +153,7 @@
</span><span class="cx"> void ParallelHelperPool::doSomeHelping()
</span><span class="cx"> {
</span><span class="cx">     ParallelHelperClient* client;
</span><del>-    RefPtr&lt;SharedTask&gt; task;
</del><ins>+    RefPtr&lt;SharedTask&lt;void()&gt;&gt; task;
</ins><span class="cx">     {
</span><span class="cx">         LockHolder locker(m_lock);
</span><span class="cx">         client = getClientWithTask(locker);
</span><span class="lines">@@ -182,7 +182,7 @@
</span><span class="cx"> {
</span><span class="cx">     for (;;) {
</span><span class="cx">         ParallelHelperClient* client;
</span><del>-        RefPtr&lt;SharedTask&gt; task;
</del><ins>+        RefPtr&lt;SharedTask&lt;void()&gt;&gt; task;
</ins><span class="cx"> 
</span><span class="cx">         {
</span><span class="cx">             LockHolder locker(m_lock);
</span></span></pre></div>
<a id="trunkSourceWTFwtfParallelHelperPoolh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/wtf/ParallelHelperPool.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/wtf/ParallelHelperPool.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/WTF/wtf/ParallelHelperPool.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -130,12 +130,12 @@
</span><span class="cx">     WTF_EXPORT_PRIVATE ParallelHelperClient(RefPtr&lt;ParallelHelperPool&gt;);
</span><span class="cx">     WTF_EXPORT_PRIVATE ~ParallelHelperClient();
</span><span class="cx"> 
</span><del>-    WTF_EXPORT_PRIVATE void setTask(RefPtr&lt;SharedTask&gt;);
</del><ins>+    WTF_EXPORT_PRIVATE void setTask(RefPtr&lt;SharedTask&lt;void()&gt;&gt;);
</ins><span class="cx"> 
</span><span class="cx">     template&lt;typename Functor&gt;
</span><span class="cx">     void setFunction(const Functor&amp; functor)
</span><span class="cx">     {
</span><del>-        setTask(createSharedTask(functor));
</del><ins>+        setTask(createSharedTask&lt;void()&gt;(functor));
</ins><span class="cx">     }
</span><span class="cx">     
</span><span class="cx">     WTF_EXPORT_PRIVATE void finish();
</span><span class="lines">@@ -146,7 +146,7 @@
</span><span class="cx">     // client-&gt;setTask(task);
</span><span class="cx">     // client-&gt;doSomeHelping();
</span><span class="cx">     // client-&gt;finish();
</span><del>-    WTF_EXPORT_PRIVATE void runTaskInParallel(RefPtr&lt;SharedTask&gt;);
</del><ins>+    WTF_EXPORT_PRIVATE void runTaskInParallel(RefPtr&lt;SharedTask&lt;void()&gt;&gt;);
</ins><span class="cx"> 
</span><span class="cx">     // Equivalent to:
</span><span class="cx">     // client-&gt;setFunction(functor);
</span><span class="lines">@@ -155,7 +155,7 @@
</span><span class="cx">     template&lt;typename Functor&gt;
</span><span class="cx">     void runFunctionInParallel(const Functor&amp; functor)
</span><span class="cx">     {
</span><del>-        runTaskInParallel(createSharedTask(functor));
</del><ins>+        runTaskInParallel(createSharedTask&lt;void()&gt;(functor));
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     ParallelHelperPool&amp; pool() { return *m_pool; }
</span><span class="lines">@@ -165,11 +165,11 @@
</span><span class="cx">     friend class ParallelHelperPool;
</span><span class="cx"> 
</span><span class="cx">     void finish(const LockHolder&amp;);
</span><del>-    RefPtr&lt;SharedTask&gt; claimTask(const LockHolder&amp;);
-    void runTask(RefPtr&lt;SharedTask&gt;);
</del><ins>+    RefPtr&lt;SharedTask&lt;void()&gt;&gt; claimTask(const LockHolder&amp;);
+    void runTask(RefPtr&lt;SharedTask&lt;void()&gt;&gt;);
</ins><span class="cx">     
</span><span class="cx">     RefPtr&lt;ParallelHelperPool&gt; m_pool;
</span><del>-    RefPtr&lt;SharedTask&gt; m_task;
</del><ins>+    RefPtr&lt;SharedTask&lt;void()&gt;&gt; m_task;
</ins><span class="cx">     unsigned m_numActive { 0 };
</span><span class="cx"> };
</span><span class="cx"> 
</span></span></pre></div>
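<p>A minimal sketch of the updated client API, assuming an already-created pool; the setup details are hypothetical and not part of this patch:</p>
<pre>
// 'pool' is a RefPtr&lt;ParallelHelperPool&gt; obtained elsewhere.
ParallelHelperClient client(pool);

// runFunctionInParallel() wraps the lambda via createSharedTask&lt;void()&gt;() and does
// not return until the task is finished, so capturing by reference is safe here.
client.runFunctionInParallel(
    [&amp;] () {
        // Each participating thread (helpers and the caller) runs this once;
        // the body should loop, claiming slices of the shared work.
    });
</pre>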
<a id="trunkSourceWTFwtfSharedTaskh"></a>
<div class="modfile"><h4>Modified: trunk/Source/WTF/wtf/SharedTask.h (190859 => 190860)</h4>
<pre class="diff"><span>
<span class="info">--- trunk/Source/WTF/wtf/SharedTask.h        2015-10-12 17:25:37 UTC (rev 190859)
+++ trunk/Source/WTF/wtf/SharedTask.h        2015-10-12 17:56:26 UTC (rev 190860)
</span><span class="lines">@@ -41,13 +41,13 @@
</span><span class="cx"> //
</span><span class="cx"> // Here's an example of how SharedTask can be better than std::function. If you do:
</span><span class="cx"> //
</span><del>-// std::function a = b;
</del><ins>+// std::function&lt;int(double)&gt; a = b;
</ins><span class="cx"> //
</span><span class="cx"> // Then &quot;a&quot; will get its own copy of all captured by-value variables. The act of copying may
</span><span class="cx"> // require calls to system malloc, and it may be linear time in the total size of captured
</span><span class="cx"> // variables. On the other hand, if you do:
</span><span class="cx"> //
</span><del>-// RefPtr&lt;SharedTask&gt; a = b;
</del><ins>+// RefPtr&lt;SharedTask&lt;int(double)&gt;&gt; a = b;
</ins><span class="cx"> //
</span><span class="cx"> // Then &quot;a&quot; will point to the same task as b, and the only work involved is the CAS to increase the
</span><span class="cx"> // reference count.
</span><span class="lines">@@ -58,18 +58,21 @@
</span><span class="cx"> // createSharedTask(), below). But SharedTask also allows you to create your own subclass and put
</span><span class="cx"> // state in member fields. This can be more natural if you want fine-grained control over what
</span><span class="cx"> // state is shared between instances of the task.
</span><del>-class SharedTask : public ThreadSafeRefCounted&lt;SharedTask&gt; {
</del><ins>+template&lt;typename FunctionType&gt; class SharedTask;
+template&lt;typename ResultType, typename... ArgumentTypes&gt;
+class SharedTask&lt;ResultType (ArgumentTypes...)&gt; : public ThreadSafeRefCounted&lt;SharedTask&lt;ResultType (ArgumentTypes...)&gt;&gt; {
</ins><span class="cx"> public:
</span><span class="cx">     SharedTask() { }
</span><span class="cx">     virtual ~SharedTask() { }
</span><span class="cx"> 
</span><del>-    virtual void run() = 0;
</del><ins>+    virtual ResultType run(ArgumentTypes&amp;&amp;...) = 0;
</ins><span class="cx"> };
</span><span class="cx"> 
</span><span class="cx"> // This is a utility class that allows you to create a SharedTask subclass using a lambda. Usually,
</span><span class="cx"> // you don't want to use this class directly. Use createSharedTask() instead.
</span><del>-template&lt;typename Functor&gt;
-class SharedTaskFunctor : public SharedTask {
</del><ins>+template&lt;typename FunctionType, typename Functor&gt; class SharedTaskFunctor;
+template&lt;typename ResultType, typename... ArgumentTypes, typename Functor&gt;
+class SharedTaskFunctor&lt;ResultType (ArgumentTypes...), Functor&gt; : public SharedTask&lt;ResultType (ArgumentTypes...)&gt; {
</ins><span class="cx"> public:
</span><span class="cx">     SharedTaskFunctor(const Functor&amp; functor)
</span><span class="cx">         : m_functor(functor)
</span><span class="lines">@@ -77,9 +80,9 @@
</span><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx"> private:
</span><del>-    void run() override
</del><ins>+    ResultType run(ArgumentTypes&amp;&amp;... arguments) override
</ins><span class="cx">     {
</span><del>-        m_functor();
</del><ins>+        return m_functor(std::forward&lt;ArgumentTypes&gt;(arguments)...);
</ins><span class="cx">     }
</span><span class="cx"> 
</span><span class="cx">     Functor m_functor;
</span><span class="lines">@@ -87,7 +90,7 @@
</span><span class="cx"> 
</span><span class="cx"> // Create a SharedTask from a functor, such as a lambda. You can use this like so:
</span><span class="cx"> //
</span><del>-// RefPtr&lt;SharedTask&gt; task = createSharedTask(
</del><ins>+// RefPtr&lt;SharedTask&lt;void()&gt;&gt; task = createSharedTask&lt;void()&gt;(
</ins><span class="cx"> //     [=] () {
</span><span class="cx"> //         do things;
</span><span class="cx"> //     });
</span><span class="lines">@@ -102,10 +105,10 @@
</span><span class="cx"> // On the other hand, if you use something like ParallelHelperClient::runTaskInParallel() (or its
</span><span class="cx"> // helper, runFunctionInParallel(), which does createSharedTask() for you), then it can be OK to
</span><span class="cx"> // use [&amp;], since the stack frame will remain live for the entire duration of the task's lifetime.
</span><del>-template&lt;typename Functor&gt;
-Ref&lt;SharedTask&gt; createSharedTask(const Functor&amp; functor)
</del><ins>+template&lt;typename FunctionType, typename Functor&gt;
+Ref&lt;SharedTask&lt;FunctionType&gt;&gt; createSharedTask(const Functor&amp; functor)
</ins><span class="cx"> {
</span><del>-    return adoptRef(*new SharedTaskFunctor&lt;Functor&gt;(functor));
</del><ins>+    return adoptRef(*new SharedTaskFunctor&lt;FunctionType, Functor&gt;(functor));
</ins><span class="cx"> }
</span><span class="cx"> 
</span><span class="cx"> } // namespace WTF
</span></span></pre>
</div>
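<p>A minimal sketch of the new non-<code>void()</code> task shape that this change enables; the task below is hypothetical and not part of the patch:</p>
<pre>
// Illustrative only: the function type is now spelled out at creation time, and
// run() forwards its arguments and returns the functor's result.
auto task = createSharedTask&lt;int(double)&gt;(
    [] (double value) -&gt; int {
        return static_cast&lt;int&gt;(value * 2);
    });

int result = task-&gt;run(3.5); // result == 7
</pre>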
</div>

</body>
</html>